WorldWideScience

Sample records for auditory perceptual learning

  1. Motivation and intelligence drive auditory perceptual learning.

    Science.gov (United States)

    Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R

    2010-03-23

    Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect on frequency discrimination learning of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones. Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.

  2. Auditory temporal perceptual learning and transfer in Chinese-speaking children with developmental dyslexia.

    Science.gov (United States)

    Zhang, Manli; Xie, Weiyi; Xu, Yanzhi; Meng, Xiangzhi

    2018-03-01

    Perceptual learning refers to the improvement of perceptual performance as a function of training. Recent studies found that auditory perceptual learning may improve phonological skills in individuals with developmental dyslexia in alphabetic writing systems. However, whether auditory perceptual learning could also benefit the reading skills of those learning the Chinese logographic writing system is, as yet, unknown. The current study aimed to investigate the remediation effect of auditory temporal perceptual learning on Mandarin-speaking school children with developmental dyslexia. Thirty children with dyslexia were screened from a large pool of students in 3rd-5th grades. They completed a series of pretests and then were assigned to either a non-training control group or a training group. The training group worked on a pure-tone duration discrimination task for 7 sessions over 2 weeks, with 30 minutes per session. Post-tests immediately after training and a follow-up test 2 months later were conducted. Analyses revealed a significant training effect in the training group relative to the non-training group, as well as near transfer to the temporal interval discrimination task and far transfer to phonological awareness, character recognition and reading fluency. Importantly, the training effect and all the transfer effects were stable at the 2-month follow-up session. Further analyses found that a significant correlation between character recognition performance and learning rate existed mainly in the slow learning phase, the consolidation stage of perceptual learning, and that this effect was modulated by individuals' executive function. These findings indicate that adaptive auditory temporal perceptual learning can lead to learning and transfer effects on reading performance, and shed further light on the potential role of basic perceptual learning in the remediation and prevention of developmental dyslexia. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  4. Feedback Valence Affects Auditory Perceptual Learning Independently of Feedback Probability

    Science.gov (United States)

    Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.

    2015-01-01

    Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones, an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173
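
    As an illustrative aside (not part of the study's materials), the four training conditions amount to feedback schedules generated independently of the listener's responses; the Python sketch below, with made-up trial counts and names, makes that response-independence explicit.

        import random

        def feedback_schedule(n_trials, p_feedback, valence):
            """Generate response-independent feedback for the impossible task.
            p_feedback: proportion of trials carrying feedback (0.1 or 0.9).
            valence: 'positive' or 'negative'.
            Because the three tones are identical, feedback cannot track accuracy;
            it only conveys an overall impression of doing well or doing badly."""
            return [valence if random.random() < p_feedback else None
                    for _ in range(n_trials)]

        # The four conditions described in the abstract (400 trials is hypothetical).
        conditions = {
            "90% positive (implies doing well)":  feedback_schedule(400, 0.9, "positive"),
            "10% negative (implies doing well)":  feedback_schedule(400, 0.1, "negative"),
            "10% positive (implies doing badly)": feedback_schedule(400, 0.1, "positive"),
            "90% negative (implies doing badly)": feedback_schedule(400, 0.9, "negative"),
        }
        for name, schedule in conditions.items():
            shown = sum(f is not None for f in schedule)
            print(f"{name}: feedback on {shown} of 400 trials")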

  5. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    Science.gov (United States)

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  6. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing

    Directory of Open Access Journals (Sweden)

    Yu-Xuan Zhang

    2017-06-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  7. Less is more: latent learning is maximized by shorter training sessions in auditory perceptual learning.

    Science.gov (United States)

    Molloy, Katharine; Moore, David R; Sohoglu, Ediz; Amitay, Sygal

    2012-01-01

    The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (∼8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (∼4 hr in total) relative to a group that trained for a single session (∼1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. These findings suggest that efficient regimens should use short training sessions, and optimized spacing between sessions.

  8. Perceptual learning.

    Science.gov (United States)

    Seitz, Aaron R

    2017-07-10

    Perceptual learning refers to how experience can change the way we perceive sights, sounds, smells, tastes, and touch. Examples abound: music training improves our ability to discern tones; experience with food and wines can refine our palate (and unfortunately more quickly empty our wallet); and with years of training, radiologists learn to save lives by discerning subtle details of images that escape the notice of untrained viewers. We often take perceptual learning for granted, but it has a profound impact on how we perceive the world. In this Primer, I will explain how perceptual learning is transformative in guiding our perceptual processes, how research into perceptual learning provides insight into fundamental mechanisms of learning and brain processes, and how knowledge of perceptual learning can be used to develop more effective training approaches for those requiring expert perceptual skills or those in need of perceptual rehabilitation (such as individuals with poor vision). I will make a case that perceptual learning is ubiquitous, scientifically interesting, and has substantial practical utility to us all. Copyright © 2017. Published by Elsevier Ltd.

  9. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested on the DLF and DLT tasks, on which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  10. Auditory perceptual learning in adults with and without age-related hearing loss

    Directory of Open Access Journals (Sweden)

    Hanin eKarawani

    2016-02-01

    Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: 56 listeners (60-72 y/o; 35 participants with ARHL and 21 normal-hearing adults) participated in the study. The study used a crossover design with three groups (immediate-training, delayed-training, and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech-in-noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning but there was little generalization to

  11. Auditory-visual stimulus pairing enhances perceptual learning in a songbird.

    Science.gov (United States)

    Hultsch; Schleuss; Todt

    1999-07-01

    In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.

  12. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  13. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    Science.gov (United States)

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  14. Fast learning of simple perceptual discriminations reduces brain activation in working memory and in high-level auditory regions.

    Science.gov (United States)

    Daikhin, Luba; Ahissar, Merav

    2015-07-01

    Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.

  15. Perceptual learning: top to bottom.

    Science.gov (United States)

    Amitay, Sygal; Zhang, Yu-Xuan; Jones, Pete R; Moore, David R

    2014-06-01

    Perceptual learning has traditionally been portrayed as a bottom-up phenomenon that improves encoding or decoding of the trained stimulus. Cognitive skills such as attention and memory are thought to drive, guide and modulate learning but are, with notable exceptions, not generally considered to undergo changes themselves as a result of training with simple perceptual tasks. Moreover, shifts in threshold are interpreted as shifts in perceptual sensitivity, with no consideration for non-sensory factors (such as response bias) that may contribute to these changes. Accumulating evidence from our own research and others shows that perceptual learning is a conglomeration of effects, with training-induced changes ranging from the lowest (noise reduction in the phase locking of auditory signals) to the highest (working memory capacity) level of processing, and includes contributions from non-sensory factors that affect decision making even on a "simple" auditory task such as frequency discrimination. We discuss our emerging view of learning as a process that increases the signal-to-noise ratio associated with perceptual tasks by tackling noise sources and inefficiencies that cause performance bottlenecks, and present some implications for training populations other than young, smart, attentive and highly-motivated college students. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
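
    To illustrate the distinction drawn above between perceptual sensitivity and response bias, here is a minimal signal-detection sketch (a standard equal-variance Gaussian model; the hit and false-alarm rates are invented for illustration):

        from statistics import NormalDist

        def sensitivity_and_bias(hit_rate, fa_rate):
            """Equal-variance signal detection: d' indexes sensitivity, c indexes
            response bias (criterion). A shift in threshold can reflect a change in
            either quantity, which is why non-sensory factors must be considered when
            interpreting training-induced threshold changes."""
            z = NormalDist().inv_cdf
            d_prime = z(hit_rate) - z(fa_rate)
            criterion = -0.5 * (z(hit_rate) + z(fa_rate))
            return d_prime, criterion

        # Two hypothetical listeners: similar sensitivity, very different bias.
        print(sensitivity_and_bias(0.80, 0.20))  # roughly (1.68, 0.00): unbiased
        print(sensitivity_and_bias(0.95, 0.50))  # roughly (1.64, -0.82): liberal bias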

  16. Perceptual processing of a complex auditory context

    DEFF Research Database (Denmark)

    Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas

    The mismatch negativity (MMN) is a brain response elicited by deviants in a series of repetitive sounds. It reflects the perception of change in low-level sound features and reliably measures perceptual auditory memory. However, most MMN studies use simple tone patterns as stimuli, failing...

  17. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  18. Auditory perceptual load: A review.

    Science.gov (United States)

    Murphy, Sandra; Spence, Charles; Dalton, Polly

    2017-09-01

    Selective attention is a crucial mechanism in everyday life, allowing us to focus on a portion of incoming sensory information at the expense of other less relevant stimuli. The circumstances under which irrelevant stimuli are successfully ignored have been a topic of scientific interest for several decades now. Over the last 20 years, the perceptual load theory (e.g. Lavie, 1995) has provided one robust framework for understanding these effects within the visual modality. The suggestion is that successful selection depends on the perceptual demands imposed by the task-relevant information. However, less research has addressed the question of whether the same principles hold in audition and, to date, the existing literature provides a mixed picture. Here, we review the evidence for and against the applicability of perceptual load theory in hearing, concluding that this question still awaits resolution. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  20. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  1. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
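
    As a toy illustration of the reweighting idea (not the authors' model; all parameters below are arbitrary), learning can be cast as gradually aligning the readout weights from noisy sensory channels with the stimulus template:

        import numpy as np

        rng = np.random.default_rng(1)
        n_channels, lr, n_trials = 20, 0.02, 1000
        template = rng.standard_normal(n_channels)   # fixed sensory representation of the stimulus
        template /= np.linalg.norm(template)
        w = rng.standard_normal(n_channels) * 0.1    # initial readout (decision) weights

        def alignment(weights):
            """Cosine similarity between the readout weights and the stimulus template."""
            return float(weights @ template / (np.linalg.norm(weights) + 1e-12))

        print("alignment before training:", round(alignment(w), 2))
        for _ in range(n_trials):
            label = rng.choice([-1.0, 1.0])                          # which of two alternatives occurred
            x = label * template + rng.standard_normal(n_channels)   # noisy channel responses
            w += lr * label * x                                      # feedback-gated Hebbian reweighting
        print("alignment after training:", round(alignment(w), 2))   # approaches 1 as the readout is reweighted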

  2. Varieties of perceptual learning.

    Science.gov (United States)

    Mackintosh, N J

    2009-05-01

    Although most studies of perceptual learning in human participants have concentrated on the changes in perception assumed to be occurring, studies of nonhuman animals necessarily measure discrimination learning and generalization and remain agnostic on the question of whether changes in behavior reflect changes in perception. On the other hand, animal studies do make it easier to draw a distinction between supervised and unsupervised learning. Differential reinforcement will surely teach animals to attend to some features of a stimulus array rather than to others. But it is an open question as to whether such changes in attention underlie the enhanced discrimination seen after unreinforced exposure to such an array. I argue that most instances of unsupervised perceptual learning observed in animals (and at least some in human animals) are better explained by appeal to well-established principles and phenomena of associative learning theory: excitatory and inhibitory associations between stimulus elements, latent inhibition, and habituation.

  3. Acetylcholine and Olfactory Perceptual Learning

    Science.gov (United States)

    Wilson, Donald A.; Fletcher, Max L.; Sullivan, Regina M.

    2004-01-01

    Olfactory perceptual learning is a relatively long-term, learned increase in perceptual acuity, and has been described in both humans and animals. Data from recent electrophysiological studies have indicated that olfactory perceptual learning may be correlated with changes in odorant receptive fields of neurons in the olfactory bulb and piriform…

  4. Perceptual learning and human expertise.

    Science.gov (United States)

    Kellman, Philip J; Garrigan, Patrick

    2009-06-01

    We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual

  5. Perceptual learning and human expertise

    Science.gov (United States)

    Kellman, Philip J.; Garrigan, Patrick

    2009-06-01

    We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual

  6. Perceptual Fluency, Auditory Generation, and Metamemory: Analyzing the Perceptual Fluency Hypothesis in the Auditory Modality

    Science.gov (United States)

    Besken, Miri; Mulligan, Neil W.

    2014-01-01

    Judgments of learning (JOLs) are sometimes influenced by factors that do not impact actual memory performance. One recent proposal is that perceptual fluency during encoding affects metamemory and is a basis of metacognitive illusions. In the present experiments, participants identified aurally presented words that contained inter-spliced silences…

  7. Integrated approaches to perceptual learning.

    Science.gov (United States)

    Jacobs, Robert A

    2010-04-01

    New technologies and new ways of thinking have recently led to rapid expansions in the study of perceptual learning. We describe three themes shared by many of the nine articles included in this topic on Integrated Approaches to Perceptual Learning. First, perceptual learning cannot be studied on its own because it is closely linked to other aspects of cognition, such as attention, working memory, decision making, and conceptual knowledge. Second, perceptual learning is sensitive to both the stimulus properties of the environment in which an observer exists and to the properties of the tasks that the observer needs to perform. Moreover, the environmental and task properties can be characterized through their statistical regularities. Finally, the study of perceptual learning has important implications for society, including implications for science education and medical rehabilitation. Contributed articles relevant to each theme are summarized. Copyright © 2010 Cognitive Science Society, Inc.

  8. Structured Activities in Perceptual Training to Aid Retention of Visual and Auditory Images.

    Science.gov (United States)

    Graves, James W.; And Others

    The experimental program in structured activities in perceptual training was said to have two main objectives: to train children in retention of visual and auditory images and to increase the children's motivation to learn. Eight boys and girls participated in the program for two hours daily for a 10-week period. The age range was 7.0 to 12.10…

  9. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  10. Iterative perceptual learning for social behavior synthesis

    NARCIS (Netherlands)

    de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We introduce Iterative Perceptual Learning (IPL), a novel approach to learn computational models for social behavior synthesis from corpora of human–human interactions. IPL combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized

  11. Iterative Perceptual Learning for Social Behavior Synthesis

    NARCIS (Netherlands)

    de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We introduce Iterative Perceptual Learning (IPL), a novel approach for learning computational models for social behavior synthesis from corpora of human-human interactions. The IPL approach combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of

  12. Perceptual learning of acoustic noise generates memory-evoked potentials.

    Science.gov (United States)

    Andrillon, Thomas; Kouider, Sid; Agus, Trevor; Pressnitzer, Daniel

    2015-11-02

    Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics. Copyright © 2015 Elsevier Ltd. All rights reserved.
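
    For orientation only, a rough sketch of the kind of repeated-noise stimulus referred to above (loosely modeled on the cited noise-learning paradigm; the sampling rate, durations, and trial structure are assumptions, not the authors' exact stimuli):

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 16000                              # assumed sampling rate (Hz)
        frozen = rng.standard_normal(fs // 2)   # one fixed 0.5-s Gaussian-noise token

        def noise_trial(repeated, n_snippets=4):
            """Concatenate noise snippets into one trial. In 'repeated' trials the same
            frozen token recurs; in 'fresh' trials every snippet is newly drawn. After
            repeated exposure, listeners come to recognize the frozen token even though
            it contains no salient acoustic landmarks."""
            snippets = [frozen if repeated else rng.standard_normal(fs // 2)
                        for _ in range(n_snippets)]
            return np.concatenate(snippets)

        print(noise_trial(repeated=True).shape, noise_trial(repeated=False).shape)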

  13. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  14. Constraints on Perceptual Learning: Objects and Dimensions.

    Science.gov (United States)

    Bedford, Felice L.

    1995-01-01

    Addresses two questions that may be unique to perceptual learning: What are the circumstances that produce learning? and What is the content of learning? Suggests a critical principle for each question. Provides a discussion of perceptual learning theory, how learning occurs, and what gets learned. Includes a 121-item bibliography. (DR)

  15. Perceptual learning modifies untrained pursuit eye movements

    OpenAIRE

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training...

  16. Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.

    Science.gov (United States)

    Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas

    2015-12-09

    Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory
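
    A minimal sketch of the psychometric-slope logic mentioned above (a logistic form and all parameter values are assumed for illustration): a steeper slope means the probe is categorized more consistently, i.e., the remembered object is represented more precisely.

        import math

        def p_respond_higher(delta, threshold=0.0, slope=1.0):
            """Probability of judging the probe as higher in pitch than the remembered
            syllable, as a function of the pitch difference `delta` (arbitrary units).
            The slope parameter indexes the precision of the memory representation."""
            return 1.0 / (1.0 + math.exp(-slope * (delta - threshold)))

        deltas = [-2, -1, -0.5, 0, 0.5, 1, 2]
        for label, slope in [("neutral retrocue (shallow slope)", 1.0),
                             ("valid retrocue (steep slope)", 3.0)]:
            print(label, [round(p_respond_higher(d, slope=slope), 2) for d in deltas])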

  17. Perceptual learning modifies untrained pursuit eye movements.

    Science.gov (United States)

    Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa

    2014-07-07

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.

  18. Constraints on the Transfer of Perceptual Learning in Accented Speech

    Science.gov (United States)

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:th]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  19. The Effects of Meaning-Based Auditory Training on Behavioral Measures of Perceptual Effort in Individuals with Impaired Hearing.

    Science.gov (United States)

    Sommers, Mitchell S; Tye-Murray, Nancy; Barcroft, Joe; Spehar, Brent P

    2015-11-01

    There has been considerable interest in measuring the perceptual effort required to understand speech, as well as to identify factors that might reduce such effort. In the current study, we investigated whether, in addition to improving speech intelligibility, auditory training also could reduce perceptual or listening effort. Perceptual effort was assessed using a modified version of the n-back memory task in which participants heard lists of words presented without background noise and were asked to continually update their memory of the three most recently presented words. Perceptual effort was indexed by memory for items in the three-back position immediately before, immediately after, and 3 months after participants completed the Computerized Learning Exercises for Aural Rehabilitation (clEAR), a 12-session computerized auditory training program. Immediate posttraining measures of perceptual effort indicated that participants could remember approximately one additional word compared to pretraining. Moreover, some training gains were retained at the 3-month follow-up, as indicated by significantly greater recall for the three-back item at the 3-month measurement than at pretest. There was a small but significant correlation between gains in intelligibility and gains in perceptual effort. The findings are discussed within the framework of a limited-capacity speech perception system.
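
    Scoring the modified n-back task described above amounts to checking, at each probe, whether the listener reported the word presented three items earlier. A minimal illustrative sketch (Python; the word lists are invented):

        def three_back_accuracy(presented, reported):
            # presented: words in order of presentation
            # reported[i]: the listener's answer at position i to "what was the
            # word three items back?" (None for the first three positions)
            pairs = [(presented[i - 3], reported[i])
                     for i in range(3, len(presented)) if reported[i] is not None]
            return sum(target == answer for target, answer in pairs) / len(pairs)

        words = ["lake", "barn", "coat", "vine", "mill", "rose"]
        answers = [None, None, None, "lake", "barn", "rose"]  # final answer is wrong
        print(three_back_accuracy(words, answers))            # 0.67: 2 of 3 correct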

  20. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

    2010-06-01

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order

  1. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Science.gov (United States)

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be

  2. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the

  3. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, leading it towards normality; if unfavorable, they deviate it from its physiological course. Early sensorial experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice, and the modifications induced by it at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of language communicative intention after birth; the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  4. Relationship between perceptual learning in speech and statistical learning in younger and older adults

    Directory of Open Access Journals (Sweden)

    Thordis Marisa Neger

    2014-09-01

    Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.

  5. Multisensory perceptual learning is dependent upon task difficulty.

    Science.gov (United States)

    De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T

    2016-11-01

    There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and auditory stimulus (beep) presented in synchrony or at various stimulus onset asynchronies (SOAs) occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.

  6. Perceptual learning and adult cortical plasticity.

    Science.gov (United States)

    Gilbert, Charles D; Li, Wu; Piech, Valentin

    2009-06-15

    The visual cortex retains the capacity for experience-dependent changes, or plasticity, of cortical function and cortical circuitry, throughout life. These changes constitute the mechanism of perceptual learning in normal visual experience and in recovery of function after CNS damage. Such plasticity can be seen at multiple stages in the visual pathway, including primary visual cortex. The manifestation of the functional changes associated with perceptual learning involves both long-term modification of cortical circuits during the course of learning, and short-term dynamics in the functional properties of cortical neurons. These dynamics are subject to top-down influences of attention, expectation and perceptual task. As a consequence, each cortical area is an adaptive processor, altering its function in accordance with immediate perceptual demands.

  7. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  8. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question by studying two auditory perceptual decision-making situations. Using graph-theoretical network discovery, we traced the large-scale network
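
    The analysis style named here (power-envelope correlations followed by graph-theoretical summary measures) can be sketched as follows. This is not the study's pipeline; it assumes numpy, scipy and networkx, uses random data in place of source-projected MEG, and uses average clustering and global efficiency as crude stand-ins for segregation and integration:

        import numpy as np
        import networkx as nx
        from scipy.signal import butter, filtfilt, hilbert

        rng = np.random.default_rng(0)
        fs, n_nodes, n_samples = 250, 8, 5000
        data = rng.standard_normal((n_nodes, n_samples))   # stand-in source signals

        # Band-limit to the beta range (~16-28 Hz) and take the power envelope.
        b, a = butter(4, [16 / (fs / 2), 28 / (fs / 2)], btype="band")
        envelope = np.abs(hilbert(filtfilt(b, a, data, axis=1), axis=1)) ** 2

        # Power-envelope correlation matrix, thresholded into a binary graph.
        corr = np.corrcoef(envelope)
        np.fill_diagonal(corr, 0.0)
        adjacency = (corr > np.percentile(corr, 80)).astype(int)
        graph = nx.from_numpy_array(adjacency)

        segregation = nx.average_clustering(graph)   # higher = more clustered
        integration = nx.global_efficiency(graph)    # higher = shorter paths
        print(segregation, integration)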

  9. Topographic generalization of tactile perceptual learning.

    Science.gov (United States)

    Harrar, Vanessa; Spence, Charles; Makin, Tamar R

    2014-02-01

    Perceptual learning can improve our sensory abilities. Understanding its underlying mechanisms, in particular, when perceptual learning generalizes, has become a focus of research and controversy. Specifically, there is little consensus regarding the extent to which tactile perceptual learning generalizes across fingers. We measured tactile orientation discrimination abilities on 4 fingers (index and middle fingers of both hands), using psychophysical measures, before and after 4 training sessions on 1 finger. Given the somatotopic organization of the hand representation in the somatosensory cortex, the topography of the cortical areas underlying tactile perceptual learning can be inferred from the pattern of generalization across fingers; only fingers sharing cortical representation with the trained finger ought to improve with it. Following training, performance improved not only for the trained finger but also for its adjacent and homologous fingers. Although these fingers were not exposed to training, they nevertheless demonstrated similar levels of learning as the trained finger. Conversely, the performance of the finger that was neither adjacent nor homologous to the trained finger was unaffected by training, despite the fact that our procedure was designed to enhance generalization, as described in recent visual perceptual learning research. This pattern of improved performance is compatible with previous reports of neuronal receptive fields (RFs) in the primary somatosensory cortex (SI) spanning adjacent and homologous digits. We conclude that perceptual learning rooted in low-level cortex can still generalize, and suggest potential applications for the neurorehabilitation of syndromes associated with maladaptive plasticity in SI. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  10. Data Collection and Analysis Techniques for Evaluating the Perceptual Qualities of Auditory Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.

    1998-11-17

    This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
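
    As an illustration of one analysis this framework recommends, pairwise dissimilarity ratings for a small set of sounds can be submitted to multidimensional scaling. A hedged sketch (Python; scikit-learn assumed; the stimulus labels and ratings are invented, and Pathfinder analysis is omitted because it is not available in common libraries):

        import numpy as np
        from sklearn.manifold import MDS

        # Hypothetical mean dissimilarity ratings (0 = identical, 1 = maximally
        # different) for four auditory stimuli, averaged across listeners.
        labels = ["bell", "buzz", "chirp", "click"]
        dissimilarity = np.array([
            [0.00, 0.70, 0.55, 0.80],
            [0.70, 0.00, 0.65, 0.40],
            [0.55, 0.65, 0.00, 0.75],
            [0.80, 0.40, 0.75, 0.00],
        ])

        # Two-dimensional MDS on the precomputed dissimilarities.
        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
        coords = mds.fit_transform(dissimilarity)
        for name, (x, y) in zip(labels, coords):
            print(f"{name:6s} x={x:+.2f} y={y:+.2f}")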

  11. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  12. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise eROCHETTE

    2014-07-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  13. Factors of Predicted Learning Disorders and their Interaction with Attentional and Perceptual Training Procedures.

    Science.gov (United States)

    Friar, John T.

    Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractibility, auditory discrimination, and visual discrimination.…

  14. No counterpart of visual perceptual echoes in the auditory system.

    Directory of Open Access Journals (Sweden)

    Barkın İlhan

    It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo or reverberation at ~10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ~10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested if auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8-13 Hz) oscillations play a special role in vision.
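
    Reverse correlation of the kind mentioned here boils down to cross-correlating the random stimulus sequence with the simultaneously recorded EEG over a range of lags. A toy sketch (Python/numpy; the "EEG" is simulated by convolving the stimulus with a decaying 10-Hz kernel, so it is not real data):

        import numpy as np

        rng = np.random.default_rng(2)
        fs = 160                                   # sampling rate (Hz)
        n = fs * 6                                 # 6 s of stimulation
        stim = rng.standard_normal(n)              # random luminance/level sequence

        # Simulated EEG: stimulus filtered through a decaying 10-Hz "echo" + noise.
        t = np.arange(0, 1.0, 1 / fs)
        kernel = np.exp(-t / 0.3) * np.cos(2 * np.pi * 10 * t)
        eeg = np.convolve(stim, kernel)[:n] + rng.standard_normal(n)

        # Reverse correlation: correlate stimulus and EEG at 0-1 s lags.
        lags = np.arange(fs)                       # one second of lags
        irf = np.array([np.dot(stim[: n - lag], eeg[lag:]) / (n - lag)
                        for lag in lags])
        # irf should oscillate at ~10 Hz when an echo is present.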

  15. Resistance to Interference of Olfactory Perceptual Learning

    Science.gov (United States)

    Stevenson, Richard J.; Case, Trevor I.; Tomiczek, Caroline

    2007-01-01

    Olfactory memory is especially persistent. The current study explored whether this applies to a form of perceptual learning, in which experience of an odor mixture results in greater judged similarity between its elements. Experiment 1A contrasted 2 forms of interference procedure, "compound" (mixture AW, followed by presentation of new mixtures…

  16. Lexically guided perceptual learning in Mandarin Chinese

    NARCIS (Netherlands)

    Burchfield, L.A.; Luk, S.H.K.; Antoniou, M.; Cutler, A.

    2017-01-01

    Lexically guided perceptual learning refers to the use of lexical knowledge to retune speech categories and thereby adapt to a novel talker's pronunciation. This adaptation has been extensively documented, but primarily for segmental-based learning in English and Dutch. In languages with lexical

  17. Age Differences in Voice Evaluation: From Auditory-Perceptual Evaluation to Social Interactions

    Science.gov (United States)

    Lortie, Catherine L.; Deschamps, Isabelle; Guitton, Matthieu J.; Tremblay, Pascale

    2018-01-01

    Purpose: The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial…

  18. Crossmodal Perceptual Learning and Sensory Substitution

    Directory of Open Access Journals (Sweden)

    Michael J Proulx

    2011-10-01

    A sensory substitution device for blind persons aims to provide the missing visual input by converting images into a form that another modality can perceive, such as sound. Here I will discuss the perceptual learning and attentional mechanisms necessary for interpreting sounds produced by a device (The vOICe) in a visuospatial manner. Although some aspects of the conversion, such as relating vertical location to pitch, rely on natural crossmodal mappings, the extensive training required suggests that synthetic mappings are required to generalize perceptual learning to new objects and environments, and ultimately to experience visual qualia. Here I will discuss the effects of the conversion and training on perception and attention that demonstrate the synthetic nature of learning the crossmodal mapping. Sensorimotor experience may be required to facilitate learning, develop expertise, and to develop a form of synthetic synaesthesia.

  19. Nicotine facilitates memory consolidation in perceptual learning.

    Science.gov (United States)

    Beer, Anton L; Vartak, Devavrat; Greenlee, Mark W

    2013-01-01

    Perceptual learning is a special type of non-declarative learning that involves experience-dependent plasticity in sensory cortices. The cholinergic system is known to modulate declarative learning. In particular, reduced levels or efficacy of the neurotransmitter acetylcholine were found to facilitate declarative memory consolidation. However, little is known about the role of the cholinergic system in memory consolidation of non-declarative learning. Here we compared two groups of non-smoking men who learned a visual texture discrimination task (TDT). One group received chewing tobacco containing nicotine for 1 h directly following the TDT training. The other group received a similar tasting control substance without nicotine. Electroencephalographic recordings during substance consumption showed reduced alpha activity and P300 latencies in the nicotine group compared to the control group. When re-tested on the TDT the following day, both groups responded more accurately and more rapidly than during training. These improvements were specific to the retinal location and orientation of the texture elements of the TDT suggesting that learning involved early visual cortex. A group comparison showed that learning effects were more pronounced in the nicotine group than in the control group. These findings suggest that oral consumption of nicotine enhances the efficacy of nicotinic acetylcholine receptors. Our findings further suggest that enhanced efficacy of the cholinergic system facilitates memory consolidation in perceptual learning (and possibly other types of non-declarative learning). In that regard acetylcholine seems to affect consolidation processes in perceptual learning in a different manner than in declarative learning. Alternatively, our findings might reflect dose-dependent cholinergic modulation of memory consolidation. This article is part of a Special Issue entitled 'Cognitive Enhancers'. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Auditory event-related potentials associated with perceptual reversals of bistable pitch motion.

    Science.gov (United States)

    Davidson, Gray D; Pitts, Michael A

    2014-01-01

    Previous event-related potential (ERP) experiments have consistently identified two components associated with perceptual transitions of bistable visual stimuli, the "reversal negativity" (RN) and the "late positive complex" (LPC). The RN (~200 ms post-stimulus, bilateral occipital-parietal distribution) is thought to reflect transitions between neural representations that form the moment-to-moment contents of conscious perception, while the LPC (~400 ms, central-parietal) is considered an index of post-perceptual processing related to accessing and reporting one's percept. To explore the generality of these components across sensory modalities, the present experiment utilized a novel bistable auditory stimulus. Pairs of complex tones with ambiguous pitch relationships were presented sequentially while subjects reported whether they perceived the tone pairs as ascending or descending in pitch. ERPs elicited by the tones were compared according to whether perceived pitch motion changed direction or remained the same across successive trials. An auditory reversal negativity (aRN) component was evident at ~170 ms post-stimulus over bilateral fronto-central scalp locations. An auditory LPC component (aLPC) was evident at subsequent latencies (~350 ms, fronto-central distribution). These two components may be auditory analogs of the visual RN and LPC, suggesting functionally equivalent but anatomically distinct processes in auditory vs. visual bistable perception.

  1. Young Drivers Perceptual Learning Styles Preferences and Traffic Accidents

    Directory of Open Access Journals (Sweden)

    Svetlana Čičević

    2011-05-01

    Young drivers are over-represented in crash and fatality statistics. One way of dealing with this problem is to achieve primary prevention through driver education and training. Factors of traffic accidents related to gender, age, driving experience, and self-assessments of safety, and their relationship to perceptual learning styles (LS) preferences, have been analyzed in this study. The results show that the auditory style is the most prominent LS. Drivers in general, as well as drivers without traffic accidents, favour visual and tactile LS. Both inexperienced and highly experienced drivers show a relatively high preference for the kinaesthetic style. Yet, taking driving experience into account, we could see that the role of the kinaesthetic LS is reduced, since individual LS has become more important. Based on the results of this study it can be concluded that a multivariate and multistage approach to driver education, taking into account differences in LS preferences, would be highly beneficial for traffic safety.

  2. Auditory-Perceptual Evaluation of Dysphonia: A Comparison Between Narrow and Broad Terminology Systems

    DEFF Research Database (Denmark)

    Iwarsson, Jenny

    2017-01-01

    of the terminology used in the multiparameter Danish Dysphonia Assessment (DDA) approach into the five-parameter GRBAS system. Methods. Voice samples illustrating type and grade of the voice qualities included in DDA were rated by five speech language pathologists using the GRBAS system with the aim of estimating...... terms and antagonists, reflecting muscular hypo- and hyperfunction. Key Words: Auditory-perceptual voice analysis–Dysphonia–GRBAS–Listening test–Voice ratings....

  3. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    Science.gov (United States)

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  4. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  5. Ambiguity Tolerance and Perceptual Learning Styles of Chinese EFL Learners

    Science.gov (United States)

    Li, Haishan; He, Qingshun

    2016-01-01

    Ambiguity tolerance and perceptual learning styles are the two influential elements showing individual differences in EFL learning. This research is intended to explore the relationship between Chinese EFL learners' ambiguity tolerance and their preferred perceptual learning styles. The findings include (1) the learners are sensitive to English…

  6. Comparison of perceptual properties of auditory streaming between spectral and amplitude modulation domains.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2017-07-01

    The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔF TONE , TONE condition) but also in the amplitude modulation rate ("AM cue": ΔF AM , AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔF AM and ΔF TONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
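
    For readers unfamiliar with the stimulus, an ABA_ triplet sequence carrying a spectral (ΔF TONE) cue can be synthesized in a few lines. The sketch is only illustrative (Python/numpy; tone duration, ramps and the 6-semitone separation are arbitrary choices, not this study's parameters):

        import numpy as np

        def aba_sequence(f_a=500.0, delta_semitones=6.0, tone_ms=50,
                         n_triplets=20, fs=44100):
            # B is delta_semitones above A; "_" is a silent gap of one tone duration.
            f_b = f_a * 2 ** (delta_semitones / 12)
            t = np.arange(int(fs * tone_ms / 1000)) / fs
            ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)  # 5-ms ramps
            tone_a = np.sin(2 * np.pi * f_a * t) * ramp
            tone_b = np.sin(2 * np.pi * f_b * t) * ramp
            gap = np.zeros_like(tone_a)
            return np.tile(np.concatenate([tone_a, tone_b, tone_a, gap]), n_triplets)

        sequence = aba_sequence()
        print(sequence.shape)   # samples for 20 ABA_ triplets at 44.1 kHz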

  7. Does perceptual learning require consciousness or attention?

    Science.gov (United States)

    Meuwese, Julia D I; Post, Ruben A G; Scholte, H Steven; Lamme, Victor A F

    2013-10-01

    It has been proposed that visual attention and consciousness are separate [Koch, C., & Tsuchiya, N. Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16-22, 2007] and possibly even orthogonal processes [Lamme, V. A. F. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12-18, 2003]. Attention and consciousness converge when conscious visual percepts are attended and hence become available for conscious report. In such a view, a lack of reportability can have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of reportability [Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555-560, 2009; Seitz, A. R., Kim, D., & Watanabe, T. Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700-707, 2009; Seitz, A. R., & Watanabe, T. Is subliminal learning really passive? Nature, 422, 36, 2003; Watanabe, T., Náñez, J. E., & Sasaki, Y. Perceptual learning without perception. Nature, 413, 844-848, 2001], but it is unclear which of the two ingredients-consciousness or attention-is not necessary for learning. We presented textured figure-ground stimuli and manipulated reportability either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hr later), learning was assessed neurally and behaviorally, via differences in figure-ground ERPs and via a detection task. Behavioral and neural learning effects were found for stimuli presented in the inattention paradigm and not for masked stimuli. Interestingly, the behavioral learning effect only became apparent when performance feedback was given on the task to measure learning

  8. Perceptual learning during action video game playing.

    Science.gov (United States)

    Green, C Shawn; Li, Renjie; Bavelier, Daphne

    2010-04-01

    Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.

  9. Perceptual learning: toward a comprehensive theory.

    Science.gov (United States)

    Watanabe, Takeo; Sasaki, Yuka

    2015-01-03

    Visual perceptual learning (VPL) is a long-term increase in performance resulting from visual perceptual experience. Task-relevant VPL of a feature results from training of a task on the feature relevant to the task. Task-irrelevant VPL arises as a result of exposure to the feature irrelevant to the trained task. At least two serious problems exist. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has ever explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in a representation of the learned feature, and task-based plasticity is a change in processing of the trained task. Although the two types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results could be explained.

  10. Auditory Multi-Stability: Idiosyncratic Perceptual Switching Patterns, Executive Functions and Personality Traits.

    Directory of Open Access Journals (Sweden)

    Dávid Farkas

    Multi-stability refers to the phenomenon of perception stochastically switching between possible interpretations of an unchanging stimulus. Despite considerable variability, individuals show stable idiosyncratic patterns of switching between alternative perceptions in the auditory streaming paradigm. We explored correlates of the individual switching patterns with executive functions, personality traits, and creativity. The main dimensions on which individual switching patterns differed from each other were identified using multidimensional scaling. Individuals with high scores on the dimension explaining the largest portion of the inter-individual variance switched more often between the alternative perceptions than those with low scores. They also perceived the most unusual interpretation more often, and experienced all perceptual alternatives with a shorter delay from stimulus onset. The ego-resiliency personality trait, which reflects a tendency for adaptive flexibility and experience seeking, was significantly positively related to this dimension. Taking these results together, we suggest that this dimension may reflect the individual's tendency for exploring the auditory environment. Executive functions were significantly related to some of the variables describing global properties of the switching patterns, such as the average number of switches. Thus individual patterns of perceptual switching in the auditory streaming paradigm are related to some personality traits and executive functions.

  11. Perceptual-Auditory and Acoustical Analysis of the Voices of Transgender Women.

    Science.gov (United States)

    Schwarz, Karine; Fontanari, Anna Martha Vaitses; Costa, Angelo Brandelli; Soll, Bianca Machado Borba; da Silva, Dhiordan Cardoso; de Sá Villas-Bôas, Anna Paula; Cielo, Carla Aparecida; Bastilha, Gabriele Rodrigues; Ribeiro, Vanessa Veis; Dorfman, Maria Elza Kazumi Yamaguti; Lobato, Maria Inês Rodrigues

    2017-09-28

    Voice is an important gender marker in the transition process as a transgender individual accepts a new gender identity. The objectives of this study were to describe and relate aspects of a perceptual-auditory analysis and the fundamental frequency (F0) of male-to-female (MtF) transsexual individuals. A case-control study was carried out with individuals aged 19-52 years who attended the Gender Identity Program of the Hospital de Clínicas of Porto Alegre. Vocal recordings from the MtF transgender and cisgender individuals (vowel /a:/ and six phrases of the Consensus Auditory-Perceptual Evaluation of Voice [CAPE-V]) were edited and randomly coded before storage in a Dropbox folder. The voices (vowel /a:/) were analyzed by consensus on the same day by two speech therapist judges who had more than 10 years of experience in the voice area, using the GRBASI perceptual-auditory vocal evaluation scale. Acoustic analysis of the voices was performed using the advanced Multi-Dimensional Voice Program software. The resonance focus and the degrees of masculinity and femininity for each voice recording were determined by the same judges by listening to the CAPE-V phrases. There were significant differences between the groups regarding a greater frequency of subjects with F0 between 80 and 150 Hz (P = 0.003), and a greater frequency of hypernasal resonant focus (P < 0.001) in the MtF cases and greater frequency of subjects with absence of roughness (P = 0.031) in the control group. The MtF group of individuals showed altered vertical resonant focus, more masculine voices, and lower fundamental frequencies. The control group showed a significant absence of roughness. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
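
    The F0 comparison reported above was obtained with the Multi-Dimensional Voice Program; as a rough illustration of how a fundamental frequency near the 80-150 Hz band might be estimated from a sustained /a:/, an autocorrelation sketch is shown below (Python, using only numpy; the "vowel" is a synthetic 120-Hz sawtooth, not a recording):

        import numpy as np

        def estimate_f0(signal, fs, fmin=70.0, fmax=350.0):
            # Rough autocorrelation-based F0 estimate for a sustained vowel.
            signal = signal - signal.mean()
            ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
            lag_min, lag_max = int(fs / fmax), int(fs / fmin)
            best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
            return fs / best_lag

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        vowel = ((120 * t) % 1.0) - 0.5            # synthetic 120-Hz sawtooth
        f0 = estimate_f0(vowel, fs)
        print(f"F0 = {f0:.1f} Hz; within 80-150 Hz: {80 <= f0 <= 150}")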

  12. Salicylate-Induced Auditory Perceptual Disorders and Plastic Changes in Nonclassical Auditory Centers in Rats

    Directory of Open Access Journals (Sweden)

    Guang-Di Chen

    2014-01-01

    Full Text Available Previous studies have shown that sodium salicylate (SS activates not only central auditory structures, but also nonauditory regions associated with emotion and memory. To identify electrophysiological changes in the nonauditory regions, we recorded sound-evoked local field potentials and multiunit discharges from the striatum, amygdala, hippocampus, and cingulate cortex after SS-treatment. The SS-treatment produced behavioral evidence of tinnitus and hyperacusis. Physiologically, the treatment significantly enhanced sound-evoked neural activity in the striatum, amygdala, and hippocampus, but not in the cingulate. The enhanced sound evoked response could be linked to the hyperacusis-like behavior. Further analysis showed that the enhancement of sound-evoked activity occurred predominantly at the midfrequencies, likely reflecting shifts of neurons towards the midfrequency range after SS-treatment as observed in our previous studies in the auditory cortex and amygdala. The increased number of midfrequency neurons would lead to a relative higher number of total spontaneous discharges in the midfrequency region, even though the mean discharge rate of each neuron may not increase. The tonotopical overactivity in the midfrequency region in quiet may potentially lead to tonal sensation of midfrequency (the tinnitus. The neural changes in the amygdala and hippocampus may also contribute to the negative effect that patients associate with their tinnitus.

  13. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    Science.gov (United States)

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text with all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The principles for the design of a speech voice sample text are: the short text should include the 21 initials and 39 finals, so as to cover all the phonemes in Mandarin. Also, the short text should be meaningful. A short text was produced. It had 155 Chinese words, and included 21 initials and 38 finals (the final, ê, was not included because it was rarely used in Mandarin). Also, the text covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P text were statistically not similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was produced. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.
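
    The sample-population similarity check described here amounts to correlating the constituent ratios of phoneme categories in the short text with their ratios in Mandarin at large. A minimal sketch (Python with scipy; the ratios below are invented for illustration, not the study's counts):

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical constituent ratios of six initials: sample text vs. Mandarin.
        sample_ratios = np.array([0.071, 0.048, 0.092, 0.035, 0.060, 0.024])
        mandarin_ratios = np.array([0.068, 0.051, 0.088, 0.031, 0.064, 0.027])

        r, p = pearsonr(sample_ratios, mandarin_ratios)
        print(f"r = {r:.3f}, p = {p:.3f}")   # a high r indicates similar composition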

  14. Perceptual Organization of Visual Structure Requires a Flexible Learning Mechanism

    Science.gov (United States)

    Aslin, Richard N.

    2011-01-01

    Bhatt and Quinn (2011) provide a compelling and comprehensive review of empirical evidence that supports the operation of principles of perceptual organization in young infants. They also have provided a comprehensive list of experiences that could serve to trigger the learning of at least some of these principles of perceptual organization, and…

  15. Mode transition and change in variable use in perceptual learning

    NARCIS (Netherlands)

    Hajnal, A; Grocki, M; Jacobs, DM; Zaal, FTJM; Michaels, CF

    2006-01-01

    Runeson, Juslin, and Olsson (2000) proposed (a) that perceptual learning entails a transition from an inferential to a direct-perceptual mode of apprehension, and (b) that relative confidence - the difference between estimated and actual performance - indicates whether apprehension is inferential or

  16. Mode transition and change in variable use in perceptual learning

    NARCIS (Netherlands)

    Hajnal, A.; Grocki, M.; Jacobs, D.M.; Zaal, F.T.J.M.; Michaels, C.F.

    2006-01-01

    Runeson, Juslin, and Olsson (2000) proposed (a) that perceptual learning entails a transition from an inferential to a direct-perceptual mode of apprehension, and (b) that relative confidence - the difference between estimated and actual performance - indicates whether apprehension is inferential or

  17. Shared mechanisms of perceptual learning and decision making.

    Science.gov (United States)

    Law, Chi-Tat; Gold, Joshua I

    2010-04-01

    Perceptual decisions require the brain to weigh noisy evidence from sensory neurons to form categorical judgments that guide behavior. Here we review behavioral and neurophysiological findings suggesting that at least some forms of perceptual learning do not appear to affect the response properties of neurons that represent the sensory evidence. Instead, improved perceptual performance results from changes in how the sensory evidence is selected and weighed to form the decision. We discuss the implications of this idea for possible sites and mechanisms of training-induced improvements in perceptual processing in the brain. Copyright © 2009 Cognitive Science Society, Inc.

  18. Effects of consensus training on the reliability of auditory perceptual ratings of voice quality.

    Science.gov (United States)

    Iwarsson, Jenny; Reinholt Petersen, Niels

    2012-05-01

    This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in memory common and more robust, which is of great importance to reduce the variability of auditory perceptual ratings. A prospective design with testing before and after training. Thirteen students of audiologopedics served as listening subjects. The ratings were made using a multidimensional protocol with four-point equal-appearing interval scales. The stimuli consisted of text reading by authentic dysphonic patients. The consensus training for each perceptual voice parameter included (1) definition, (2) underlying physiology, (3) presentation of carefully selected sound examples representing the parameter in three different grades followed by group discussions of perceived characteristics, and (4) practical exercises including imitation to make use of the listeners' proprioception. Intrarater reliability and agreement showed a marked improvement for intermittent aphonia but not for vocal fry. Interrater reliability was high for most parameters before training with a slight increase after training. Interrater agreement showed marked increases for most voice quality parameters as a result of the training. The results support the recommendation of specific consensus training, including use of a reference voice sample material, to calibrate, equalize, and stabilize the internal standards held in memory by the listeners. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
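
    Interrater agreement on four-point scales of this kind is commonly summarized with a weighted kappa computed before and after training. A hedged sketch (Python with scikit-learn; the ratings are invented and the study's actual reliability statistics may differ):

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical ratings of 12 voice samples on a 0-3 scale by two listeners.
        rater1_pre = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1, 0, 2]
        rater2_pre = [1, 1, 3, 2, 0, 2, 1, 3, 1, 2, 0, 3]
        rater1_post = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1, 0, 2]
        rater2_post = [0, 1, 2, 3, 1, 2, 1, 3, 2, 1, 0, 2]

        kappa_pre = cohen_kappa_score(rater1_pre, rater2_pre, weights="quadratic")
        kappa_post = cohen_kappa_score(rater1_post, rater2_post, weights="quadratic")
        print(f"weighted kappa: before = {kappa_pre:.2f}, after = {kappa_post:.2f}")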

  19. Supramodal processing optimizes visual perceptual learning and plasticity.

    Science.gov (United States)

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory

  20. Is the auditory evoked P2 response a biomarker of learning?

    Directory of Open Access Journals (Sweden)

    Kelly eTremblay

    2014-02-01

    Full Text Available Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography and magnetoencephalography have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600-900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre

  1. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    Science.gov (United States)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominately focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  2. Effects of regular aerobic exercise on visual perceptual learning.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  5. Visuo-perceptual capabilities predict sensitivity for coinciding auditory and visual transients in multi-element displays.

    Science.gov (United States)

    Meyerhoff, Hauke S; Gehrer, Nina A

    2017-01-01

    In order to obtain a coherent representation of the outside world, auditory and visual information are integrated during human information processing. There is remarkable variance among observers in the capability to integrate auditory and visual information. Here, we propose that visuo-perceptual capabilities predict detection performance for audiovisually coinciding transients in multi-element displays due to severe capacity limitations in audiovisual integration. In the reported experiment, we employed an individual differences approach in order to investigate this hypothesis. Therefore, we measured performance in a useful-field-of-view task that captures detection performance for briefly presented stimuli across a large perceptual field. Furthermore, we measured sensitivity for visual direction changes that coincide with tones within the same participants. Our results show that individual differences in visuo-perceptual capabilities predicted sensitivity for the presence of audiovisually synchronous events among competing visual stimuli. To ensure that this correlation does not stem from superordinate factors, we also tested performance in an unrelated working memory task. Performance in this task was independent of sensitivity for the presence of audiovisually synchronous events. Our findings strengthen the proposed link between visuo-perceptual capabilities and audiovisual integration. The results also suggest that basic visuo-perceptual capabilities provide the basis for the subsequent integration of auditory and visual information.

  6. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  7. Visual perceptual load reduces auditory detection in typically developing individuals but not in individuals with autism spectrum disorders.

    Science.gov (United States)

    Tillmann, Julian; Swettenham, John

    2017-02-01

    Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction, for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
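
    Detection sensitivity (d') of the kind reported above is derived from hit and false-alarm rates via the inverse normal transform. A minimal sketch, using illustrative rates rather than the study's data:

    ```python
    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate, n_trials):
        """Signal-detection d' with a standard correction for rates of exactly 0 or 1."""
        clip = lambda p: min(max(p, 1.0 / (2 * n_trials)), 1 - 1.0 / (2 * n_trials))
        return norm.ppf(clip(hit_rate)) - norm.ppf(clip(fa_rate))

    # Illustrative values: auditory tone detection under low vs. high visual load
    print(d_prime(0.85, 0.15, n_trials=100))  # low load: d' ≈ 2.07
    print(d_prime(0.60, 0.40, n_trials=100))  # high load: d' ≈ 0.51
    ```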

  8. Auditory-Perceptual and Acoustic Methods in Measuring Dysphonia Severity of Korean Speech.

    Science.gov (United States)

    Maryn, Youri; Kim, Hyung-Tae; Kim, Jaeock

    2016-09-01

    The purpose of this study was to explore the criterion-related concurrent validity of two standardized auditory-perceptual rating protocols and the Acoustic Voice Quality Index (AVQI) for measuring dysphonia severity in Korean speech. Sixty native Korean subjects with various voice disorders were asked to sustain the vowel [a:] and to read aloud the Korean text "Walk." A 3-second midvowel portion of the sustained vowel and two sentences (with 25 syllables) were edited, concatenated, and analyzed according to methods described elsewhere. From 56 participants, both continuous speech and sustained vowel recordings had sufficiently high signal-to-noise ratios (35.5 dB and 37 dB on average, respectively) and were therefore subjected to further dysphonia severity analysis with (1) "G" or Grade from the GRBAS protocol, (2) "OS" or Overall Severity from the Consensus Auditory-Perceptual Evaluation of Voice protocol, and (3) AVQI. First, high correlations were found between G and OS (rS = 0.955 for sustained vowels; rS = 0.965 for continuous speech). Second, the AVQI showed a strong correlation with G (rS = 0.911) as well as OS (rP = 0.924). These findings are in agreement with similar studies dealing with continuous speech in other languages. The present study highlights the criterion-related concurrent validity of these methods in Korean speech. Furthermore, it supports the cross-linguistic robustness of the AVQI as a valid and objective marker of overall dysphonia severity. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
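
    The concurrent-validity analysis rests on rank correlations between the perceptual ratings and the acoustic index. A small sketch of that computation with invented scores (the values below are not from the study):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical severity data for ten speakers: auditory-perceptual "G" (Grade, 0-3)
    # ratings and acoustic AVQI scores (higher = more dysphonic).
    g_ratings = np.array([0, 1, 1, 2, 2, 3, 3, 0, 2, 1])
    avqi      = np.array([2.1, 4.0, 3.6, 5.2, 5.8, 7.4, 6.9, 2.5, 5.5, 3.9])

    rho, p = spearmanr(g_ratings, avqi)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
    ```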

  9. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
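
    The "conjunctive rule-based strategy" identified by the computational modeling can be pictured as a decision rule that requires criteria on both stimulus dimensions to be satisfied at once. A hypothetical sketch, with made-up dimension labels and criterion values rather than the study's fitted parameters:

    ```python
    def conjunctive_rule(spectral_mod, temporal_mod,
                         spectral_criterion=1.0, temporal_criterion=8.0):
        """Assign a ripple sound to category A only if it exceeds BOTH
        single-dimension criteria; otherwise category B (hypothetical values)."""
        if spectral_mod > spectral_criterion and temporal_mod > temporal_criterion:
            return "A"
        return "B"

    # Illustrative stimuli: (spectral modulation in cyc/oct, temporal modulation in Hz)
    for stim in [(1.4, 12.0), (1.4, 4.0), (0.6, 12.0)]:
        print(stim, "->", conjunctive_rule(*stim))
    ```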

  10. Is sequence awareness mandatory for perceptual sequence learning: An assessment using a pure perceptual sequence learning design.

    Science.gov (United States)

    Deroost, Natacha; Coomans, Daphné

    2018-02-01

    We examined the role of sequence awareness in a pure perceptual sequence learning design. Participants had to react to the target's colour that changed according to a perceptual sequence. By varying the mapping of the target's colour onto the response keys, motor responses changed randomly. The effect of sequence awareness on perceptual sequence learning was determined by manipulating the learning instructions (explicit versus implicit) and assessing the amount of sequence awareness after the experiment. In the explicit instruction condition (n = 15), participants were instructed to intentionally search for the colour sequence, whereas in the implicit instruction condition (n = 15), they were left uninformed about the sequenced nature of the task. Sequence awareness after the sequence learning task was tested by means of a questionnaire and the process-dissociation-procedure. The results showed that the instruction manipulation had no effect on the amount of perceptual sequence learning. Based on their report to have actively applied their sequence knowledge during the experiment, participants were subsequently regrouped in a sequence strategy group (n = 14, of which 4 participants from the implicit instruction condition and 10 participants from the explicit instruction condition) and a no-sequence strategy group (n = 16, of which 11 participants from the implicit instruction condition and 5 participants from the explicit instruction condition). Only participants of the sequence strategy group showed reliable perceptual sequence learning and sequence awareness. These results indicate that perceptual sequence learning depends upon the continuous employment of strategic cognitive control processes on sequence knowledge. Sequence awareness is suggested to be a necessary but not sufficient condition for perceptual learning to take place. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Perceptual learning as improved probabilistic inference in early sensory areas.

    Science.gov (United States)

    Bejjanki, Vikranth R; Beck, Jeffrey M; Lu, Zhong-Lin; Pouget, Alexandre

    2011-05-01

    Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.

  12. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, that is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  13. Adaptive and perceptual learning technologies in medical education and training.

    Science.gov (United States)

    Kellman, Philip J

    2013-10-01

    Recent advances in the learning sciences offer remarkable potential to improve medical education and maximize the benefits of emerging medical technologies. This article describes 2 major innovation areas in the learning sciences that apply to simulation and other aspects of medical learning: Perceptual learning (PL) and adaptive learning technologies. PL technology offers, for the first time, systematic, computer-based methods for teaching pattern recognition, structural intuition, transfer, and fluency. Synergistic with PL are new adaptive learning technologies that optimize learning for each individual, embed objective assessment, and implement mastery criteria. The author describes the Adaptive Response-Time-based Sequencing (ARTS) system, which uses each learner's accuracy and speed in interactive learning to guide spacing, sequencing, and mastery. In recent efforts, these new technologies have been applied in medical learning contexts, including adaptive learning modules for initial medical diagnosis and perceptual/adaptive learning modules (PALMs) in dermatology, histology, and radiology. Results of all these efforts indicate the remarkable potential of perceptual and adaptive learning technologies, individually and in combination, to improve learning in a variety of medical domains. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  14. The perceptual effects of learning object categories that predict perceptual goals

    Science.gov (United States)

    Van Gulick, Ana E.; Gauthier, Isabel

    2014-01-01

    In classic category learning studies, subjects typically learn to assign items to one of two categories, with no further distinction between how items on each side of the category boundary should be treated. In real life, however, we often learn categories that dictate further processing goals, for instance with objects in only one category requiring further individuation. Using methods from category learning and perceptual expertise, we studied the perceptual consequences of experience with objects in tasks that rely on attention to different dimensions in different parts of the space. In two experiments, subjects first learned to categorize complex objects from a single morphspace into two categories based on one morph dimension, and then learned to perform a different task, either naming or a local feature judgment, for each of the two categories. A same-different discrimination test before and after each training measured sensitivity to feature dimensions of the space. After initial categorization, sensitivity increased along the category-diagnostic dimension. After task association, sensitivity increased more for the category that was named, especially along the non-diagnostic dimension. The results demonstrate that local attentional weights, associated with individual exemplars as a function of task requirements, can have lasting effects on perceptual representations. PMID:24820671

  15. Perceptual learning modifies the functional specializations of visual cortical areas.

    Science.gov (United States)

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

  16. Perceptual learning effect on decision and confidence thresholds.

    Science.gov (United States)

    Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano

    2016-10-01

    Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased the ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically varied the visibility of the "trained target": an attentional blink and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.
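
    The abstract describes its computational model only at a high level; one simple way to picture separate decision and confidence thresholds is a signal-detection scheme in which a single noisy decision variable is compared against two criteria. The sketch below uses entirely hypothetical parameter values and is not the authors' model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def respond(signal_strength, decision_criterion=0.5, confidence_criterion=1.5,
                noise_sd=1.0, n_trials=10000):
        """Simulate detections and high-confidence detections from a noisy decision variable."""
        evidence = signal_strength + rng.normal(0.0, noise_sd, n_trials)
        detected = evidence > decision_criterion
        confident = evidence > confidence_criterion
        return detected.mean(), (detected & confident).mean()

    # Lowering only the confidence criterion (as learning might do) raises the rate of
    # confident detections while leaving the overall choice rate essentially unchanged.
    print(respond(1.0, confidence_criterion=1.5))
    print(respond(1.0, confidence_criterion=1.0))
    ```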

  17. Perceptual Learning Style Matching and L2 Vocabulary Acquisition

    Science.gov (United States)

    Tight, Daniel G.

    2010-01-01

    This study explored learning and retention of concrete nouns in second language Spanish by first language English undergraduates (N = 128). Each completed a learning style (visual, auditory, tactile/kinesthetic, mixed) assessment, took a vocabulary pretest, and then studied 12 words each through three conditions (matching, mismatching, mixed…

  18. The Perceptual Basis of the Modality Effect in Multimedia Learning

    Science.gov (United States)

    Rummer, Ralf; Schweppe, Judith; Furstenberg, Anne; Scheiter, Katharina; Zindler, Antje

    2011-01-01

    Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial…

  19. Can theories of animal discrimination explain perceptual learning in humans?

    Science.gov (United States)

    Mitchell, Chris; Hall, Geoffrey

    2014-01-01

    We present a review of recent studies of perceptual learning conducted with nonhuman animals. The focus of this research has been to elucidate the mechanisms by which mere exposure to a pair of similar stimuli can increase the ease with which those stimuli are discriminated. These studies establish an important role for 2 mechanisms, one involving inhibitory associations between the unique features of the stimuli, the other involving a long-term habituation process that enhances the relative salience of these features. We then examine recent work investigating equivalent perceptual learning procedures with human participants. Our aim is to determine the extent to which the phenomena exhibited by people are susceptible to explanation in terms of the mechanisms revealed by the animal studies. Although we find no evidence that associative inhibition contributes to the perceptual learning effect in humans, initial detection of unique features (those that allow discrimination between 2 similar stimuli) appears to depend on an habituation process. Once the unique features have been detected, a tendency to attend to those features and to learn about their properties enhances subsequent discrimination. We conclude that the effects obtained with humans engage mechanisms additional to those seen in animals but argue that, for the most part, these have their basis in learning processes that are common to animals and people. In a final section, we discuss some implications of this analysis of perceptual learning for other aspects of experimental psychology and consider some potential applications. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  20. Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning

    Science.gov (United States)

    Bartolucci, Marco; Smith, Andrew T.

    2011-01-01

    Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…

  1. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    Science.gov (United States)

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  2. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
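
    Equating the perceptual magnitudes of the visual and auditory feedback via Stevens' power law amounts to matching psi = k * phi^n across modalities. A sketch of such an equalization; the exponents and scaling constants here are illustrative assumptions, not the study's calibration values:

    ```python
    def perceived_magnitude(intensity, exponent, k=1.0):
        """Stevens' power law: psi = k * phi ** n."""
        return k * intensity ** exponent

    def match_auditory_to_visual(visual_intensity, visual_exp=0.7, auditory_exp=0.6):
        """Return the auditory intensity whose perceived magnitude equals that of a
        given visual intensity (illustrative exponents, k = 1 for both modalities)."""
        psi = perceived_magnitude(visual_intensity, visual_exp)
        return psi ** (1.0 / auditory_exp)

    for v in [0.5, 1.0, 2.0]:
        print(f"visual {v:.2f} -> matched auditory {match_auditory_to_visual(v):.2f}")
    ```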

  3. Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis

    Science.gov (United States)

    Kratzig, Gregory P.; Arbuthnott, Katherine D.

    2006-01-01

    Given the potential importance of using modality preference with instruction, the authors tested whether learning style preference correlated with memory performance in each of 3 sensory modalities: visual, auditory, and kinesthetic. In Study 1, participants completed objective measures of pictorial, auditory, and tactile learning and learning…

  4. Predicting perceptual learning from higher-order cortical processing.

    Science.gov (United States)

    Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan

    2016-01-01

    Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Audiovisual perceptual learning with multiple speakers.

    Science.gov (United States)

    Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J

    2016-05-01

    One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "b-face" mouthed /aba/ when an ambiguous token was played, while the "d-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "b-face" than with an image of the "d-face." This was not the case in the control condition when the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.
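
    A speaker-specific shift of the /aba/-/ada/ category boundary, as suggested above, can be quantified by fitting a logistic psychometric function to the identification responses obtained in each face context. A minimal sketch with invented continuum steps and response proportions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, boundary, slope):
        """Proportion of /ada/ responses along the /aba/-/ada/ continuum."""
        return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

    steps = np.arange(1, 8)  # 7-step audiovisual continuum (illustrative)
    # Hypothetical proportions of /ada/ responses when tokens are paired with each face
    p_ada_b_face = np.array([0.02, 0.05, 0.10, 0.30, 0.70, 0.90, 0.97])
    p_ada_d_face = np.array([0.05, 0.15, 0.40, 0.75, 0.92, 0.97, 0.99])

    (b_boundary, _), _ = curve_fit(logistic, steps, p_ada_b_face, p0=[4.0, 1.0])
    (d_boundary, _), _ = curve_fit(logistic, steps, p_ada_d_face, p0=[4.0, 1.0])
    print(f"boundary with b-face: {b_boundary:.2f}, with d-face: {d_boundary:.2f}")
    ```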

  6. When does fading enhance perceptual category learning?

    Science.gov (United States)

    Pashler, Harold; Mozer, Michael C

    2013-07-01

    Training that uses exaggerated versions of a stimulus discrimination (fading) has sometimes been found to enhance category learning, mostly in studies involving animals and impaired populations. However, little is known about whether and when fading facilitates learning for typical individuals. This issue was explored in 7 experiments. In Experiments 1 and 2, observers discriminated stimuli based on a single sensory continuum (time duration and line length, respectively). Adaptive fading dramatically improved performance in training (unsurprisingly) but did not enhance learning as assessed in a final test. The same was true for nonadaptive linear fading (Experiment 3). However, when variation in length (predicting category membership) was embedded among other (category-irrelevant) variation, fading dramatically enhanced not only performance in training but also learning as assessed in a final test (Experiments 4 and 5). Fading also helped learners to acquire a color saturation discrimination amid category-irrelevant variation in hue and brightness, although this learning proved transitory after feedback was withdrawn (Experiment 7). Theoretical implications are discussed, and we argue that fading should have practical utility in naturalistic category learning tasks, which involve extremely high dimensional stimuli and many irrelevant dimensions. PsycINFO Database Record (c) 2013 APA, all rights reserved.
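
    Adaptive fading of the kind manipulated in these experiments can be implemented as a schedule in which the exaggerated category difference shrinks after correct responses and grows after errors. A hypothetical sketch (the step sizes and starting difference are illustrative, not the study's parameters):

    ```python
    def adaptive_fading(responses, start_diff=40.0, min_diff=2.0,
                        shrink=0.8, grow=1.25):
        """Return the stimulus difference (e.g., line-length difference in pixels)
        scheduled for each trial, given a sequence of correct/incorrect responses."""
        diff = start_diff
        schedule = []
        for correct in responses:
            schedule.append(diff)
            diff = max(min_diff, diff * (shrink if correct else grow))
        return schedule

    # A run of mostly correct responses fades the discrimination toward its target difficulty.
    print(adaptive_fading([True, True, True, False, True, True, True, True]))
    ```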

  7. The role of culture in perceptual learning styles

    OpenAIRE

    Hosseini Fatemi; Pishghadam

    2009-01-01

    The major aim of this article is to determine the role of culture in perceptual learning style (PLS) preferences of Iranian English learners, in order to minimize teacher-student style conflict in the classroom. To do this, 400 university students from different fields of study were selected from Allameh Tabatabaee University in Tehran, Ferdowsi University of Mashhad and Mashhad University of Medical Sciences. The subjects were asked to answer Reid’s questionnaire (1987) which was designed to...

  8. Perceptual learning is specific to the trained structure of information.

    Science.gov (United States)

    Cohen, Yamit; Daikhin, Luba; Ahissar, Merav

    2013-12-01

    What do we learn when we practice a simple perceptual task? Many studies have suggested that we learn to refine or better select the sensory representations of the task-relevant dimension. Here we show that learning is specific to the trained structural regularities. Specifically, when this structure is modified after training with a fixed temporal structure, performance regresses to pretraining levels, even when the trained stimuli and task are retained. This specificity raises key questions as to the importance of low-level sensory modifications in the learning process. We trained two groups of participants on a two-tone frequency discrimination task for several days. In one group, a fixed reference tone was consistently presented in the first interval (the second tone was higher or lower), and in the other group the same reference tone was consistently presented in the second interval. When following training, these temporal protocols were switched between groups, performance of both groups regressed to pretraining levels, and further training was needed to attain postlearning performance. ERP measures, taken before and after training, indicated that participants implicitly learned the temporal regularity of the protocol and formed an attentional template that matched the trained structure of information. These results are consistent with Reverse Hierarchy Theory, which posits that even the learning of simple perceptual tasks progresses in a top-down manner, hence can benefit from temporal regularities at the trial level, albeit at the potential cost that learning may be specific to these regularities.

  9. Conditions of Practice in Perceptual Skill Learning

    Science.gov (United States)

    Memmert, D.; Hagemann, N.; Althoetmar, R.; Geppert, S.; Seiler, D.

    2009-01-01

    This study uses three experiments with different kinds of training conditions to investigate the "easy-to-hard" principle, context interference conditions, and feedback effects for learning anticipatory skills in badminton. Experiment 1 (N = 60) showed that a training program that gradually increases the difficulty level has no advantage over the…

  10. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences.

    Science.gov (United States)

    Grose, John H; Buss, Emily; Hall, Joseph W

    2017-01-01

    The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did ( n = 31) or did not ( n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.

  11. Auditory cortex involvement in emotional learning and memory.

    Science.gov (United States)

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    Science.gov (United States)

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  13. Magnetic stimulation of visual cortex impairs perceptual learning.

    Science.gov (United States)

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differently modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e. intra parietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e. V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we employed transcranial magnetic stimulation over both visual occipital and parietal regions, previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to pIPS and Sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support the causal role of the visual network in the control of perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    2016-10-01

    Full Text Available In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  15. The effects of interstimulus interval on event-related indices of attention: an auditory selective attention test of perceptual load theory.

    Science.gov (United States)

    Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter

    2008-03-01

    We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300ms), medium (600ms) and slow (900ms). Behavioral (accuracy and RT) and electrophysiological measures (Nd, P3b) were observed. In both studies, participants evidenced poorer accuracy during the fast ISI condition than the slow suggesting that ISI impacted task difficulty. However, none of the three measures of processing examined, Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants, were impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.

  16. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or an auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
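
    As a rough illustration of the dependent measure in this record, the following Python sketch (synthetic data; the sampling rate, flicker frequency, and epoch length are assumptions, not values from the study) estimates an SSEP amplitude at an assumed checkerboard tagging frequency from the FFT of an averaged epoch.

    # Minimal sketch: estimating steady-state evoked potential (SSEP) amplitude
    # at an assumed checkerboard flicker frequency via an FFT of an EEG epoch.
    import numpy as np

    fs = 500.0          # sampling rate in Hz (assumed)
    flicker_hz = 7.5    # checkerboard tagging frequency (assumed)
    t = np.arange(0, 4.0, 1.0 / fs)

    # Synthetic "EEG": a small response at the tagging frequency plus noise.
    rng = np.random.default_rng(0)
    epoch = 0.5 * np.sin(2 * np.pi * flicker_hz * t) + rng.normal(0, 2.0, t.size)

    spectrum = np.abs(np.fft.rfft(epoch)) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

    bin_idx = np.argmin(np.abs(freqs - flicker_hz))
    # Signal-to-noise ratio: amplitude at the tagged bin vs. neighbouring bins.
    neighbours = np.concatenate([np.arange(bin_idx - 5, bin_idx - 1),
                                 np.arange(bin_idx + 2, bin_idx + 6)])
    snr = spectrum[bin_idx] / spectrum[neighbours].mean()
    print(f"SSEP amplitude at {flicker_hz} Hz: {spectrum[bin_idx]:.3f} (SNR {snr:.2f})")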

  17. Age-related declines of stability in visual perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo

    2014-12-15

    One of the biggest questions in learning is how a system can resolve the plasticity and stability dilemma. Specifically, the learning system needs to have not only a high capability of learning new items (plasticity) but also a high stability to retain important items or processing in the system by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address the question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. We found that older individuals learned the task-irrelevant features that younger individuals did not learn, both the features that were sufficiently strong for younger individuals to suppress and the features that were too weak for younger individuals to learn. At the same time, there was no plasticity reduction in older individuals within the task tested. These results suggest that the older visual system is less stable to unimportant information than the younger visual system. A learning problem with older individuals may be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Making perceptual learning practical to improve visual functions.

    Science.gov (United States)

    Polat, Uri

    2009-10-01

    Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of ongoing debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method previously used for amblyopia and myopia, as well as a novel technique and results applied to presbyopia. In general, subjects were trained on contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression where it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement in spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited by being able to eliminate the need for reading glasses. Thus, we show that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection covering a sufficient range of spatial frequencies and orientations, leading to improvement in unrelated visual functions. Perceptual learning can therefore be a practical method to improve visual functions in people with impaired or blurred vision.
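
    For readers unfamiliar with the training stimuli mentioned above, here is a minimal Python sketch of a generic Gabor patch (a sinusoidal carrier under a Gaussian envelope); the size, spatial frequency, and contrast values are illustrative assumptions, not parameters from the described method.

    # Minimal sketch of a Gabor target of the kind used in contrast-detection
    # training: a sinusoidal grating windowed by a Gaussian envelope.
    import numpy as np

    def gabor(size_px=128, cycles_per_image=8, sigma_px=20, contrast=0.5, theta=0.0):
        """Return a 2-D Gabor patch with values in [-contrast, contrast]."""
        half = size_px // 2
        y, x = np.mgrid[-half:half, -half:half]
        xr = x * np.cos(theta) + y * np.sin(theta)      # rotate the carrier
        carrier = np.cos(2 * np.pi * cycles_per_image * xr / size_px)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))
        return contrast * carrier * envelope

    patch = gabor(contrast=0.2)
    print(patch.shape, f"peak contrast {patch.max():.3f}")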

  19. Perceptual learning in Williams syndrome: looking beyond averages.

    Directory of Open Access Journals (Sweden)

    Patricia Gervan

    Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e., baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa: poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.

  20. Applying perceptual and adaptive learning techniques for teaching introductory histopathology

    Directory of Open Access Journals (Sweden)

    Sally Krasne

    2013-01-01

    Full Text Available Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) that appear during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess the effectiveness. Results: Accuracy, RT and Scores significantly improved from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for first-year students, but not significantly so for second-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both first- and second-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.

  1. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    Full Text Available BACKGROUND: It is well known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition when arbitrary associations between visual and auditory stimuli must be learned. Studies tend to consider it an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults learn the arbitrary associations between visual and auditory novel stimuli more effectively. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods that differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learned the arbitrary associations between visual and auditory novel stimuli more efficiently when the visual stimuli were explored with both vision and touch. The results are discussed from the perspective of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  2. Profiling Perceptual Learning Styles of Chinese as a Second Language Learners in University Settings.

    Science.gov (United States)

    Sun, Peijian Paul; Teng, Lin Sophie

    2017-12-01

    This study revisited Reid's (1987) perceptual learning style preference questionnaire (PLSPQ) in an attempt to answer whether the PLSPQ fits in the Chinese-as-a-second-language (CSL) context. If not, what are CSL learners' learning styles drawing on the PLSPQ? The PLSPQ was first re-examined through reliability analysis and confirmatory factor analysis (CFA) with 224 CSL learners. The results showed that Reid's six-factor PLSPQ could not satisfactorily explain the CSL learners' learning styles. Exploratory factor analyses were, therefore, performed to explore the dimensionality of the PLSPQ in the CSL context. A four-factor PLSPQ was successfully constructed including auditory/visual, kinaesthetic/tactile, group, and individual styles. Such a measurement model was cross-validated through CFAs with 118 CSL learners. The study not only lends evidence to the literature that Reid's PLSPQ lacks construct validity, but also provides CSL teachers and learners with insightful and practical guidance concerning learning styles. Implications and limitations of the present study are discussed.

  3. Unconscious Attentional Capture Effect Can be Induced by Perceptual Learning

    Directory of Open Access Journals (Sweden)

    Zhe Qu

    2011-05-01

    Full Text Available Previous ERP studies have shown that the N2pc serves as an index for salient stimuli that capture attention, even if they are task irrelevant. This study aimed to investigate whether nonsalient stimuli can capture attention automatically and unconsciously after perceptual learning. Adult subjects were trained on a visual search task for eight to ten sessions. The training task was to detect whether the target (a triangle pointing in one particular direction) was present or not. After training, an ERP session was performed, in which subjects were required to detect the presence of either the trained triangle (i.e., the target triangle from the training sessions) or an untrained triangle. Results showed that, while the untrained triangle did not elicit an N2pc effect, the trained triangle elicited a significant N2pc effect regardless of whether it was perceived correctly or not, even when it was task irrelevant. Moreover, the N2pc effect for the trained triangle was completely retained 3 months later. These results suggest that, after perceptual learning, previously unsalient stimuli become more salient and can capture attention automatically and unconsciously. Once the facilitating process for the unsalient stimulus has been built up in the brain, it can last for a long time.

  4. Perceptual learning improves visual performance in juvenile amblyopia.

    Science.gov (United States)

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
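
    The parsing of improvement into efficiency and equivalent input noise can be illustrated with the generic linear-amplifier (equivalent-noise) relation; the Python sketch below uses hypothetical thresholds and is not the authors' position-averaging model.

    # Hedged sketch of a generic equivalent-noise analysis: squared threshold is
    # assumed to grow linearly with external noise variance,
    #   threshold**2 = (N_ext + N_eq) / efficiency,
    # so a straight-line fit yields efficiency (1/slope) and equivalent input
    # noise N_eq (intercept/slope).
    import numpy as np

    # Hypothetical external noise levels and measured positional thresholds.
    n_ext = np.array([0.0, 4.0, 16.0])         # external noise variance (arbitrary units)
    thresholds = np.array([3.0, 4.5, 7.5])     # positional thresholds (arbitrary units)

    slope, intercept = np.polyfit(n_ext, thresholds**2, 1)
    efficiency = 1.0 / slope
    n_eq = intercept / slope                   # equivalent input noise

    print(f"efficiency ~ {efficiency:.2f}, equivalent input noise ~ {n_eq:.2f}")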

  5. Olfactory Perceptual Learning Requires Action of Noradrenaline in the Olfactory Bulb: Comparison with Olfactory Associative Learning

    Science.gov (United States)

    Vinera, Jennifer; Kermen, Florence; Sacquet, Joëlle; Didier, Anne; Mandairon, Nathalie; Richard, Marion

    2015-01-01

    Noradrenaline contributes to olfactory-guided behaviors but its role in olfactory learning during adulthood is poorly documented. We investigated its implication in olfactory associative and perceptual learning using local infusion of a mixed α1-β adrenergic receptor antagonist (labetalol) in the adult mouse olfactory bulb. We reported that…

  6. The Role of Feedback Contingency in Perceptual Category Learning

    Science.gov (United States)

    Ashby, F. Gregory; Vucovich, Lauren E.

    2016-01-01

    Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from two or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all four conditions and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects are discussed, as well as their theoretical implications. PMID:27149393
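
    The rule-based versus information-integration distinction described above can be made concrete with a few lines of code; the Python example below uses illustrative stimulus dimensions and category boundaries that are assumptions, not the experiments' actual structures.

    # Hedged sketch: in a rule-based (RB) structure a single dimension determines
    # category membership, while an information-integration (II) structure
    # requires combining both dimensions (here, a diagonal boundary).
    import numpy as np

    rng = np.random.default_rng(3)
    # Stimuli defined by two perceptual dimensions, e.g. bar width and orientation.
    stimuli = rng.uniform(0.0, 100.0, size=(200, 2))

    rb_labels = (stimuli[:, 0] > 50.0).astype(int)                    # "width > criterion"
    ii_labels = (stimuli[:, 0] + stimuli[:, 1] > 100.0).astype(int)   # must combine both

    print("RB category sizes:", np.bincount(rb_labels))
    print("II category sizes:", np.bincount(ii_labels))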

  7. Vocal Acoustic and Auditory-Perceptual Characteristics During Fluctuations in Estradiol Levels During the Menstrual Cycle: A Longitudinal Study.

    Science.gov (United States)

    Arruda, Polyanna; Diniz da Rosa, Marine Raquel; Almeida, Larissa Nadjara Alves; de Araujo Pernambuco, Leandro; Almeida, Anna Alice

    2018-03-07

    Estradiol production varies cyclically, and changes in its levels are hypothesized to affect the voice. The main objective of this study was to investigate vocal acoustic and auditory-perceptual characteristics during fluctuations in the levels of the hormone estradiol during the menstrual cycle. A total of 44 volunteers aged between 18 and 45 were selected. Of these, 27 women with regular menstrual cycles comprised the test group (TG) and 17 combined oral contraceptive users comprised the control group (CG). The study was performed in two phases. In phase 1, anamnesis was performed. Subsequently, the TG underwent blood sample collection for measurement of estradiol levels and voice recording for later acoustic and auditory-perceptual analysis. The CG underwent only voice recording. Phase 2 involved the same measurements as phase 1 for each group. Variables were evaluated using descriptive and inferential analysis to compare groups and phases and to determine relationships between variables. Voice changes were found during the menstrual cycle, and such changes were determined to be related to variations in estradiol levels. Impaired voice quality was observed to be associated with decreased levels of estradiol. The CG did not demonstrate significant vocal changes between phases 1 and 2. The TG showed significant increases in the vocal parameters of roughness, tension, and instability during phase 2 (the period of low estradiol levels) when compared with the CG. Low estradiol levels were also found to be negatively correlated with the parameters of tension, instability, and jitter and positively correlated with fundamental voice frequency. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
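
    As context for the acoustic measures named above, the following Python sketch shows one common way of computing fundamental frequency and local jitter from a sequence of pitch periods; the period values and the exact formula are illustrative assumptions, not the study's analysis pipeline.

    # Minimal sketch: local jitter is commonly defined as the mean absolute
    # difference between consecutive glottal periods divided by the mean period.
    import numpy as np

    def local_jitter_percent(periods_ms):
        """Local jitter (%) from a sequence of consecutive pitch periods."""
        periods = np.asarray(periods_ms, dtype=float)
        diffs = np.abs(np.diff(periods))
        return 100.0 * diffs.mean() / periods.mean()

    # Hypothetical period track (ms) around a ~200 Hz fundamental.
    periods = [5.02, 4.98, 5.05, 4.97, 5.01, 5.04]
    f0 = 1000.0 / np.mean(periods)      # fundamental frequency in Hz
    print(f"F0 ~ {f0:.1f} Hz, local jitter ~ {local_jitter_percent(periods):.2f}%")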

  8. Individual differences in learning to perceive length by dynamic touch : Evidence for variation in perceptual learning capacities

    NARCIS (Netherlands)

    Withagen, Rob; van Wermeskerken, Margot

    Recent studies of perceptual learning have explored and commented on variation in learning trajectories. Although several factors have been suggested to account for this variation, thus far the idea that humans vary in their perceptual learning capacities has received scant attention. In the present

  9. Effects of Consensus Training on the Reliability of Auditory Perceptual Ratings of Voice Quality

    DEFF Research Database (Denmark)

    Iwarsson, Jenny; Petersen, Niels Reinholt

    2012-01-01

    Objectives/Hypothesis: This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in m...

  10. The Neural Circuitry of Expertise: Perceptual Learning and Social Cognition

    Directory of Open Access Journals (Sweden)

    Michael eHarre

    2013-12-01

    Full Text Available Amongst the most significant questions we are confronted with today are the integration of the brain's micro-circuitry, our ability to build the complex social networks that underpin society, and how our society impacts our ecological environment. In trying to unravel these issues, one place to begin is at the level of the individual: to consider how we accumulate information about our environment, how this information leads to decisions, and how our individual decisions in turn create our social environment. While this is an enormous task, we may already have at hand many of the tools we need. This article is intended to review some of the recent results in neuro-cognitive research and show how they can be extended to two very specific types of expertise: perceptual expertise and social cognition. These two cognitive skills span a vast range of our genetic heritage. Perceptual expertise developed very early in our evolutionary history and is likely a highly developed part of all mammals' cognitive ability. On the other hand, social cognition is most highly developed in humans, in that we are able to maintain larger and more stable long-term social connections with more behaviourally diverse individuals than any other species. To illustrate these ideas I will discuss board games as a toy model of social interactions, as they include many of the relevant concepts: perceptual learning, decision-making, long-term planning and understanding the mental states of other people. Using techniques that have been developed in mathematical psychology, I show that we can represent some of the key features of expertise using stochastic differential equations. Such models demonstrate how an expert's long exposure to a particular context influences the information they accumulate in order to make a decision. These processes are not confined to board games; we are all experts in our daily lives through long exposure to the many regularities of daily tasks and

  11. The Relationship Between the Learning Style Perceptual Preferences of Urban Fourth Grade Children and the Acquisition of Selected Physical Science Concepts Through Learning Cycle Instructional Methodology.

    Science.gov (United States)

    Adams, Kenneth Mark

    to be sensitive to different perceptual preferences. Students with different preferences for auditory, visual, and tactile modalities, when learning, seem to benefit equally from learning cycle exposure. Increased use of a double blind for future learning styles research was recommended.

  12. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  13. Towards an understanding of the mechanisms of weak central coherence effects: experiments in visual configural learning and auditory perception.

    Science.gov (United States)

    Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma

    2003-01-01

    The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task which measured the width of auditory filters in individuals with autism and found that the width of auditory filters in autism were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects. PMID:12639334

  14. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  15. Hearing illusory sounds in noise: sensory-perceptual transformations in primary auditory cortex.

    NARCIS (Netherlands)

    Riecke, L.; Opstal, A.J. van; Goebel, R.; Formisano, E.

    2007-01-01

    A sound that is interrupted by silence is perceived as discontinuous. However, when the silence is replaced by noise, the target sound may be heard as uninterrupted. Understanding the neural basis of this continuity illusion may elucidate the ability to track sounds of interest in noisy auditory

  16. Auditory Stream Segregation in Autism Spectrum Disorder: Benefits and Downsides of Superior Perceptual Processes

    Science.gov (United States)

    Bouvet, Lucie; Mottron, Laurent; Valdois, Sylviane; Donnadieu, Sophie

    2016-01-01

    Auditory stream segregation allows us to organize our sound environment, by focusing on specific information and ignoring what is unimportant. One previous study reported difficulty in stream segregation ability in children with Asperger syndrome. In order to investigate this question further, we used an interleaved melody recognition task with…

  17. Perceptual learning in children with visual impairment improves near visual acuity.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart). Children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.)

  18. The relationship between students' perceptual learning style preferences, language learning strategies and English language vocabulary size

    OpenAIRE

    Gorevanova, Anna

    2000-01-01

    Ankara: Bilkent University, Institute of Economic and Social Sciences, 2000. Thesis (Master's), Bilkent University, 2000. Includes bibliographical references (leaves 54-58). This study investigated the relationship between students' perceptual learning style preferences, language learning strategies and English language vocabulary size. It is very important for teachers to be aware of students' preferences in learning to help them be more successful and to avoid conflicts when...

  19. Vocal Function Exercises for Muscle Tension Dysphonia: Auditory-Perceptual Evaluation and Self-Assessment Rating.

    Science.gov (United States)

    Jafari, Narges; Salehi, Abolfazl; Izadi, Farzad; Talebian Moghadam, Saeed; Ebadi, Abbas; Dabirmoghadam, Payman; Faham, Maryam; Shahbazi, Mehdi

    2017-07-01

    Muscle tension dysphonia (MTD) is a functional dysphonia that appears with excessive tension in the intrinsic and extrinsic laryngeal musculature. MTD can affect voice quality and quality of life. The purpose of the present study was to assess the effectiveness of vocal function exercises (VFEs) on perceptual and self-assessment ratings in a group of 15 subjects with MTD. The study comprised 15 subjects with MTD (8 men and 7 women, mean age 39.8 years, standard deviation 10.6, age range 24-62 years). All participants were native Persian speakers who underwent a 6-week course of VFEs. The Voice Handicap Index (VHI; the self-assessment scale) and the Grade, Roughness, Breathiness, Asthenia, Strain (GRBAS) scale (perceptual rating of voice quality) were used to compare pre- and post-VFE measurements. GRBAS data of patients before and after VFEs were compared using the Wilcoxon signed-rank test, and VHI data of patients pre- and post-VFEs were compared using the Student paired t test. The perceptual parameters showed a statistically significant improvement in subjects with MTD after voice therapy, as did the self-assessment ratings (VHI). As a result, the data provide evidence regarding the efficacy of VFEs in the treatment of patients with MTD. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
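
    The reported pre/post comparisons can be illustrated with a short Python sketch using hypothetical scores (the numbers below are invented for illustration and are not the study's data): a Wilcoxon signed-rank test for the ordinal GRBAS ratings and a paired t test for the VHI.

    # Hedged sketch of the statistical comparisons named in the record.
    import numpy as np
    from scipy import stats

    # Hypothetical pre- and post-therapy scores for 15 subjects.
    grbas_grade_pre  = np.array([2, 2, 3, 1, 2, 2, 3, 2, 1, 2, 2, 3, 2, 2, 1])
    grbas_grade_post = np.array([1, 1, 2, 0, 1, 2, 2, 1, 1, 1, 1, 2, 1, 1, 0])
    vhi_pre  = np.array([52, 61, 48, 70, 55, 66, 59, 63, 50, 58, 64, 49, 57, 62, 54])
    vhi_post = np.array([31, 40, 30, 52, 38, 45, 36, 44, 29, 41, 47, 28, 39, 43, 33])

    w_stat, w_p = stats.wilcoxon(grbas_grade_pre, grbas_grade_post)   # ordinal ratings
    t_stat, t_p = stats.ttest_rel(vhi_pre, vhi_post)                  # interval-scale VHI
    print(f"GRBAS grade: Wilcoxon W={w_stat:.1f}, p={w_p:.4f}")
    print(f"VHI: paired t={t_stat:.2f}, p={t_p:.4f}")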

  20. Short-term perceptual learning in visual conjunction search.

    Science.gov (United States)

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  1. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  2. From Hearing Sounds to Recognizing Phonemes: Primary Auditory Cortex is A Truly Perceptual Language Area

    Directory of Open Access Journals (Sweden)

    Byron Bernal

    2016-11-01

    Full Text Available The aim of this article is to present a systematic review of the anatomy, function, connectivity, and functional activation of the primary auditory cortex (PAC; Brodmann areas 41/42) when involved in language paradigms. The PAC activates with a plethora of diverse basic stimuli including, but not limited to, tones, chords, natural sounds, consonants, and speech. Nonetheless, the PAC shows specific sensitivity to speech. Damage to the PAC is associated with so-called "pure word-deafness" ("auditory verbal agnosia"). BA41, and to a lesser extent BA42, are involved in early stages of phonological processing (phoneme recognition). Phonological processing may take place in either the right or the left side, but customarily the left exerts an inhibitory tone over the right, gaining dominance in function. BA41/42 are primary auditory cortices harboring complex phoneme perception functions with asymmetrical expression, making it possible to include them as core language processing areas (Wernicke's area).

  3. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment.

    Science.gov (United States)

    Gong, Tao; Lam, Yau W; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages.

  4. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment

    Science.gov (United States)

    Gong, Tao; Lam, Yau W.; Shuai, Lan

    2016-01-01

    Psychological experiments have revealed that in normal visual perception of humans, color cues are more salient than shape cues, which are more salient than textural patterns. We carried out an artificial language learning experiment to study whether such perceptual saliency hierarchy (color > shape > texture) influences the learning of orders regulating adjectives of involved visual features in a manner either congruent (expressing a salient feature in a salient part of the form) or incongruent (expressing a salient feature in a less salient part of the form) with that hierarchy. Results showed that within a few rounds of learning participants could learn the compositional segments encoding the visual features and the order between them, generalize the learned knowledge to unseen instances with the same or different orders, and show learning biases for orders that are congruent with the perceptual saliency hierarchy. Although the learning performances for both the biased and unbiased orders became similar given more learning trials, our study confirms that this type of individual perceptual constraint could contribute to the structural configuration of language, and points out that such constraint, as well as other factors, could collectively affect the structural diversity in languages. PMID:28066281

  5. A Model for the Transfer of Perceptual-Motor Skill Learning in Human Behaviors

    Science.gov (United States)

    Rosalie, Simon M.; Muller, Sean

    2012-01-01

    This paper presents a preliminary model that outlines the mechanisms underlying the transfer of perceptual-motor skill learning in sport and everyday tasks. Perceptual-motor behavior is motivated by performance demands and evolves over time to increase the probability of success through adaptation. Performance demands at the time of an event…

  6. Dual mechanisms governing reward-driven perceptual learning [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Dongho Kim

    2015-09-01

    Full Text Available In this review, we explore how reward signals shape perceptual learning in animals and humans. Perceptual learning is the well-established phenomenon by which extensive practice elicits selective improvement in one’s perceptual discrimination of basic visual features, such as oriented lines or moving stimuli. While perceptual learning has long been thought to rely on ‘top-down’ processes, such as attention and decision-making, a wave of recent findings suggests that these higher-level processes are, in fact, not necessary.  Rather, these recent findings indicate that reward signals alone, in the absence of the contribution of higher-level cognitive processes, are sufficient to drive the benefits of perceptual learning. Here, we will review the literature tying reward signals to perceptual learning. Based on these findings, we propose dual underlying mechanisms that give rise to perceptual learning: one mechanism that operates ‘automatically’ and is tied directly to reward signals, and another mechanism that involves more ‘top-down’, goal-directed computations.

  7. Perceptual learning shapes multisensory causal inference via two distinct mechanisms.

    Science.gov (United States)

    McGovern, David P; Roudaia, Eugenie; Newell, Fiona N; Roach, Neil W

    2016-04-19

    To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this 'temporal binding window' can be reduced through perceptual learning, however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.
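
    A minimal Python sketch of the kind of Bayesian causal-inference model referred to above is given below; the likelihoods, prior, and parameter values are illustrative assumptions rather than the authors' fitted model. Narrowing the timing likelihood mimics the specific narrowing of the temporal binding window, while lowering the common-cause prior mimics the general reduction in crossmodal interactions.

    # Hedged sketch of Bayesian causal inference for audiovisual binding:
    # posterior probability of a common cause given a temporal disparity.
    import numpy as np
    from scipy.stats import norm

    def p_common(disparity_ms, sigma_ms, prior_common, disparity_range_ms=500.0):
        """Posterior probability that the two signals share a common cause."""
        like_common = norm.pdf(disparity_ms, loc=0.0, scale=sigma_ms)
        like_separate = 1.0 / disparity_range_ms   # flat likelihood if independent
        num = like_common * prior_common
        return num / (num + like_separate * (1.0 - prior_common))

    disparities = np.array([0.0, 50.0, 100.0, 200.0, 300.0])
    before = p_common(disparities, sigma_ms=120.0, prior_common=0.8)
    after = p_common(disparities, sigma_ms=60.0, prior_common=0.6)   # post-training
    for d, b, a in zip(disparities, before, after):
        print(f"{d:5.0f} ms  p(common) before={b:.2f}  after={a:.2f}")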

  8. Influence of syllable structure on L2 auditory word learning.

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-04-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a closed-syllable structure and consonant clusters. Two groups of college students (Japanese group, N = 22; and native speakers of English, N = 21) learned paired English pseudowords and pictures. The pseudoword types differed in terms of the syllable structure and consonant clusters (congruent vs. incongruent) and the position of consonant clusters (coda vs. onset). Recall accuracy was higher for the pseudowords in the congruent type and the pseudowords with the coda-consonant clusters. The syllable structure effect was obtained from both participant groups, disconfirming the hypothesized cross-linguistic influence on L2 auditory word learning.

  9. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    Science.gov (United States)

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, supporting the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus

  10. Perceptual Learning: 12-Month-Olds' Discrimination of Monkey Faces

    Science.gov (United States)

    Fair, Joseph; Flom, Ross; Jones, Jacob; Martin, Justin

    2012-01-01

    Six-month-olds reliably discriminate different monkey and human faces whereas 9-month-olds only discriminate different human faces. It is often falsely assumed that perceptual narrowing reflects a permanent change in perceptual abilities. In 3 experiments, ninety-six 12-month-olds' discrimination of unfamiliar monkey faces was examined. Following…

  11. Gains following perceptual learning are closely linked to the initial visual acuity.

    Science.gov (United States)

    Yehezkel, Oren; Sterkin, Anna; Lev, Maria; Levi, Dennis M; Polat, Uri

    2016-04-28

    The goal of the present study was to evaluate the dependence of perceptual learning gains on initial visual acuity (VA), in a large sample of subjects with a wide range of VAs. A large sample of normally sighted and presbyopic subjects (N = 119; aged 40 to 63) with a wide range of uncorrected near visual acuities (VA, -0.12 to 0.8 LogMAR), underwent perceptual learning. Training consisted of detecting briefly presented Gabor stimuli under spatial and temporal masking conditions. Consistent with previous findings, perceptual learning induced a significant improvement in near VA and reading speed under conditions of limited exposure duration. Our results show that the improvements in VA and reading speed observed following perceptual learning are closely linked to the initial VA, with only a minor fraction of the observed improvement that may be attributed to the additional sessions performed by those with the worse VA.

  12. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    Science.gov (United States)

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  13. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. NFB using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.
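
    The quantity being reinforced can be made concrete with a short Python sketch computing the theta/alpha band-power ratio of an EEG segment; the sampling rate, window length, and reinforcement threshold are assumptions for illustration, not the clinical protocol.

    # Minimal sketch: theta (4-8 Hz) over alpha (8-12 Hz) power ratio from a
    # Welch power spectrum, with a simple threshold check for the reinforcer.
    import numpy as np
    from scipy.signal import welch

    def theta_alpha_ratio(eeg, fs):
        """Theta/alpha band-power ratio for one EEG segment."""
        freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
        theta = psd[(freqs >= 4) & (freqs < 8)].mean()
        alpha = psd[(freqs >= 8) & (freqs < 12)].mean()
        return theta / alpha

    fs = 256.0
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    segment = (np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
               + rng.normal(0, 0.5, t.size))
    ratio = theta_alpha_ratio(segment, fs)
    reinforce = ratio < 1.0   # deliver the reinforcer when the ratio drops below threshold
    print(f"theta/alpha = {ratio:.2f}, reinforce: {reinforce}")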

  14. Neuroanatomical and cognitive mediators of age-related differences in perceptual priming and learning

    OpenAIRE

    Kennedy, Kristen M.; Rodrigue, Karen M.; Head, Denise; Gunning-Dixon, Faith; Raz, Naftali

    2009-01-01

    Our objectives were to assess age differences in perceptual repetition priming and perceptual skill learning, and to determine whether they are mediated by cognitive resources and regional cerebral volume differences. Fragmented picture identification paradigm allows the study of both priming and learning within the same task. We presented this task to 169 adults (ages 18–80), assessed working memory and fluid intelligence, and measured brain volumes of regions that were deemed relevant to th...

  15. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still weakly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
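
    As an illustration of the coupling measure named above, the Python sketch below computes magnitude-squared coherence between two synthetic field-potential signals; the signal content and parameters are assumptions, not the recorded gerbil data.

    # Hedged sketch: field-field coherence between two local field potentials,
    # e.g. auditory cortex and ventral striatum, using scipy's coherence estimate.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0
    t = np.arange(0, 10.0, 1.0 / fs)
    rng = np.random.default_rng(2)

    shared = np.sin(2 * np.pi * 8 * t)            # shared 8 Hz component
    lfp_cortex = shared + rng.normal(0, 1.0, t.size)
    lfp_striatum = 0.8 * shared + rng.normal(0, 1.0, t.size)

    freqs, coh = coherence(lfp_cortex, lfp_striatum, fs=fs, nperseg=1024)
    band = (freqs >= 6) & (freqs <= 10)
    print(f"mean 6-10 Hz coherence: {coh[band].mean():.2f}")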

  16. Neural plasticity underlying visual perceptual learning in aging.

    Science.gov (United States)

    Mishra, Jyoti; Rolle, Camarin; Gazzaley, Adam

    2015-07-01

    Healthy aging is associated with a decline in basic perceptual abilities, as well as higher-level cognitive functions such as working memory. In a recent perceptual training study using moving sweeps of Gabor stimuli, Berry et al. (2010) observed that older adults significantly improved discrimination abilities on the most challenging perceptual tasks that presented paired sweeps at rapid rates of 5 and 10 Hz. Berry et al. further showed that this perceptual training engendered transfer-of-benefit to an untrained working memory task. Here, we investigated the neural underpinnings of the improvements in these perceptual tasks, as assessed by event-related potential (ERP) recordings. Early visual ERP components time-locked to stimulus onset were compared pre- and post-training, as well as relative to a no-contact control group. The visual N1 and N2 components were significantly enhanced after training, and the N1 change correlated with improvements in perceptual discrimination on the task. Further, the change observed for the N1 and N2 was associated with the rapidity of the perceptual challenge; the visual N1 (120-150 ms) was enhanced post-training for 10 Hz sweep pairs, while the N2 (240-280 ms) was enhanced for the 5 Hz sweep pairs. We speculate that these observed post-training neural enhancements reflect improvements by older adults in the allocation of attention that is required to accurately dissociate perceptually overlapping stimuli when presented in rapid sequence. This article is part of a Special Issue entitled SI: Memory. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Motor-related signals in the auditory system for listening and learning.

    Science.gov (United States)

    Schneider, David M; Mooney, Richard

    2015-08-01

    In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Broad-based visual benefits from training with an integrated perceptual-learning video game.

    Science.gov (United States)

    Deveau, Jenni; Lovcik, Gary; Seitz, Aaron R

    2014-06-01

    Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke or aging can have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach where the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude and generality of learning into a perceptual-learning based video game. Our results demonstrate broad-based benefits to vision in a healthy adult population. Transfer from the game includes: improvements in acuity (measured with self-paced standard eye-charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. This type of custom video game framework, built up from psychophysical approaches, takes advantage of the benefits found from video game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning. It has great potential both as a scientific tool and as a therapy to help improve vision. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Brain dynamics that correlate with effects of learning on auditory distance perception

    Directory of Open Access Journals (Sweden)

    Matthew G. Wisniewski

    2014-12-01

    Full Text Available Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners’ accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
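
    The spectral perturbations described above (theta ERS, alpha ERD) are typically expressed as a stimulus-locked change in band power relative to a pre-stimulus baseline. Below is a minimal sketch of that computation on synthetic epochs; the band edges, epoch timing, and sampling rate are assumptions, not parameters from the study.

```python
# Minimal sketch of a stimulus-locked band-power change (ERS/ERD) relative to a
# pre-stimulus baseline; sampling rate, epoch timing, and the 4-8 Hz band are
# illustrative assumptions, not parameters from the study above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(7)
fs = 250.0
n_epochs, n_samples = 100, int(2 * fs)                # 2 s epochs, stimulus at 0.5 s
epochs = rng.standard_normal((n_epochs, n_samples))   # stand-in for IC activations

def band_power(x, lo, hi, fs, order=4):
    """Band-pass filter and return the squared Hilbert envelope (instantaneous power)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)) ** 2

power = band_power(epochs, lo=4.0, hi=8.0, fs=fs)        # theta band
baseline = power[:, : int(0.5 * fs)].mean(axis=1, keepdims=True)

# Percent change from the pre-stimulus baseline: positive ~ ERS, negative ~ ERD
change = 100 * (power - baseline) / baseline
print(f"mean post-stimulus change: {change[:, int(0.5 * fs):].mean():.1f}%")
```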

  20. Subcortical plasticity following perceptual learning in a pitch discrimination task

    OpenAIRE

    Carcagno, Samuele; Plack, Christopher J.

    2011-01-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pi...

  1. Shape-specific perceptual learning in a figure-ground segregation task.

    Science.gov (United States)

    Yi, Do-Joon; Olson, Ingrid R; Chun, Marvin M

    2006-03-01

    What does perceptual experience contribute to figure-ground segregation? To study this question, we trained observers to search for symmetric dot patterns embedded in random dot backgrounds. Training improved shape segmentation, but learning did not completely transfer either to untrained locations or to untrained shapes. Such partial specificity persisted for a month after training. Interestingly, training on shapes in empty backgrounds did not help segmentation of the trained shapes in noisy backgrounds. Our results suggest that perceptual training increases the involvement of early sensory neurons in the segmentation of trained shapes, and that successful segmentation requires perceptual skills beyond shape recognition alone.

  2. Learning Disabilities and the School Health Worker

    Science.gov (United States)

    Freeman, Stephen W.

    1973-01-01

    This article offers three listings of signs and symptoms useful in detection of learning and perceptual deficiencies. The first list presents symptoms of the learning-disabled child; the second gives specific visual perceptual deficits (poor discrimination, figure-ground problems, reversals, etc.); and the third gives auditory perceptual deficits…

  3. Visual Perceptual Learning and its Specificity and Transfer: A New Perspective

    Directory of Open Access Journals (Sweden)

    Cong Yu

    2011-05-01

    Full Text Available Visual perceptual learning is known to be location and orientation specific, and is thus assumed to reflect the neuronal plasticity in the early visual cortex. However, in recent studies we created “Double training” and “TPE” procedures to demonstrate that these “fundamental” specificities of perceptual learning are in some sense artifacts and that learning can completely transfer to a new location or orientation. We proposed a rule-based learning theory to reinterpret perceptual learning and its specificity and transfer: A high-level decision unit learns the rules of performing a visual task through training. However, the learned rules cannot be applied to a new location or orientation automatically because the decision unit cannot functionally connect to new visual inputs with sufficient strength because these inputs are unattended or even suppressed during training. It is double training and TPE training that reactivate these new inputs, so that the functional connections can be strengthened to enable rule application and learning transfer. Currently we are investigating the properties of perceptual learning free from the bogus specificities, and the results provide some preliminary but very interesting insights into how training reshapes the functional connections between the high-level decision units and sensory inputs in the brain.

  4. Comparison between auditory-perceptual and acoustic analyses in dysarthrias

    Directory of Open Access Journals (Sweden)

    Karin Zazo Ortiz

    2008-01-01

    Full Text Available PURPOSE: To compare data from the auditory-perceptual (subjective) analysis with data from the acoustic (objective) analysis. METHODS: Forty-two dysarthric patients with well-defined neurological diagnoses, 21 male and 21 female, underwent auditory-perceptual and acoustic analyses. All patients had their voices recorded and, in the auditory-perceptual analysis, were evaluated for type of voice, resonance (balanced, hypernasal, or laryngopharyngeal), loudness (adequate, decreased, or increased), pitch (adequate, low, or high), vocal attack (isochronic, hard, or breathy), and stability (stable or unstable). For the acoustic analysis, the GRAM 5.1.7 program was used to examine vocal quality and the behavior of the harmonics in the spectrogram, and the Vox Metria program was used to obtain the objective measures. RESULTS: Most comparisons between the auditory-perceptual and acoustic findings were not significant; that is, there was no direct relationship between the subjective findings and the objective data. Statistically significant differences were found only between breathy voice and altered shimmer (p=0.048) and between harmonic definition and breathy voice (p=0.040), indicating a correlation between the presence of noise in the emission and breathiness. CONCLUSIONS: The auditory-perceptual and acoustic analyses provided different but complementary data, jointly supporting the clinical diagnosis of the dysarthrias.

  5. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

    Science.gov (United States)

    Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

    2016-01-01

    While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity - estimated from certainty ratings by a bias-free signal detection theoretic approach - in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in accordance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
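
    The distinction drawn above between objective sensitivity and metacognitive sensitivity can be made concrete with a small signal detection sketch: first-order d' from stimulus-response data, and a nonparametric type-2 ROC area measuring how well confidence ratings separate correct from incorrect trials. This is a simplified illustration on simulated data, not the specific bias-free estimator used in the study.

```python
# Minimal sketch separating objective sensitivity (d') from metacognitive
# sensitivity (a nonparametric type-2 ROC area computed from confidence ratings
# on correct vs. incorrect trials). Simulated data; this is an illustration, not
# the specific bias-free estimator used in the study above.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1000
stimulus = rng.integers(0, 2, n)                     # which of two motion directions
evidence = stimulus + rng.normal(0.0, 1.0, n)        # noisy internal evidence
response = (evidence > 0.5).astype(int)
confidence = np.digitize(np.abs(evidence - 0.5), [0.3, 0.7, 1.2]) + 1   # ratings 1-4
correct = response == stimulus

# First-order (objective) sensitivity
hit_rate = np.mean(response[stimulus == 1] == 1)
fa_rate = np.mean(response[stimulus == 0] == 1)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)

def type2_roc_area(conf, correct, levels=(4, 3, 2, 1)):
    """Area under the type-2 ROC: confidence as a 'detector' of one's own accuracy."""
    hits = np.array([np.mean(conf[correct] >= c) for c in levels])
    fas = np.array([np.mean(conf[~correct] >= c) for c in levels])
    hits = np.concatenate(([0.0], hits, [1.0]))
    fas = np.concatenate(([0.0], fas, [1.0]))
    return np.trapz(hits, fas)

print(f"d' = {d_prime:.2f}, type-2 ROC area = {type2_roc_area(confidence, correct):.2f}")
```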

  6. Exogenous and endogenous attention during perceptual learning differentially affect post-training target thresholds

    Science.gov (United States)

    Mukai, Ikuko; Bahadur, Kandy; Kesavabhotla, Kartik; Ungerleider, Leslie G.

    2012-01-01

    There is conflicting evidence in the literature regarding the role played by attention in perceptual learning. To further examine this issue, we independently manipulated exogenous and endogenous attention and measured the rate of perceptual learning of oriented Gabor patches presented in different quadrants of the visual field. In this way, we could track learning at attended, divided-attended, and unattended locations. We also measured contrast thresholds of the Gabor patches before and after training. Our results showed that, for both exogenous and endogenous attention, accuracy in performing the orientation discrimination improved to a greater extent at attended than at unattended locations. Importantly, however, only exogenous attention resulted in improved contrast thresholds. These findings suggest that both exogenous and endogenous attention facilitate perceptual learning, but that these two types of attention may be mediated by different neural mechanisms. PMID:21282340

  7. Correlation of the Dysphonia Severity Index (DSI), Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V), and Gender in Brazilians With and Without Voice Disorders.

    Science.gov (United States)

    Nemr, Katia; Simões-Zenari, Marcia; de Souza, Glaucia S; Hachiya, Adriana; Tsuji, Domingos H

    2016-11-01

    This study aims to analyze the Dysphonia Severity Index (DSI) in Brazilians with or without voice disorders and investigate DSI's correlation with gender and auditory-perceptual evaluation data obtained via the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) protocol. A total of 66 Brazilian adults from both genders participated in the study, including 24 patients with dysphonia confirmed on laryngeal examination (dysphonic group [DG]) and 42 volunteers without voice or hearing complaints and without auditory-perceptual voice disorders (nondysphonic group [NDG]). The vocal tasks included in CAPE-V and DSI were performed and recorded. Data were analyzed by means of the independent t test, the Mann-Whitney U test, and Pearson correlation at the 5% significance level. Differences were found in the mean DSI values between the DG and the NDG. Differences were also found in all DSI items between the groups, except for the highest frequency parameter. In the DG, a moderate negative correlation was detected between overall dysphonia severity (CAPE-V) and DSI value, and between breathiness and DSI value, and a weak negative correlation was detected between DSI value and roughness. In the NDG, the maximum phonation time was higher among males. In both groups, the highest frequency parameter was higher among females. The DSI discriminated among Brazilians with or without voice disorders. A correlation was found between some aspects of the DSI and the CAPE-V but not between DSI and gender. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
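
    For context, the DSI referred to above is a weighted combination of four voice measures: maximum phonation time, highest attainable frequency, lowest attainable intensity, and jitter. The sketch below uses the weights commonly cited for the original DSI formulation (Wuyts et al., 2000); treat the constants as something to verify against that source, and the example values as purely illustrative.

```python
# Minimal sketch of the Dysphonia Severity Index computation from its four
# component measures, using the weights commonly cited for the original DSI
# formulation (Wuyts et al., 2000); verify the constants against that source.
def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """DSI = 0.13*MPT + 0.0053*F0-High - 0.26*I-Low - 1.18*Jitter(%) + 12.4"""
    return (0.13 * mpt_s + 0.0053 * f0_high_hz
            - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4)

# Purely illustrative values, not data from the study above
print(f"DSI = {dysphonia_severity_index(20, 880, 52, 0.4):.2f}")
```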

  8. The Birth of Words: Ten-Month-Olds Learn Words through Perceptual Salience

    Science.gov (United States)

    Pruden, Shannon M.; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick; Hennon, Elizabeth A.

    2006-01-01

    A core task in language acquisition is mapping words onto objects, actions, and events. Two studies investigated how children learn to map novel labels onto novel objects. Study 1 investigated whether 10-month-olds use both perceptual and social cues to learn a word. Study 2, a control study, tested whether infants paired the label with a…

  9. Perceptual Learning in Early Mathematics: Interacting with Problem Structure Improves Mapping, Solving and Fluency

    Science.gov (United States)

    Thai, Khanh-Phuong; Son, Ji Y.; Hoffman, Jessica; Devers, Christopher; Kellman, Philip J.

    2014-01-01

    Mathematics is the study of structure but students think of math as solving problems according to rules. Students can learn procedures, but they often have trouble knowing when to apply learned procedures, especially to problems unlike those they trained with. In this study, the authors rely on the psychological mechanism of perceptual learning…

  10. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    Science.gov (United States)

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a…
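
    The cross-correlation analysis underlying the perceptual echo can be sketched in a few lines: correlate the random luminance sequence with the EEG signal over a range of stimulus-to-EEG lags and look for late peaks. The sampling rate, lag window, and synthetic "echo" below are assumptions made for illustration.

```python
# Minimal sketch of the cross-correlation analysis behind the "perceptual echo":
# correlating a random luminance sequence with an EEG channel over a range of
# stimulus-to-EEG lags. Sampling rate and the synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(2)
fs = 160                          # both signals assumed resampled to 160 Hz
n = fs * 30                       # 30 s of stimulation
luminance = rng.standard_normal(n)

# Synthetic EEG: the luminance sequence echoed back ~100 ms later, buried in noise
eeg = 0.3 * np.roll(luminance, int(0.1 * fs)) + rng.standard_normal(n)

max_lag = int(1.0 * fs)           # examine lags up to 1 s
lags = np.arange(max_lag)
xcorr = np.array([
    np.corrcoef(luminance[: n - lag], eeg[lag:])[0, 1] for lag in lags
])

peak = lags[np.argmax(xcorr)] / fs
print(f"peak stimulus-to-EEG correlation at {peak * 1000:.0f} ms lag")
```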

  11. The role of experience-based perceptual learning in the face inversion effect.

    Science.gov (United States)

    Civile, Ciro; Obhi, Sukhvinder S; McLaren, I P L

    2018-04-03

    Perceptual learning of the type we consider here is a consequence of experience with a class of stimuli. It amounts to an enhanced ability to discriminate between stimuli. We argue that it contributes to the ability to distinguish between faces and recognize individuals, and in particular contributes to the face inversion effect (better recognition performance for upright vs inverted faces). Previously, we have shown that experience with a prototype defined category of checkerboards leads to perceptual learning, that this produces an inversion effect, and that this effect can be disrupted by Anodal tDCS to Fp3 during pre-exposure. If we can demonstrate that the same tDCS manipulation also disrupts the inversion effect for faces, then this will strengthen the claim that perceptual learning contributes to that effect. The important question, then, is whether this tDCS procedure would significantly reduce the inversion effect for faces; stimuli that we have lifelong expertise with and for which perceptual learning has already occurred. Consequently, in the experiment reported here we investigated the effects of anodal tDCS at Fp3 during an old/new recognition task for upright and inverted faces. Our results show that stimulation significantly reduced the face inversion effect compared to controls. The effect was one of reducing recognition performance for upright faces. This result is the first to show that tDCS affects perceptual learning that has already occurred, disrupting individuals' ability to recognize upright faces. It provides further support for our account of perceptual learning and its role as a key factor in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Neural mechanisms of human perceptual learning: electrophysiological evidence for a two-stage process.

    Science.gov (United States)

    Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco

    2011-04-26

    Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.

  13. New perspectives on the auditory cortex: learning and memory.

    Science.gov (United States)

    Weinberger, Norman M

    2015-01-01

    Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.

  14. Generalization of perceptual and motor learning: a causal link with memory encoding and consolidation?

    Science.gov (United States)

    Censor, N

    2013-10-10

    In both perceptual and motor learning, numerous studies have shown specificity of learning to the trained eye or hand and to the physical features of the task. However, generalization of learning is possible in both perceptual and motor domains. Here, I review evidence for perceptual and motor learning generalization, suggesting that generalization patterns are affected by the way in which the original memory is encoded and consolidated. Generalization may be facilitated during fast learning, with possible engagement of higher-order brain areas recurrently interacting with the primary visual or motor cortices encoding the stimuli or movements' memories. Such generalization may be supported by sleep, involving functional interactions between low and higher-order brain areas. Repeated exposure to the task may alter generalization patterns of learning and overall offline learning. Development of unifying frameworks across learning modalities and better understanding of the conditions under which learning can generalize may enable us to gain insight into the neural mechanisms underlying procedural learning and may have useful clinical implications. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.

    Science.gov (United States)

    Remillard, Gilbert

    2011-07-01

    There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.

  16. Practice makes it better: A psychophysical study of visual perceptual learning and its transfer effects on aging.

    Science.gov (United States)

    Li, Xuan; Allen, Philip A; Lien, Mei-Ching; Yamamoto, Naohide

    2017-02-01

    Previous studies suggest that perceptual learning, the acquisition of a new skill through practice, stimulates brain plasticity and enhances performance (Fiorentini & Berardi, 1981). The present study aimed to determine (a) whether perceptual learning can be used to compensate for age-related declines in perceptual abilities, and (b) whether the effect of perceptual learning can be transferred to untrained stimuli and subsequently improve the capacity of visual working memory (VWM). We tested both healthy younger and older adults in a 3-day training session using an orientation discrimination task. A matching-to-sample psychophysical method was used to measure improvements in orientation discrimination thresholds and reaction times (RTs). Results showed that both younger and older adults improved discrimination thresholds and RTs with similar learning rates and magnitudes. Furthermore, older adults exhibited a generalization of improvements to 3 untrained orientations that were close to the training orientation and benefited more than younger adults from the perceptual learning, transferring the learning effects to VWM performance. We conclude that through perceptual learning, older adults can partially counteract age-related perceptual declines, generalize the learning effect to other stimulus conditions, and further overcome the limitation of using VWM capacity to perform a perceptual task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Heterogeneity in Perceptual Category Learning by High Functioning Children with Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Eduardo eMercado

    2015-06-01

    Full Text Available Previous research suggests that high functioning children with Autism Spectrum Disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally-based theories account for atypical perceptual category learning shown by high functioning children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children’s performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets.

  18. Heterogeneity in perceptual category learning by high functioning children with autism spectrum disorder.

    Science.gov (United States)

    Mercado, Eduardo; Church, Barbara A; Coutinho, Mariana V C; Dovgopoly, Alexander; Lopata, Christopher J; Toomey, Jennifer A; Thomeer, Marcus L

    2015-01-01

    Previous research suggests that high functioning (HF) children with autism spectrum disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally based theories account for atypical perceptual category learning shown by HF children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children's performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets.

  19. A perceptual learning deficit in Chinese developmental dyslexia as revealed by visual texture discrimination training.

    Science.gov (United States)

    Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin

    2014-08-01

    Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) at 300 ms, the threshold SOA, adjusted according to response accuracy for reaching 80% accuracy, did not show a decrement over 5 days of training for children with dyslexia, whereas this threshold SOA steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA negatively correlated with their performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
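
    The adaptive procedure mentioned for Experiment 2 can be illustrated with a standard transformed staircase. The sketch below uses a 3-down/1-up rule, which converges near 79% correct (close to the 80% criterion above), against a simulated observer; the rule, step size, and observer model are assumptions, not the exact procedure reported in the study.

```python
# Minimal sketch of a 3-down/1-up adaptive staircase for estimating a threshold
# SOA; the rule, step size, and simulated observer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def p_correct(soa_ms, threshold_ms=120.0, slope=0.03):
    """Simulated observer: accuracy grows with SOA (logistic between 50% and 100%)."""
    return 0.5 + 0.5 / (1 + np.exp(-slope * (soa_ms - threshold_ms)))

soa, step = 300.0, 20.0            # start easy; adjust SOA in 20 ms steps
streak, last_direction = 0, None   # run of correct answers; last step direction
reversals = []

for trial in range(200):
    correct = rng.random() < p_correct(soa)
    if correct:
        streak += 1
        if streak == 3:                       # 3 correct in a row -> make it harder
            streak = 0
            if last_direction == +1:
                reversals.append(soa)         # direction changed: record a reversal
            last_direction = -1
            soa = max(soa - step, 10.0)
    else:
        streak = 0                            # 1 error -> make it easier
        if last_direction == -1:
            reversals.append(soa)
        last_direction = +1
        soa += step

# Threshold estimate: mean SOA over the last few reversals (~79% correct point)
print(f"threshold SOA estimate: {np.mean(reversals[-6:]):.1f} ms")
```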

  20. Monetary reward modulates task-irrelevant perceptual learning for invisible stimuli.

    Directory of Open Access Journals (Sweden)

    David Pascucci

    Full Text Available Task Irrelevant Perceptual Learning (TIPL) shows that the brain's discriminative capacity can improve also for invisible and unattended visual stimuli. It has been hypothesized that this form of "unconscious" neural plasticity is mediated by an endogenous reward mechanism triggered by the correct task performance. Although this result has challenged the mandatory role of attention in perceptual learning, no direct evidence exists of the hypothesized link between target recognition, reward and TIPL. Here, we manipulated the reward value associated with a target to demonstrate the involvement of reinforcement mechanisms in sensory plasticity for invisible inputs. Participants were trained in a central task associated with either high or low monetary incentives, provided only at the end of the experiment, while subliminal stimuli were presented peripherally. Our results showed that high incentive-value targets induced a greater degree of perceptual improvement for the subliminal stimuli, supporting the role of reinforcement mechanisms in TIPL.

  1. Monetary reward modulates task-irrelevant perceptual learning for invisible stimuli.

    Science.gov (United States)

    Pascucci, David; Mastropasqua, Tommaso; Turatto, Massimo

    2015-01-01

    Task Irrelevant Perceptual Learning (TIPL) shows that the brain's discriminative capacity can improve also for invisible and unattended visual stimuli. It has been hypothesized that this form of "unconscious" neural plasticity is mediated by an endogenous reward mechanism triggered by the correct task performance. Although this result has challenged the mandatory role of attention in perceptual learning, no direct evidence exists of the hypothesized link between target recognition, reward and TIPL. Here, we manipulated the reward value associated with a target to demonstrate the involvement of reinforcement mechanisms in sensory plasticity for invisible inputs. Participants were trained in a central task associated with either high or low monetary incentives, provided only at the end of the experiment, while subliminal stimuli were presented peripherally. Our results showed that high incentive-value targets induced a greater degree of perceptual improvement for the subliminal stimuli, supporting the role of reinforcement mechanisms in TIPL.

  2. The Effect of Auditory and Visual Motion Picture Descriptive Modalities in Teaching Perceptual-Motor Skills Used in the Grading of Cereal Grains.

    Science.gov (United States)

    Hannemann, James William

    This study was designed to discover whether a student learns to imitate the skills demonstrated in a motion picture more accurately when the supportive descriptive terminology is presented in an auditory (spoken) form or in a visual (captions) form. A six-minute color 16mm film was produced--"Determining the Test Weight per Bushel of Yellow Corn".…

  3. Perceptual statistical learning over one week in child speech production.

    Science.gov (United States)

    Richtsmeier, Peter T; Goffman, Lisa

    2017-07-01

    What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. The Relationship between Perceptual Learning Style Preferences and Multiple Intelligences among Iranian EFL Learners

    Science.gov (United States)

    Baleghizadeh, Sasan; Shayeghi, Rose

    2014-01-01

    The purpose of the present study is to investigate the relationships between preferences of Multiple Intelligences and perceptual/social learning styles. Two self-report questionnaires were administered to a total of 207 male and female participants. Pearson correlation results revealed statistically significant positive relations between…

  5. Exploring the Differences of Undergraduate Students' Perceptual Learning Styles in International Business Study

    Science.gov (United States)

    Ding, Ning; Lin, Wei

    2013-01-01

    More than 45,000 international students are now studying for bachelor programs in The Netherlands. The number of Asian students increased dramatically in the past decade. The current research aims at examining the differences between Western European and Asian students' perceptual learning styles, and exploring the relationships between students'…

  6. Exploring the differences of undergraduate students’ perceptual learning styles in international business study

    NARCIS (Netherlands)

    Ding, Ning; Lin, Wei

    2013-01-01

    More than 45,000 international students are now studying for bachelor programs in the Netherlands. The number of Asian students increased dramatically in the past decade. The current research aims at examining the differences between Western European and Asian students’ perceptual learning styles,

  7. Semantic Features, Perceptual Expectations, and Frequency as Factors in the Learning of Polar Spatial Adjective Concepts.

    Science.gov (United States)

    Dunckley, Candida J. Lutes; Radtke, Robert C.

    Two semantic theories of word learning, a perceptual complexity hypothesis (H. Clark, 1970) and a quantitative complexity hypothesis (E. Clark, 1972), were tested by teaching 24 preschoolers and 16 college students CVC labels for five polar spatial adjective concepts having single word representations in English, and for three having no direct…

  8. Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases

    NARCIS (Netherlands)

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nyström, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2010-01-01

    Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2010). Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the

  9. Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases

    NARCIS (Netherlands)

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nyström, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2010-01-01

    Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2010, August). Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases. Poster presented at the 32nd Annual Conference of the Cognitive Science

  10. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, Bianca; Boonstra, F. Nienke; Cox, Ralf F. A.; van Rens, Ger; Cillessen, Antonius H. N.

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four-to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  11. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; van Rens, G.H.M.B.; Cillessen, A.H.N.

    2013-01-01

    Purpose. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Methods. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  12. Perceptual learning in children with visual impairment improves near visual acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.; Rens, G. van; Cillessen, A.H.

    2013-01-01

    PURPOSE: This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS: Participants were 45 children with visual impairment and 29 children with normal vision. Children

  13. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; Rens, G.H.M.B. van; Cillessen, A.H.N.

    2013-01-01

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four-to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  14. Subcortical plasticity following perceptual learning in a pitch discrimination task.

    Science.gov (United States)

    Carcagno, Samuele; Plack, Christopher J

    2011-02-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific for stimuli with the same pitch modulation (dynamic vs. static) and with the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contour, but not for the falling contour. Changes in FFR strength were partly specific for stimuli with the same pitch modulation (dynamic vs. static) of the trained stimulus. Changes in FFR strength, however, were not specific for stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change.
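
    One simple way to express the "robustness of FFR neural phase locking to the sound envelope" is the normalized correlation between the stimulus periodicity envelope and the sweep-averaged FFR waveform, maximized over a small range of neural lags. The sketch below does this with synthetic data; the sampling rate, F0, lag range, and noise level are assumptions, and the study may have used a different metric.

```python
# Minimal sketch of quantifying how robustly a sweep-averaged FFR follows the
# stimulus periodicity envelope: normalized correlation maximized over a small
# range of neural lags. Sampling rate, F0, lag range, and the synthetic signals
# are illustrative assumptions; the study above may have used a different metric.
import numpy as np

rng = np.random.default_rng(5)
fs = 16000
t = np.arange(0, 0.2, 1 / fs)                       # 200 ms complex-tone epoch
f0 = 120.0                                          # static pitch contour (assumed)
envelope = 0.5 * (1 + np.sin(2 * np.pi * f0 * t))   # periodicity envelope at F0

n_sweeps = 1000
neural_lag = int(0.008 * fs)                        # ~8 ms transmission delay
sweeps = 0.1 * np.roll(envelope, neural_lag) + rng.standard_normal((n_sweeps, t.size))
ffr = sweeps.mean(axis=0)                           # averaging keeps the phase-locked part

# Envelope-following strength: best correlation over lags up to 15 ms
lags = range(int(0.015 * fs))
r = max(np.corrcoef(envelope[: t.size - lag], ffr[lag:])[0, 1] for lag in lags)
print(f"envelope-following correlation: {r:.2f}")
```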

  15. Auditory verbal learning in drug-free Ecstasy polydrug users.

    Science.gov (United States)

    Fox, H. C.; Toplis, A. S.; Turner, J. J. D.; Parrott, A. C.

    2001-12-01

    Drug-free Ecstasy polydrug users have shown impairment on tasks of verbal working memory and memory span. Current research aims to investigate how these deficits may affect the learning of verbal material by administration of the Auditory Verbal Learning Task (AVLT) (Rey, 1964). The task provides a learning curve by assessing immediate memory span over multiple trials. Learning strategies are further analysed by tendencies to confabulate as well as demonstrate either proactive or retroactive interference elicited by a novel 'distractor' list. Three groups completed the task: two groups of 14 Ecstasy users (short- and long-term) and one group of 14 polydrug controls. Compared with controls both Ecstasy groups recalled significantly fewer words and made more confabulation errors on the initial three recall trials as well as a delayed recall trial. Long-term users demonstrated increased confabulation on the initial trials and the novel 'distractor' trial, compared with short-term users. Only following repeated presentations were both short- and long-term users shown to perform at control levels. As such, deficits in verbal learning may be more related to storage and/or retrieval problems than problems associated with capacity per se. No interference errors were demonstrated by either of the Ecstasy groups. However, a high level of intrusion errors may indicate selective working memory problems associated with longer-term use of the drug. Copyright 2001 John Wiley & Sons, Ltd.

  16. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    Science.gov (United States)

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.
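
    The linear mixed-effects analysis mentioned above can be sketched with statsmodels: recall scores are modeled with a break-condition by modality interaction as fixed effects and a random intercept per subject. Variable names and the simulated data are assumptions for illustration, not the study's actual design matrix.

```python
# Minimal sketch of a linear mixed-effects analysis of recall scores with a
# break-condition by modality interaction (fixed effects) and a random intercept
# per subject; variable names and simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subjects = 30
subject = np.repeat(np.arange(n_subjects), 6)                   # 6 cells per subject
condition = np.tile(np.repeat(["rest", "music", "game"], 2), n_subjects)
modality = np.tile(["auditory", "visual"], 3 * n_subjects)

# Simulated recall with a small game-by-modality interaction plus subject noise
recall = (10.0
          + 2.0 * ((condition == "game") & (modality == "visual"))
          - 2.0 * ((condition == "game") & (modality == "auditory"))
          + rng.normal(0, 1, subject.size)
          + rng.normal(0, 1, n_subjects)[subject])

df = pd.DataFrame({"subject": subject, "condition": condition,
                   "modality": modality, "recall": recall})

model = smf.mixedlm("recall ~ condition * modality", df, groups=df["subject"])
print(model.fit().summary())
```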

  17. Trial-dependent psychometric functions accounting for perceptual learning in 2-AFC discrimination tasks.

    Science.gov (United States)

    Kattner, Florian; Cochrane, Aaron; Green, C Shawn

    2017-09-01

    The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
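
    A minimal version of the trial-dependent approach described above can be written directly: let the psychometric function's threshold be a smooth function of trial number (here an exponential decay, one common choice) and fit all parameters at once by maximum likelihood over every trial, instead of fitting separate functions to blocks. The functional form, parameter names, and simulated task below are assumptions of the sketch, not the authors' exact model.

```python
# Minimal sketch (not the authors' implementation) of fitting a psychometric
# function whose threshold decays continuously across trials in a 2-AFC task,
# as opposed to fitting separate functions to blocks. The exponential-decay
# form and parameter names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# --- simulate a 2-AFC perceptual learning experiment ------------------------
n_trials = 2000
trial = np.arange(n_trials)
intensity = rng.uniform(0.05, 1.0, n_trials)          # stimulus intensity per trial

def threshold(t, a0, a_inf, tau):
    """Threshold as a continuous function of trial number (exponential decay)."""
    return a_inf + (a0 - a_inf) * np.exp(-t / tau)

def p_correct(x, alpha, beta, lapse=0.02, guess=0.5):
    """2-AFC psychometric function: cumulative Gaussian with guess and lapse rates."""
    return guess + (1 - guess - lapse) * norm.cdf((x - alpha) / beta)

true_alpha = threshold(trial, a0=0.6, a_inf=0.2, tau=400.0)
correct = rng.random(n_trials) < p_correct(intensity, true_alpha, beta=0.1)

# --- fit the trial-dependent model by maximum likelihood over all trials ----
def neg_log_lik(params):
    a0, a_inf, tau, beta = params
    p = p_correct(intensity, threshold(trial, a0, a_inf, tau), beta)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -np.sum(np.where(correct, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[0.5, 0.3, 300.0, 0.2],
               bounds=[(0.01, 2), (0.01, 2), (1, 5000), (0.01, 1)],
               method="L-BFGS-B")
print("estimated (a0, a_inf, tau, beta):", np.round(fit.x, 3))
```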

  18. Time course influences transfer of visual perceptual learning across spatial location.

    Science.gov (United States)

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning.

    Science.gov (United States)

    Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly

    2018-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  20. Perceptual Learning as a potential treatment for amblyopia: a mini-review

    Science.gov (United States)

    Levi, Dennis M.; Li, Roger W.

    2009-01-01

    Amblyopia is a developmental abnormality that results from physiological alterations in the visual cortex and impairs form vision. It is a consequence of abnormal binocular visual experience during the “sensitive period” early in life. While amblyopia can often be reversed when treated early, conventional treatment is generally not undertaken in older children and adults. A number of studies over the last twelve years or so suggest that Perceptual Learning (PL) may provide an important new method for treating amblyopia. The aim of this mini-review is to provide a critical review and “meta-analysis” of perceptual learning in adults and children with amblyopia, with a view to extracting principles that might make PL more effective and efficient. Specifically we evaluate: What factors influence the outcome of perceptual learning?Specificity and generalization – two sides of the coin.Do the improvements last?How does PL improve visual function?Should PL be part of the treatment armamentarium? A review of the extant studies makes it clear that practicing a visual task results in a long-lasting improvement in performance in an amblyopic eye. The improvement is generally strongest for the trained eye, task, stimulus and orientation, but appears to have a broader spatial frequency bandwidth than in normal vision. Importantly, practicing on a variety of different tasks and stimuli seems to transfer to improved visual acuity. Perceptual learning operates via a reduction of internal neural noise and/or through more efficient use of the stimulus information by retuning the weighting of the information. The success of PL raises the question of whether it should become a standard part of the armamentarium for the clinical treatment of amblyopia, and suggests several important principles for effective perceptual learning in amblyopia. PMID:19250947

  1. Tensor Voting A Perceptual Organization Approach to Computer Vision and Machine Learning

    CERN Document Server

    Mordohai, Philippos

    2006-01-01

    This lecture presents research on a general framework for perceptual organization that was conducted mainly at the Institute for Robotics and Intelligent Systems of the University of Southern California. It is not written as a historical recount of the work, since the sequence of the presentation is not in chronological order. It aims at presenting an approach to a wide range of problems in computer vision and machine learning that is data-driven, local and requires a minimal number of assumptions. The tensor voting framework combines these properties and provides a unified perceptual organiza

  2. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  3. Dissociation of rapid response learning and facilitation in perceptual and conceptual networks of person recognition.

    Science.gov (United States)

    Valt, Christian; Klein, Christoph; Boehm, Stephan G

    2015-08-01

    Repetition priming is a prominent example of non-declarative memory, and it increases the accuracy and speed of responses to repeatedly processed stimuli. Major long-hold memory theories posit that repetition priming results from facilitation within perceptual and conceptual networks for stimulus recognition and categorization. Stimuli can also be bound to particular responses, and it has recently been suggested that this rapid response learning, not network facilitation, provides a sound theory of priming of object recognition. Here, we addressed the relevance of network facilitation and rapid response learning for priming of person recognition with a view to advance general theories of priming. In four experiments, participants performed conceptual decisions like occupation or nationality judgments for famous faces. The magnitude of rapid response learning varied across experiments, and rapid response learning co-occurred and interacted with facilitation in perceptual and conceptual networks. These findings indicate that rapid response learning and facilitation in perceptual and conceptual networks are complementary rather than competing theories of priming. Thus, future memory theories need to incorporate both rapid response learning and network facilitation as individual facets of priming. © 2014 The British Psychological Society.

  4. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  5. Statistical learning and auditory processing in children with music training: An ERP study.

    Science.gov (United States)

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  6. Predictive codes of familiarity and context during the perceptual learning of facial identities

    Science.gov (United States)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
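    The following toy sketch (not the authors' published model; the update rule, weights, and function names are invented for illustration) captures the two ingredients described above: a prediction-error update of facial familiarity on each encounter, and a behavioural readout that combines facial and contextual familiarity.

      # Toy predictive-coding-style account of growing face familiarity.
      import numpy as np

      def update_familiarity(F, alpha=0.3):
          """One encounter: the prediction error (1 - F) drives familiarity up."""
          return F + alpha * (1.0 - F)

      def p_familiar(face_fam, context_fam, w_face=3.0, w_context=1.5, bias=-2.0):
          """Logistic readout combining stimulus and contextual familiarity."""
          return 1.0 / (1.0 + np.exp(-(w_face * face_fam + w_context * context_fam + bias)))

      face_fam, context_fam = 0.0, 0.0
      for encounter in range(6):
          print(f"encounter {encounter}: P(respond 'familiar') = "
                f"{p_familiar(face_fam, context_fam):.2f}")
          face_fam = update_familiarity(face_fam)                    # face-specific learning
          context_fam = update_familiarity(context_fam, alpha=0.1)   # slower contextual learning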

  7. The role of alpha-rhythm states in perceptual learning: insights from experiments and computational models

    Science.gov (United States)

    Sigala, Rodrigo; Haufe, Sebastian; Roy, Dipanjan; Dinse, Hubert R.; Ritter, Petra

    2014-01-01

    During the past two decades growing evidence indicates that brain oscillations in the alpha band (~10 Hz) not only reflect an “idle” state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations for perceptual learning and hence motivates to explore the potential underlying mechanisms. Hence, it is the purpose of this review to highlight existent evidence that ascribes intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by “The Virtual Brain” project. PMID:24772077

  8. The role of alpha-rhythm states in perceptual learning: insights from experiments and computational models

    Directory of Open Access Journals (Sweden)

    Rodrigo eSigala

    2014-04-01

    Full Text Available During the past two decades growing evidence indicates that brain oscillations in the alpha band (~10 Hz) not only reflect an ‘idle’ state of cortical activity, but also take a more active role in the generation of complex cognitive functions. A recent study shows that more than 60% of the observed inter-subject variability in perceptual learning can be ascribed to ongoing alpha activity. This evidence indicates a significant role of alpha oscillations for perceptual learning and hence motivates to explore the potential underlying mechanisms. Hence, it is the purpose of this review to highlight existent evidence that ascribes intrinsic alpha oscillations a role in shaping our ability to learn. In the review, we disentangle the alpha rhythm into different neural signatures that control information processing within individual functional building blocks of perceptual learning. We further highlight computational studies that shed light on potential mechanisms regarding how alpha oscillations may modulate information transfer and connectivity changes relevant for learning. To enable testing of those model-based hypotheses, we emphasize the need for multidisciplinary approaches combining assessment of behavior and multi-scale neuronal activity, active modulation of ongoing brain states and computational modeling to reveal the mathematical principles of the complex neuronal interactions. In particular we highlight the relevance of multi-scale modeling frameworks such as the one currently being developed by The Virtual Brain project.

  9. Perceptual-motor skill learning in Gilles de la Tourette syndrome. Evidence for multiple procedural learning and memory systems.

    Science.gov (United States)

    Marsh, Rachel; Alexander, Gerianne M; Packard, Mark G; Zhu, Hongtu; Peterson, Bradley S

    2005-01-01

    Procedural learning and memory systems likely comprise several skills that are differentially affected by various illnesses of the central nervous system, suggesting their relative functional independence and reliance on differing neural circuits. Gilles de la Tourette syndrome (GTS) is a movement disorder that involves disturbances in the structure and function of the striatum and related circuitry. Recent studies suggest that patients with GTS are impaired in performance of a probabilistic classification task that putatively involves the acquisition of stimulus-response (S-R)-based habits. Assessing the learning of perceptual-motor skills and probabilistic classification in the same samples of GTS and healthy control subjects may help to determine whether these various forms of procedural (habit) learning rely on the same or differing neuroanatomical substrates and whether those substrates are differentially affected in persons with GTS. Therefore, we assessed perceptual-motor skill learning using the pursuit-rotor and mirror tracing tasks in 50 patients with GTS and 55 control subjects who had previously been compared at learning a task of probabilistic classifications. The GTS subjects did not differ from the control subjects in performance of either the pursuit rotor or mirror-tracing tasks, although they were significantly impaired in the acquisition of a probabilistic classification task. In addition, learning on the perceptual-motor tasks was not correlated with habit learning on the classification task in either the GTS or healthy control subjects. These findings suggest that the differing forms of procedural learning are dissociable both functionally and neuroanatomically. The specific deficits in the probabilistic classification form of habit learning in persons with GTS are likely to be a consequence of disturbances in specific corticostriatal circuits, but not the same circuits that subserve the perceptual-motor form of habit learning.

  10. Auditory Processing, Linguistic Prosody Awareness, and Word Reading in Mandarin-Speaking Children Learning English

    Science.gov (United States)

    Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.

    2017-01-01

    This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…

  11. Interaction between age and perceptual similarity in olfactory discrimination learning in F344 rats: relationships with spatial learning

    Science.gov (United States)

    Yoder, Wendy M.; Gaynor, Leslie S.; Burke, Sara N.; Setlow, Barry; Smith, David W.; Bizon, Jennifer L.

    2017-01-01

    Emerging evidence suggests that aging is associated with a reduced ability to distinguish perceptually similar stimuli in one’s environment. As the ability to accurately perceive and encode sensory information is foundational for explicit memory, understanding the neurobiological underpinnings of discrimination impairments that emerge with advancing age could help elucidate the mechanisms of mnemonic decline. To this end, there is a need for preclinical approaches that robustly and reliably model age-associated perceptual discrimination deficits. Taking advantage of rodents’ exceptional olfactory abilities, the present study applied rigorous psychophysical techniques to the evaluation of discrimination learning in young and aged F344 rats. Aging did not influence odor detection thresholds or the ability to discriminate between perceptually distinct odorants. In contrast, aged rats were disproportionately impaired relative to young on problems that required discriminations between perceptually similar olfactory stimuli. Importantly, these disproportionate impairments in discrimination learning did not simply reflect a global learning impairment in aged rats, as they performed other types of difficult discriminations on par with young rats. Among aged rats, discrimination deficits were strongly associated with spatial learning deficits. These findings reveal a new, sensitive behavioral approach for elucidating the neural mechanisms of cognitive decline associated with normal aging. PMID:28259065

  12. The impact of memory load and perceptual cues on puzzle learning by 24-month-olds.

    Science.gov (United States)

    Barr, Rachel; Moser, Alecia; Rusnak, Sylvia; Zimmermann, Laura; Dickerson, Kelly; Lee, Herietta; Gerhardstein, Peter

    2016-11-01

    Early childhood is characterized by memory capacity limitations and rapid perceptual and motor development [Rovee-Collier (1996). Infant Behavior & Development, 19, 385-400]. The present study examined 2-year olds' reproduction of a sliding action to complete an abstract fish puzzle under different levels of memory load and perceptual feature support. Experimental groups were compared to baseline controls to assess spontaneous rates of production of the target actions; baseline production was low across all experiments. Memory load was manipulated in Exp. 1 by adding pieces to the puzzle, increasing sequence length from 2 to 3 items, and to 3 items plus a distractor. Although memory load did not influence how toddlers learned to manipulate the puzzle pieces, it did influence toddlers' achievement of the goal-constructing the fish. Overall, girls were better at constructing the puzzle than boys. In Exp. 2, the perceptual features of the puzzle were altered by changing shape boundaries to create a two-piece horizontally cut puzzle (displaying bilateral symmetry), and by adding a semantically supportive context to the vertically cut puzzle (iconic). Toddlers were able to achieve the goal of building the fish equally well across the 2-item puzzle types (bilateral symmetry, vertical, iconic), but how they learned to manipulate the puzzle pieces varied as a function of the perceptual features. Here, as in Exp. 1, girls showed a different pattern of performance from the boys. This study demonstrates that changes in memory capacity and perceptual processing influence both goal-directed imitation learning and motoric performance. © 2016 Wiley Periodicals, Inc.

  13. A Measure of the Auditory-perceptual Quality of Strain from Electroglottographic Analysis of Continuous Dysphonic Speech: Application to Adductor Spasmodic Dysphonia.

    Science.gov (United States)

    Somanath, Keerthan; Mau, Ted

    2016-11-01

    (1) To develop an automated algorithm to analyze electroglottographic (EGG) signal in continuous dysphonic speech, and (2) to identify EGG waveform parameters that correlate with the auditory-perceptual quality of strain in the speech of patients with adductor spasmodic dysphonia (ADSD). Software development with application in a prospective controlled study. EGG was recorded from 12 normal speakers and 12 subjects with ADSD reading excerpts from the Rainbow Passage. Data were processed by a new algorithm developed with the specific goal of analyzing continuous dysphonic speech. The contact quotient, pulse width, a new parameter peak skew, and various contact closing slope quotient and contact opening slope quotient measures were extracted. EGG parameters were compared between normal and ADSD speech. Within the ADSD group, intra-subject comparison was also made between perceptually strained syllables and unstrained syllables. The opening slope quotient SO7525 distinguished strained syllables from unstrained syllables in continuous speech within individual subjects with ADSD. The standard deviations, but not the means, of contact quotient, EGGW50, peak skew, and SO7525 were different between normal and ADSD speakers. The strain-stress pattern in continuous speech can be visualized as color gradients based on the variation of EGG parameter values. EGG parameters may provide a within-subject measure of vocal strain and serve as a marker for treatment response. The addition of EGG to multidimensional assessment may lead to improved characterization of the voice disturbance in ADSD. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
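    As a minimal illustration of one of the waveform measures named above, the sketch below computes the contact quotient of a single glottal cycle with a simple level-threshold criterion (the 25% criterion and the synthetic cycle are assumptions for illustration; the paper's algorithm for continuous dysphonic speech is considerably more elaborate).

      # Contact quotient (CQ) of one EGG cycle via a level-threshold criterion.
      import numpy as np

      def contact_quotient(cycle, criterion=0.25):
          """Fraction of the cycle during which the EGG signal exceeds the
          criterion level, i.e. the vocal folds are treated as in contact."""
          cycle = np.asarray(cycle, dtype=float)
          threshold = cycle.min() + criterion * (cycle.max() - cycle.min())
          return float(np.mean(cycle > threshold))

      # Synthetic EGG-like cycle: a narrow contact pulse and a long open phase.
      t = np.linspace(0, 1, 200, endpoint=False)
      synthetic_cycle = np.maximum(0.0, np.sin(2 * np.pi * t)) ** 3
      print(f"CQ = {contact_quotient(synthetic_cycle):.2f}")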

  14. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    Science.gov (United States)

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
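    A common multivariate recipe for reconstructing orientation-selective response profiles from voxel patterns is a forward encoding model that is estimated on training trials and then inverted on held-out trials; the sketch below illustrates that generic recipe on synthetic data (the channel basis, voxel counts and noise levels are assumptions, and the study's exact analysis may differ).

      # Generic forward-encoding-model reconstruction on simulated "voxel" data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_channels, n_voxels, n_trials = 9, 50, 180
      centers = np.arange(0, 180, 180 // n_channels)                # channel centres (deg)
      orientations = rng.integers(0, 180, n_trials)                 # trial orientations (deg)

      def channel_responses(oris):
          """Idealised tuning: half-wave-rectified cosine raised to a power."""
          d = np.deg2rad(oris[:, None] - centers[None, :])
          return np.maximum(np.cos(2 * d), 0.0) ** 5                # trials x channels

      C = channel_responses(orientations)
      W_true = rng.normal(size=(n_channels, n_voxels))              # unknown voxel weights
      B = C @ W_true + 0.5 * rng.normal(size=(n_trials, n_voxels))  # simulated patterns

      train, test = slice(0, 120), slice(120, None)
      W_hat, *_ = np.linalg.lstsq(C[train], B[train], rcond=None)   # 1) estimate weights
      C_hat = B[test] @ np.linalg.pinv(W_hat)                       # 2) invert on test data

      # Re-centre each reconstructed profile on the presented orientation and average.
      def nearest_channel(o):
          return int(np.argmin(np.abs((centers - o + 90) % 180 - 90)))

      profile = np.mean([np.roll(C_hat[i], n_channels // 2 - nearest_channel(o))
                         for i, o in enumerate(orientations[test])], axis=0)
      print(np.round(profile, 2))    # should peak at the central channel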

  15. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    Science.gov (United States)

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Reduction in the retinotopic early visual cortex with normal aging and magnitude of perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka

    2015-01-01

    Although normal aging is known to reduce cortical structures globally, the effects of aging on local structures and functions of early visual cortex are less understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects areal size of the early visual cortex, which were retinotopically localized, and whether those morphologic measures were associated with individual performance on visual perceptual learning. First, significant age-associated reduction was found in the areal size of V1, V2, and V3. Second, individual ability of visual perceptual learning was significantly correlated with areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and the degree of change may be associated with individual visual plasticity. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Consequences of comorbidity of developmental coordination disorders and learning disabilities for severity and pattern of perceptual-motor dysfunction

    NARCIS (Netherlands)

    Jongmans, MJ; Smits-Engelsman, BCM; Schoemaker, MM

    2003-01-01

    Children with developmental coordination disorder (DCD) have difficulty learning and performing age-appropriate perceptual-motor skills in the absence of diagnosable neurological disorders. Descriptive studies have shown that comorbidity of DCD exists with attention-deficit/hyperactivity disorder

  18. Profiles of Types of Central Auditory Processing Disorders in Children with Learning Disabilities.

    Science.gov (United States)

    Musiek, Frank E.; And Others

    1985-01-01

    The article profiles five cases of children (8-17 years old) with learning disabilities and auditory processing problems. Possible correlations between the presumed etiology and the unique audiological pattern on the central test battery are analyzed. (Author/CL)

  19. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    OpenAIRE

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binoc...

  20. Transfer of Perceptual Learning of Depth Discrimination Between Local and Global Stereograms

    OpenAIRE

    Gantz, Liat; Bedell, Harold

    2010-01-01

    Several previous studies reported differences when stereothresholds are assessed with local-contour stereograms vs. complex random-dot stereograms (RDSs). Dissimilar thresholds may be due to differences in the properties of the stereograms (e.g., spatial frequency content, contrast, inter-element separation, area) or to different underlying processing mechanisms. This study examined the transfer of perceptual learning of depth discrimination between local and global RDSs with similar properti...

  1. Perceptual learning rules based on reinforcers and attention

    NARCIS (Netherlands)

    Roelfsema, Pieter R.; van Ooyen, Arjen; Watanabe, Takeo

    2010-01-01

    How does the brain learn those visual features that are relevant for behavior? In this article, we focus on two factors that guide plasticity of visual representations. First, reinforcers cause the global release of diffusive neuromodulatory signals that gate plasticity. Second, attentional feedback

  2. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  3. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat. PMID:25076874

  4. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  5. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    Science.gov (United States)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high-quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
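    A minimal sketch of the prediction step, assuming a support-vector regressor over five display parameters scored with RMSE and a rank correlation; the feature columns and ratings below are synthetic stand-ins for illustration, not the study's measurements or subjective data.

      # Predict a quality rating from display parameters with an SVR.
      import numpy as np
      from scipy.stats import spearmanr
      from sklearn.metrics import mean_squared_error
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      n = 200
      # Stand-ins for the five parameters: max luminance, min luminance,
      # gamut coverage, bit depth, local contrast.
      X = np.column_stack([
          rng.uniform(100, 4000, n),
          rng.uniform(0.0005, 0.5, n),
          rng.uniform(0.7, 1.0, n),
          rng.integers(8, 13, n),
          rng.uniform(1e3, 1e5, n),
      ])
      # Toy ground truth: quality grows with log dynamic range and gamut.
      y = np.log10(X[:, 0] / X[:, 1]) + 3 * X[:, 2] + 0.1 * rng.normal(size=n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
      model.fit(X_tr, y_tr)
      pred = model.predict(X_te)

      rmse = np.sqrt(mean_squared_error(y_te, pred))
      rho, _ = spearmanr(y_te, pred)
      print(f"RMSE = {rmse:.3f}, Spearman rho = {rho:.3f}")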

  6. Influence of cue word perceptual information on metamemory accuracy in judgement of learning.

    Science.gov (United States)

    Hu, Xiao; Liu, Zhaomin; Li, Tongtong; Luo, Liang

    2016-01-01

    Previous studies have suggested that perceptual information regarding to-be-remembered words in the study phase affects the accuracy of judgement of learning (JOL). However, few have investigated whether the perceptual information in the JOL phase influences JOL accuracy. This study examined the influence of cue word perceptual information in the JOL phase on immediate and delayed JOL accuracy through changes in cue word font size. In Experiment 1, large-cue word pairs had significantly higher mean JOL magnitude than small-cue word pairs in immediate JOLs and higher relative accuracy than small-cue pairs in delayed JOLs, but font size had no influence on recall performance. Experiment 2 increased the JOL time, and mean JOL magnitude did not reliably differ for large-cue compared with small-cue pairs in immediate JOLs. However, the influence on relative accuracy still existed in delayed JOLs. Experiment 3 increased the familiarity of small-cue words in the delayed JOL phase by adding a lexical decision task. The results indicated that cue word font size no longer affected relative accuracy in delayed JOLs. The three experiments in our study indicated that the perceptual information regarding cue words in the JOL phase affects immediate and delayed JOLs in different ways.

  7. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity...

  8. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  9. The effect of normal aging and age-related macular degeneration on perceptual learning.

    Science.gov (United States)

    Astle, Andrew T; Blighe, Alan J; Webb, Ben S; McGraw, Paul V

    2015-01-01

    We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants having poorer performance. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of number of trials completed, patients with AMD learned at an equivalent rate as age-matched participants with normal vision. Improvements in word identification speed were maintained at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages and these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach.

  10. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    Science.gov (United States)

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  11. Perceptual learning to reduce sensory eye dominance beyond the focus of top-down visual attention.

    Science.gov (United States)

    Xu, Jingping P; He, Zijiang J; Ooi, Teng Leng

    2012-05-15

    Perceptual learning is an important means for the brain to maintain its agility in a dynamic environment. Top-down focal attention, which selects task-relevant stimuli against competing ones in the background, is known to control and select what is learned in adults. Still unknown is whether the adult brain is able to learn highly visible information beyond the focus of top-down attention. If it is, we should be able to reveal a purely stimulus-driven perceptual learning occurring in functions that are largely determined by the early cortical level, where top-down attention modulation is weak. Such an automatic, stimulus-driven learning mechanism is commonly assumed to operate only in the juvenile brain. We performed perceptual training to reduce sensory eye dominance (SED), a function that taps on the eye-of-origin information represented in the early visual cortex. Two retinal locations were simultaneously stimulated with suprathreshold, dichoptic orthogonal gratings. At each location, monocular cueing triggered perception of the grating images of the weak eye and suppression of the strong eye. Observers attended only to one location and performed orientation discrimination of the gratings seen by the weak eye, while ignoring the highly visible gratings at the second, unattended, location. We found SED was not only reduced at the attended location, but also at the unattended location. Furthermore, other untrained visual functions mediated by higher cortical levels improved. An automatic, stimulus-driven learning mechanism causes synaptic alterations in the early cortical level, with a far-reaching impact on the later cortical levels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    Science.gov (United States)

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
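    The plug-and-play idea can be caricatured in a few lines: a fixed feature-extraction front end stands in for HMAX, and a reward-gated delta rule on two "striatal" units stands in for the procedural system of COVIS. Everything in the sketch below (stimuli, features, learning rule, parameters) is an illustrative simplification, not the published HMAX/COVIS implementation.

      # Toy plug-and-play pipeline: fixed visual front end + procedural learner.
      import numpy as np

      rng = np.random.default_rng(2)

      def frontend(image):
          """Stand-in for HMAX: a pooled oriented-energy (gradient) histogram."""
          gy, gx = np.gradient(image.astype(float))
          hist, _ = np.histogram(np.arctan2(gy, gx).ravel(), bins=8,
                                 range=(-np.pi, np.pi), weights=np.hypot(gx, gy).ravel())
          return hist / (hist.sum() + 1e-9)

      def make_image(category):
          """Toy stimuli: noisy gratings varying along rows (0) or columns (1)."""
          x = np.linspace(0, 4 * np.pi, 32)
          grating = np.sin(x)[None, :] if category else np.sin(x)[:, None]
          return np.broadcast_to(grating, (32, 32)) + 0.3 * rng.normal(size=(32, 32))

      W = np.zeros((2, 8))       # weights from features to two "striatal" units
      lr = 0.5
      for trial in range(300):
          cat = int(rng.integers(0, 2))
          f = frontend(make_image(cat))
          response = int(np.argmax(W @ f + 1e-3 * rng.normal(size=2)))
          reward = 1.0 if response == cat else 0.0
          # Reward-gated delta rule: strengthen the chosen unit after correct
          # trials, weaken it after errors (crude proxy for dopamine gating).
          W[response] += lr * (reward - 0.5) * f

      test = [int(np.argmax(W @ frontend(make_image(int(c))))) == c
              for c in rng.integers(0, 2, 50)]
      print(f"test accuracy after training: {np.mean(test):.2f}")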

  13. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    International Nuclear Information System (INIS)

    Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong; Xiong Xiaoping; Merchant, Thomas E.

    2010-01-01

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.

  14. Learning of perceptual grouping for object segmentation on RGB-D data.

    Science.gov (United States)

    Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus

    2014-01-01

    Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision and received a great impulse with the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images where data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch-pairs are calculated, which we derive from perceptual grouping principles, and support vector machine classification is employed to learn Perceptual Grouping. Finally, we show that object hypotheses generation with Graph-Cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects, even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and also compared with state-of-the-art work on object segmentation.
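    The learning step described above can be sketched generically: each pair of surface patches is summarised by a few relation features, and an SVM learns to predict whether the two patches belong to the same object. The relation features, synthetic patches and parameters below are illustrative assumptions, and the Graph-Cut hypothesis generation stage is omitted.

      # Learn "same object" decisions for patch pairs with an SVM.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)

      def pair_features(a, b):
          """Relation features: colour difference, normal angle, centroid distance."""
          color_diff = np.linalg.norm(a["color"] - b["color"])
          normal_angle = np.arccos(np.clip(np.dot(a["normal"], b["normal"]), -1.0, 1.0))
          centroid_dist = np.linalg.norm(a["centroid"] - b["centroid"])
          return np.array([color_diff, normal_angle, centroid_dist])

      def random_patch(obj):
          """Sample a surface patch near a given object's colour and position."""
          n = rng.normal(size=3)
          return {"color": obj["color"] + 0.05 * rng.normal(size=3),
                  "normal": n / np.linalg.norm(n),
                  "centroid": obj["centroid"] + rng.normal(scale=0.02, size=3)}

      X, y = [], []
      for _ in range(400):
          obj1 = {"color": rng.uniform(0, 1, 3), "centroid": rng.uniform(0, 1, 3)}
          obj2 = {"color": rng.uniform(0, 1, 3), "centroid": rng.uniform(0, 1, 3)}
          same = int(rng.integers(0, 2))
          a, b = random_patch(obj1), random_patch(obj1 if same else obj2)
          X.append(pair_features(a, b))
          y.append(same)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(np.array(X), np.array(y))
      print("training accuracy:", clf.score(np.array(X), np.array(y)))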

  15. Polarity-Specific Transcranial Direct Current Stimulation Disrupts Auditory Pitch Learning

    Directory of Open Access Journals (Sweden)

    Reiko eMatsushita

    2015-05-01

    Full Text Available Transcranial direct current stimulation (tDCS) is attracting increasing interest because of its potential for therapeutic use. While its effects have been investigated mainly with motor and visual tasks, less is known in the auditory domain. Past tDCS studies with auditory tasks demonstrated various behavioural outcomes, possibly due to differences in stimulation parameters or task measurements used in each study. Further research using well-validated tasks is therefore required for clarification of behavioural effects of tDCS on the auditory system. Here, we took advantage of findings from a prior functional magnetic resonance imaging study, which demonstrated that the right auditory cortex is modulated during fine-grained pitch learning of microtonal melodic patterns. Targeting the right auditory cortex with tDCS using this same task thus allowed us to test the hypothesis that this region is causally involved in pitch learning. Participants in the current study were trained for three days, and pitch discrimination thresholds for microtonal melodies were measured each day using a psychophysical staircase procedure. We administered anodal, cathodal, or sham tDCS to three groups of participants over the right auditory cortex on the second day of training during performance of the task. Both the sham and the cathodal groups showed the expected significant learning effect (decreased pitch thresholds over the three days of training); in contrast, we observed a blocking effect of anodal tDCS on auditory pitch learning, such that this group showed no significant change in thresholds over the three days. The results support a causal role for the right auditory cortex in pitch discrimination learning.
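    Pitch-discrimination thresholds of this kind are typically tracked with an adaptive staircase. The sketch below shows a generic 2-down/1-up staircase converging on roughly 71% correct for a simulated listener; the study's actual rules, step sizes and stimuli are not specified here, so every parameter is an assumption.

      # Generic 2-down/1-up adaptive staircase for a pitch-difference threshold.
      import numpy as np

      rng = np.random.default_rng(4)

      def simulated_listener(delta_cents, threshold=20.0, slope=0.15):
          """P(correct) rises from chance (0.5) towards 1 as the difference grows."""
          p = 0.5 + 0.5 / (1.0 + np.exp(-slope * (delta_cents - threshold)))
          return rng.random() < p

      delta, step = 100.0, 10.0            # starting pitch difference (cents) and step
      correct_in_a_row, last_direction, reversals = 0, None, []

      while len(reversals) < 10:
          if simulated_listener(delta):
              correct_in_a_row += 1
              if correct_in_a_row < 2:
                  continue                 # need two correct responses to step down
              correct_in_a_row, direction = 0, "down"
              delta = max(1.0, delta - step)
          else:
              correct_in_a_row, direction = 0, "up"
              delta += step
          if last_direction and direction != last_direction:
              reversals.append(delta)      # record the level at each reversal
              step = max(2.0, step * 0.8)  # shrink the step after each reversal
          last_direction = direction

      print(f"estimated threshold ~ {np.mean(reversals[-6:]):.1f} cents")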

  16. The application of online transcranial random noise stimulation and perceptual learning in the improvement of visual functions in mild myopia.

    Science.gov (United States)

    Camilleri, Rebecca; Pavan, Andrea; Campana, Gianluca

    2016-08-01

    It has recently been demonstrated how perceptual learning, that is an improvement in a sensory/perceptual task upon practice, can be boosted by concurrent high-frequency transcranial random noise stimulation (tRNS). It has also been shown that perceptual learning can generalize and produce an improvement of visual functions in participants with mild refractive defects. By using three different groups of participants (single-blind study), we tested the efficacy of a short training (8 sessions) using a single Gabor contrast-detection task with concurrent hf-tRNS in comparison with the same training with sham stimulation or hf-tRNS with no concurrent training, in improving visual acuity (VA) and contrast sensitivity (CS) of individuals with uncorrected mild myopia. A short training with a contrast detection task is able to improve VA and CS only if coupled with hf-tRNS, whereas no effect on VA and marginal effects on CS are seen with the sole administration of hf-tRNS. Our results support the idea that, by boosting the rate of perceptual learning via the modulation of neuronal plasticity, hf-tRNS can be successfully used to reduce the duration of the perceptual training and/or to increase its efficacy in producing perceptual learning and generalization to improved VA and CS in individuals with uncorrected mild myopia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Differences in perceptual learning transfer as a function of training task.

    Science.gov (United States)

    Green, C Shawn; Kattner, Florian; Siegel, Max H; Kersten, Daniel; Schrater, Paul R

    2015-01-01

    A growing body of research--including results from behavioral psychology, human structural and functional imaging, single-cell recordings in nonhuman primates, and computational modeling--suggests that perceptual learning effects are best understood as a change in the ability of higher-level integration or association areas to read out sensory information in the service of particular decisions. Work in this vein has argued that, depending on the training experience, the "rules" for this read-out can either be applicable to new contexts (thus engendering learning generalization) or can apply only to the exact training context (thus resulting in learning specificity). Here we contrast learning tasks designed to promote either stimulus-specific or stimulus-general rules. Specifically, we compare learning transfer across visual orientation following training on three different tasks: an orientation categorization task (which permits an orientation-specific learning solution), an orientation estimation task (which requires an orientation-general learning solution), and an orientation categorization task in which the relevant category boundary shifts on every trial (which lies somewhere between the two tasks above). While the simple orientation-categorization training task resulted in orientation-specific learning, the estimation and moving categorization tasks resulted in significant orientation learning generalization. The general framework tested here--that task specificity or generality can be predicted via an examination of the optimal learning solution--may be useful in building future training paradigms with certain desired outcomes.

  18. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference-effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  19. Memory Processes in Learning Disability Subtypes of Children Born Preterm

    OpenAIRE

    McCoy, Thomasin E.; Conrad, Amy L.; Richman, Lynn C.; Nopoulos, Peg C.; Bell, Edward F.

    2012-01-01

    The purpose of this study was to evaluate immediate auditory and visual memory processes in learning disability subtypes of 40 children born preterm. Three subgroups of children were examined: (a) primary language disability group (n = 13), (b) perceptual-motor disability group (n = 14), and (c) no learning disability diagnosis group without identified language or perceptual-motor learning disability (n = 13). Between-group comparisons indicate no significant differences in immediate auditory...

  20. The role of training structure in perceptual learning of accented speech.

    Science.gov (United States)

    Tzeng, Christina Y; Alexander, Jessica E D; Sidaras, Sabrina K; Nygaard, Lynne C

    2016-11-01

    Foreign-accented speech contains multiple sources of variation that listeners learn to accommodate. Extending previous findings showing that exposure to high-variation training facilitates perceptual learning of accented speech, the current study examines to what extent the structure of training materials affects learning. During training, native adult speakers of American English transcribed sentences spoken in English by native Spanish-speaking adults. In Experiment 1, training stimuli were blocked by speaker, sentence, or randomized with respect to speaker and sentence (Variable training). At test, listeners transcribed novel English sentences produced by unfamiliar Spanish-accented speakers. Listeners' transcription accuracy was highest in the Variable condition, suggesting that varying both speaker identity and sentence across training trials enabled listeners to generalize their learning to novel speakers and linguistic content. Experiment 2 assessed the extent to which ordering of training tokens by a single factor, speaker intelligibility, would facilitate speaker-independent accent learning, finding that listeners' test performance did not reliably differ from that in the no-training control condition. Overall, these results suggest that the structure of training exposure, specifically trial-to-trial variation on both speaker's voice and linguistic content, facilitates learning of the systematic properties of accented speech. The current findings suggest a crucial role of training structure in optimizing perceptual learning. Beyond characterizing the types of variation listeners encode in their representations of spoken utterances, theories of spoken language processing should incorporate the role of training structure in learning lawful variation in speech. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Auditory access, language access, and implicit sequence learning in deaf children.

    Science.gov (United States)

    Hall, Matthew L; Eigsti, Inge-Marie; Bortfeld, Heather; Lillo-Martin, Diane

    2018-05-01

    Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]. © 2017 John Wiley & Sons Ltd.

  2. Implicit perceptual-motor skill learning in mild cognitive impairment and Parkinson's disease.

    Science.gov (United States)

    Gobel, Eric W; Blomeke, Kelsey; Zadikoff, Cindy; Simuni, Tanya; Weintraub, Sandra; Reber, Paul J

    2013-05-01

    Implicit skill learning is hypothesized to depend on nondeclarative memory that operates independently of the medial temporal lobe (MTL) memory system and instead depends on corticostriatal circuits between the basal ganglia and cortical areas supporting motor function and planning. Research with the Serial Reaction Time (SRT) task suggests that patients with memory disorders due to MTL damage exhibit normal implicit sequence learning. However, reports of intact learning rely on observations of no group differences, leading to speculation as to whether implicit sequence learning is fully intact in these patients. Patients with Parkinson's disease (PD) often exhibit impaired sequence learning, but this impairment is not universally observed. Implicit perceptual-motor sequence learning was examined using the Serial Interception Sequence Learning (SISL) task in patients with amnestic Mild Cognitive Impairment (MCI; n = 11) and patients with PD (n = 15). Sequence learning in SISL is resistant to explicit learning and individually adapted task difficulty controls for baseline performance differences. Patients with MCI exhibited robust sequence learning, equivalent to healthy older adults (n = 20), supporting the hypothesis that the MTL does not contribute to learning in this task. In contrast, the majority of patients with PD exhibited no sequence-specific learning in spite of matched overall task performance. Two patients with PD exhibited performance indicative of an explicit compensatory strategy, suggesting that impaired implicit learning may lead to greater reliance on explicit memory in some individuals. The differences in learning between patient groups provide strong evidence in favor of implicit sequence learning depending solely on intact basal ganglia function with no contribution from the MTL memory system.

  3. A novel perceptual discrimination training task: Reducing fear overgeneralization in the context of fear learning.

    Science.gov (United States)

    Ginat-Frolich, Rivkah; Klein, Zohar; Katz, Omer; Shechner, Tomer

    2017-06-01

    Generalization is an adaptive learning mechanism, but it can be maladaptive when it occurs in excess. A novel perceptual discrimination training task was therefore designed to moderate fear overgeneralization. We hypothesized that improvement in basic perceptual discrimination would translate into lower fear overgeneralization in affective cues. Seventy adults completed a fear-conditioning task prior to being allocated into training or placebo groups. Predesignated geometric shape pairs were constructed for the training task. A target shape from each pair was presented. Thereafter, participants in the training group were shown both shapes and asked to identify the image that differed from the target. Placebo task participants only indicated the location of each shape on the screen. All participants then viewed new geometric pairs and indicated whether they were identical or different. Finally, participants completed a fear generalization test consisting of perceptual morphs ranging from the CS+ to the CS-. Fear-conditioning was observed through physiological and behavioural measures. Furthermore, the training group performed better than the placebo group on the assessment task and exhibited decreased fear generalization in response to threat/safety cues. The findings offer evidence for the effectiveness of the novel discrimination training task, setting the stage for future research with clinical populations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Functional consequences of experience-dependent plasticity on tactile perception following perceptual learning.

    Science.gov (United States)

    Trzcinski, Natalie K; Gomez-Ramirez, Manuel; Hsiao, Steven S

    2016-09-01

    Continuous training enhances perceptual discrimination and promotes neural changes in areas encoding the experienced stimuli. This type of experience-dependent plasticity has been demonstrated in several sensory and motor systems. Particularly, non-human primates trained to detect consecutive tactile bar indentations across multiple digits showed expanded excitatory receptive fields (RFs) in somatosensory cortex. However, the perceptual implications of these anatomical changes remain undetermined. Here, we trained human participants for 9 days on a tactile task that promoted expansion of multi-digit RFs. Participants were required to detect consecutive indentations of bar stimuli spanning multiple digits. Throughout the training regime we tracked participants' discrimination thresholds on spatial (grating orientation) and temporal tasks on the trained and untrained hands in separate sessions. We hypothesized that training on the multi-digit task would decrease perceptual thresholds on tasks that require stimulus processing across multiple digits, while also increasing thresholds on tasks requiring discrimination on single digits. We observed an increase in orientation thresholds on a single digit. Importantly, this effect was selective for the stimulus orientation and hand used during multi-digit training. We also found that temporal acuity between digits improved across trained digits, suggesting that discriminating the temporal order of multi-digit stimuli can transfer to temporal discrimination of other tactile stimuli. These results suggest that experience-dependent plasticity following perceptual learning improves and interferes with tactile abilities in manners predictive of the task and stimulus features used during training. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Effect of tDCS on task relevant and irrelevant perceptual learning of complex objects.

    Science.gov (United States)

    Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies

    2016-01-01

    During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning is dependent on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm and an orientation judgment task. During both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between the tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at LO. Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task, and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements in performance compared to participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.

  6. Auditory-perceptual speech analysis in children with cerebellar tumours: a long-term follow-up study.

    Science.gov (United States)

    De Smet, Hyo Jung; Catsman-Berrevoets, Coriene; Aarsen, Femke; Verhoeven, Jo; Mariën, Peter; Paquier, Philippe F

    2012-09-01

    Mutism and Subsequent Dysarthria (MSD) and the Posterior Fossa Syndrome (PFS) have become well-recognized clinical entities which may develop after resection of cerebellar tumours. However, speech characteristics following a period of mutism have not been documented in much detail. This study carried out a perceptual speech analysis in 24 children and adolescents (of whom 12 became mute in the immediate postoperative phase) 1-12.2 years after cerebellar tumour resection. The most prominent speech deficits in this study were distorted vowels, slow rate, voice tremor, and monopitch. Factors influencing long-term speech disturbances are presence or absence of postoperative PFS, the localisation of the surgical lesion and the type of adjuvant treatment. Long-term speech deficits may be present up to 12 years post-surgery. The speech deficits found in children and adolescents with cerebellar lesions following cerebellar tumour surgery do not necessarily resemble adult speech characteristics of ataxic dysarthria. Copyright © 2012 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  7. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    Science.gov (United States)

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  8. Perceptual Learning in Children With Infantile Nystagmus: Effects on Visual Performance.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2016-08-01

    To evaluate whether computerized training with a crowded or uncrowded letter-discrimination task reduces visual impairment (VI) in 6- to 11-year-old children with infantile nystagmus (IN) who suffer from increased foveal crowding, reduced visual acuity, and reduced stereopsis. Thirty-six children with IN were included. Eighteen had idiopathic IN and 18 had oculocutaneous albinism. These children were divided into two training groups matched on age and diagnosis: a crowded training group (n = 18) and an uncrowded training group (n = 18). Training took place twice per week for 5 weeks (3500 trials per training). Eleven age-matched children with normal vision were included to assess baseline differences in task performance and test-retest learning. Main outcome measures were task-specific performance, distance and near visual acuity (DVA and NVA), intensity and extent of (foveal) crowding at 5 m and 40 cm, and stereopsis. Training resulted in task-specific improvements. Both training groups also showed uncrowded and crowded DVA improvements (0.10 ± 0.02 and 0.11 ± 0.02 logMAR) and improved stereopsis (670 ± 249″). Crowded NVA improved only in the crowded training group (0.15 ± 0.02 logMAR), which was also the only group showing a reduction in near crowding intensity (0.08 ± 0.03 logMAR). Effects were not due to test-retest learning. Perceptual learning with or without distractors reduces the extent of crowding and improves visual acuity in children with IN. Training with distractors improves near vision more than training with single optotypes. Perceptual learning also transfers to DVA and NVA under uncrowded and crowded conditions, and even to stereopsis. Learning curves indicated that improvements may be larger after longer training.

  9. Alpha-gamma phase amplitude coupling subserves information transfer during perceptual sequence learning.

    Science.gov (United States)

    Tzvi, Elinor; Bauhaus, Leon J; Kessler, Till U; Liebrand, Matthias; Wöstmann, Malte; Krämer, Ulrike M

    2018-03-01

    Cross-frequency coupling is suggested to serve transfer of information between widespread neuronal assemblies and has been shown to underlie many cognitive functions including learning and memory. In previous work, we found that alpha (8-13 Hz) - gamma (30-48 Hz) phase amplitude coupling (αγPAC) is decreased during sequence learning in bilateral frontal cortex and right parietal cortex. We interpreted this to reflect decreased demands for visuo-motor mapping once the sequence has been encoded. In the present study, we put this hypothesis to the test by adding a "simple" condition to the standard serial reaction time task (SRTT) with minimal needs for visuo-motor mapping. The standard SRTT in our paradigm entailed a perceptual sequence allowing for implicit learning of a sequence of colors with randomly assigned motor responses. Sequence learning in this case was thus not associated with reduced demands for visuo-motor mapping. Analysis of oscillatory power revealed a learning-related alpha decrease pointing to a stronger recruitment of occipito-parietal areas when encoding the perceptual sequence. Replicating our previous findings but in contrast to our hypothesis, αγPAC was decreased in sequence compared to random trials over right frontal and parietal cortex. It also tended to be smaller compared to trials requiring a simple motor sequence. We additionally analyzed αγPAC in resting-state data of a separate cohort. PAC in electrodes over right parietal cortex was significantly stronger compared to sequence trials and tended to be higher compared to simple and random trials of the SRTT data. We suggest that αγPAC in right parietal cortex reflects a "default-mode" brain state, which gets perturbed to allow for encoding of visual regularities into memory. Copyright © 2018 Elsevier Inc. All rights reserved.
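
    As an illustration of the kind of coupling measure described in this record, the sketch below computes an alpha-gamma phase-amplitude coupling value for a single EEG channel using a Hilbert-transform, mean-vector-length modulation index. The filter bands, sampling rate, and simulated signal are assumptions for demonstration only, not the study's actual analysis pipeline.

```python
# Hypothetical sketch of alpha-gamma phase-amplitude coupling (PAC) on one
# EEG channel, using a mean-vector-length modulation index. Band edges and
# sampling rate are illustrative, not those of the study.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def alpha_gamma_pac(eeg, fs):
    """Modulation index coupling alpha (8-13 Hz) phase to gamma (30-48 Hz) amplitude."""
    alpha_phase = np.angle(hilbert(bandpass(eeg, 8, 13, fs)))   # instantaneous alpha phase
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 48, fs)))      # gamma amplitude envelope
    return np.abs(np.mean(gamma_amp * np.exp(1j * alpha_phase)))

if __name__ == "__main__":
    fs = 500.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    alpha = np.sin(2 * np.pi * 10 * t)           # 10 Hz alpha-band carrier
    # Gamma whose amplitude follows the alpha waveform -> nonzero PAC expected
    gamma = (1 + alpha) * np.sin(2 * np.pi * 40 * t)
    print(alpha_gamma_pac(alpha + 0.5 * gamma, fs))
```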

  10. The Effect of Semantic Mapping as a Vocabulary Instruction Technique on EFL Learners with Different Perceptual Learning Styles

    Directory of Open Access Journals (Sweden)

    Esmaeel Abdollahzadeh

    2009-05-01

    Full Text Available Traditional and modern vocabulary instruction techniques have been introduced in the past few decades to improve the learners’ performance in reading comprehension. Semantic mapping, which entails drawing learners’ attention to the interrelationships among lexical items through graphic organizers, is claimed to enhance vocabulary learning significantly. However, whether this technique suits all types of learners has not been adequately investigated. This study examines the effectiveness of employing semantic mapping versus traditional approaches in vocabulary instruction to EFL learners with different perceptual modalities. A modified version of Reid’s (1987) perceptual learning style questionnaire was used to determine the learners’ modality types. The results indicate that semantic mapping in comparison to the traditional approaches significantly enhances vocabulary learning of EFL learners. However, although visual learners slightly outperformed other types of learners on the post-test, no significant differences were observed among intermediate learners with different perceptual modalities employing semantic mapping for vocabulary practice.

  11. Perceptual context and individual differences in the language proficiency of preschool children.

    Science.gov (United States)

    Banai, Karen; Yifat, Rachel

    2016-02-01

    Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Thalamocortical dynamics of the McCollough effect: boundary-surface alignment through perceptual learning.

    Science.gov (United States)

    Grossberg, Stephen; Hwang, Seungwoo; Mingolla, Ennio

    2002-05-01

    This article further develops the FACADE neural model of 3-D vision and figure-ground perception to quantitatively explain properties of the McCollough effect (ME). The model proposes that many ME data result from visual system mechanisms whose primary function is to adaptively align, through learning, boundary and surface representations that are positionally shifted due to the process of binocular fusion. For example, binocular boundary representations are shifted by binocular fusion relative to monocular surface representations, yet the boundaries must become positionally aligned with the surfaces to control binocular surface capture and filling-in. The model also includes perceptual reset mechanisms that use habituative transmitters in opponent processing circuits. Thus the model shows how ME data may arise from a combination of mechanisms that have a clear functional role in biological vision. Simulation results with a single set of parameters quantitatively fit data from 13 experiments that probe the nature of achromatic/chromatic and monocular/binocular interactions during induction of the ME. The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites. In particular, it explains the anomalous ME utilizing these multiple processing sites. Alternative models of the ME are also summarized and compared with the present model.

  13. Localized brain activation related to the strength of auditory learning in a parrot.

    Directory of Open Access Journals (Sweden)

    Hiroko Eda-Fujiwara

    Full Text Available Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory.

  14. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  15. [Improvement of vision through perceptual learning in the case of refractive errors and presbyopia: A critical evaluation].

    Science.gov (United States)

    Heinrich, S P

    2017-02-01

    The idea of compensating or even rectifying refractive errors and presbyopia with the help of vision training is not new. For most approaches, however, scientific evidence is insufficient. A currently promoted method is "perceptual learning", which is assumed to improve stimulus processing in the brain. The basic phenomena of perceptual learning have been demonstrated by a multitude of studies. Some of these specifically address the case of refractive errors and presbyopia. However, many open questions remain, in particular with respect to the transfer of practice effects to everyday vision. At present, the method should therefore be judged with caution.

  16. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
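
    To make the strategy analysis mentioned above concrete, here is a hedged sketch of how a listener's cue weighting could be estimated with logistic regression: each trial's categorization response is regressed on the spectral and duration cue values, and the fitted coefficients index reliance on each cue. The simulated data, cue values, and weights are illustrative assumptions, not the study's data or code.

```python
# Hypothetical logistic-regression readout of listening strategy: large weight
# on the duration cue relative to the spectral cue would indicate a listener
# who has switched strategies after spectral degradation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials = 400
spectral = rng.normal(size=n_trials)   # initially salient, later degraded cue
duration = rng.normal(size=n_trials)   # cue that remains informative

# Simulated listener who relies mostly on the duration cue.
p_resp = 1 / (1 + np.exp(-(0.3 * spectral + 2.0 * duration)))
responses = rng.random(n_trials) < p_resp

model = LogisticRegression().fit(np.column_stack([spectral, duration]), responses)
print("cue weights (spectral, duration):", np.round(model.coef_[0], 2))
```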

  17. Infants Learn Phonotactic Regularities from Brief Auditory Experience.

    Science.gov (United States)

    Chambers, Kyle E.; Onishi, Kristine H.; Fisher, Cynthia

    2003-01-01

    Two experiments investigated whether novel phonotactic regularities, not present in English, could be acquired by 16.5-month-olds from brief auditory experience. Subjects listened to consonant-vowel-consonant syllables in which particular consonants were artificially restricted to either initial or final position. Findings in a subsequent…

  18. Comparative Evaluation of Auditory Attention in 7 to 9 Year Old Learning Disabled Students

    Directory of Open Access Journals (Sweden)

    Fereshteh Amiriani

    2011-06-01

    Full Text Available Background and Aim: Learning disability is a term that refers to a group of disorders manifesting as listening, reading, writing, or mathematical problems. These children mostly have attention difficulties in the classroom that lead to many learning problems. In this study we aimed to compare the auditory attention of 7 to 9 year-old children with learning disability to that of an age-matched group without learning disability. Methods: Twenty-seven male 7 to 9 year-old students with learning disability and 27 age- and sex-matched normal controls were selected with non-probability simple sampling. In order to evaluate auditory selective and divided attention, Farsi versions of the speech-in-noise and dichotic digits tests were used, respectively. Results: Comparison of mean scores on the Farsi version of the speech-in-noise test in both ears of 7 and 8 year-old students in the two groups indicated no significant difference (p>0.05). Mean scores of 9 year-old controls were significantly higher than those of the cases only in the right ear (p=0.033). However, no significant difference was observed between mean scores of the dichotic digits test assessing the right ear of 9 year-old learning disability and non-learning disability students (p>0.05). Moreover, mean scores of 7 and 8 year-old students with learning disability were lower than those of their normal peers in the left ear (p>0.05). Conclusion: Selective auditory attention is not affected at the optimal signal-to-noise ratio, while divided attention seems to be affected by maturational delay of the auditory system or central auditory system disorders.

  19. Learning-induced uncertainty reduction in perceptual decisions is task-dependent

    Directory of Open Access Journals (Sweden)

    Feitong Yang

    2014-05-01

    Full Text Available Perceptual decision making in which decisions are reached primarily from extracting and evaluating sensory information requires close interactions between the sensory system and decision-related networks in the brain. Uncertainty pervades every aspect of this process and can be considered related to either the stimulus signal or decision criterion. Here, we investigated the learning-induced reduction of both the signal and criterion uncertainty in two perceptual decision tasks based on two Glass pattern stimulus sets. This was achieved by manipulating spiral angle and signal level of radial and concentric Glass patterns. The behavioral results showed that the participants trained with a task based on criterion comparison improved their categorization accuracy for both tasks, whereas the participants who were trained on a task based on signal detection improved their categorization accuracy only on their trained task. We fitted the behavioral data with a computational model that can dissociate the contribution of the signal and criterion uncertainties. The modeling results indicated that the participants trained on the criterion comparison task reduced both the criterion and signal uncertainty. By contrast, the participants who were trained on the signal detection task only reduced their signal uncertainty after training. Our results suggest that the signal uncertainty can be resolved by training participants to extract signals from noisy environments and to discriminate between clear signals, which are evidenced by reduced perception variance after both training procedures. Conversely, the criterion uncertainty can only be resolved by the training of fine discrimination. These findings demonstrate that uncertainty in perceptual decision-making can be reduced with training but that the reduction of different types of uncertainty is task-dependent.
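
    The following toy model illustrates, under stated assumptions, how signal and criterion uncertainty jointly shape a categorization psychometric function: the two noise sources add in variance, so reducing either one (as training is argued to do) steepens performance. This is a minimal sketch for intuition, not the authors' fitted model; all parameter values are made up.

```python
# Toy psychometric model with separate signal-noise and criterion-noise terms.
import numpy as np
from scipy.stats import norm

def p_category_A(stimulus, criterion=0.0, sigma_signal=1.0, sigma_criterion=1.0):
    """Probability of categorizing a stimulus value as category 'A'.

    Signal and criterion noise add in variance; shrinking either term
    steepens the psychometric function around the criterion.
    """
    total_sd = np.sqrt(sigma_signal**2 + sigma_criterion**2)
    return norm.cdf((stimulus - criterion) / total_sd)

spiral_angles = np.linspace(-3, 3, 7)   # arbitrary stimulus axis (e.g., spiral angle)
before = p_category_A(spiral_angles, sigma_signal=1.5, sigma_criterion=1.5)
after = p_category_A(spiral_angles, sigma_signal=0.8, sigma_criterion=1.5)
print(np.round(before, 2))   # shallower curve: high uncertainty
print(np.round(after, 2))    # steeper curve: signal uncertainty reduced by training
```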

  20. Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate in finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcraft features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.
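
    As a rough illustration of the linear-regression variant described above, the sketch below regresses a just-noticeable-distortion threshold on a few handcrafted block features plus the quantization step size. The feature set, synthetic training targets, and block size are assumptions for demonstration; this is not the paper's LR-JNQD implementation.

```python
# Hypothetical linear-regression JNQD-style predictor: handcrafted block
# features and the quantization step are regressed onto a JND threshold.
import numpy as np

def block_features(block, q_step):
    """Simple handcrafted features for an 8x8 luminance block (illustrative)."""
    return np.array([
        1.0,            # bias term
        block.mean(),   # local luminance
        block.std(),    # local contrast / masking proxy
        float(q_step),  # quantization step size
    ])

# Assumed training set: blocks paired with (synthetic) JND-style targets.
rng = np.random.default_rng(0)
blocks = rng.integers(0, 256, size=(200, 8, 8)).astype(float)
q_steps = rng.choice([8, 16, 24, 32], size=200)
jnd_targets = 0.5 * q_steps + 0.1 * blocks.std(axis=(1, 2)) + rng.normal(0, 1, 200)

X = np.stack([block_features(b, q) for b, q in zip(blocks, q_steps)])
weights, *_ = np.linalg.lstsq(X, jnd_targets, rcond=None)

# Predict the tolerated quantization distortion for a new block.
new_block = rng.integers(0, 256, size=(8, 8)).astype(float)
print(block_features(new_block, q_step=16) @ weights)
```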

  1. Analysis of previous perceptual and motor experience in breaststroke kick learning

    Directory of Open Access Journals (Sweden)

    Ried Bettina

    2015-12-01

    Full Text Available One of the variables that influence motor learning is the learner’s previous experience, which may provide perceptual and motor elements to be transferred to a novel motor skill. For swimming skills, several motor experiences may prove effective. Purpose. The aim was to analyse the influence of previous experience in playing in water, swimming lessons, and music or dance lessons on learning the breaststroke kick. Methods. The study involved 39 Physical Education students possessing basic swimming skills, but not the breaststroke, who performed 400 acquisition trials followed by 50 retention and 50 transfer trials, during which stroke index as well as rhythmic and spatial configuration indices were mapped, and answered a yes/no questionnaire regarding previous experience. Data were analysed by ANOVA (p = 0.05) and the effect size (Cohen’s d), with d ≥ 0.8 indicating a large effect size. Results. The whole sample improved their stroke index and spatial configuration index, but not their rhythmic configuration index. Although differences between groups were not significant, two types of experience showed large practical effects on learning: childhood water playing experience only showed major practically relevant positive effects, and no experience in any of the three fields hampered the learning process. Conclusions. The results point towards diverse impact of previous experience regarding rhythmic activities, swimming lessons, and especially with playing in water during childhood, on learning the breaststroke kick.
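
    For readers unfamiliar with the effect-size criterion used in this record, the short sketch below computes Cohen's d with a pooled standard deviation, where d ≥ 0.8 is conventionally read as a large effect. The example group values are hypothetical, not the study's data.

```python
# Cohen's d with a pooled standard deviation; groups below are made up.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

water_play = [0.71, 0.65, 0.80, 0.74, 0.69]      # hypothetical stroke indices
no_experience = [0.55, 0.60, 0.52, 0.58, 0.61]
print(round(cohens_d(water_play, no_experience), 2))   # > 0.8 -> large effect
```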

  2. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise Lecaignard

    2015-09-01

    Full Text Available Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulations by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities.

  3. The benefits of cholinergic enhancement during perceptual learning are long-lasting

    Directory of Open Access Journals (Sweden)

    Ariel Rokem

    2013-05-01

    Full Text Available The neurotransmitter acetylcholine (ACh) regulates many aspects of cognition, including attention and memory. Previous research in animal models has shown that plasticity in sensory systems often depends on the behavioral relevance of a stimulus and/or task. However, experimentally increasing ACh release in the cortex can result in experience-dependent plasticity, even in the absence of behavioral relevance. In humans, the pharmacological enhancement of ACh transmission by administration of the cholinesterase inhibitor donepezil during performance of a perceptual task increases the magnitude of perceptual learning (PL) and its specificity to physical parameters of the stimuli used for training. Behavioral effects of PL have previously been shown to persist for many months. In the present study, we tested whether enhancement of PL by donepezil is also long-lasting. Healthy human subjects were trained on a motion direction discrimination task during cholinergic enhancement, and follow-up testing was performed 5-15 months after the end of training and without additional drug administration. Increases in performance associated with training under donepezil were evident in follow-up retesting, indicating that cholinergic enhancement has beneficial long-term effects on PL. These findings suggest that cholinergic enhancement of training procedures used to treat clinical disorders should improve long-term outcomes of these procedures.

  4. Precise auditory-vocal mirroring in neurons for learned vocal communication.

    Science.gov (United States)

    Prather, J F; Peters, S; Nowicki, S; Mooney, R

    2008-01-17

    Brain mechanisms for communication must establish a correspondence between sensory and motor codes used to represent the signal. One idea is that this correspondence is established at the level of single neurons that are active when the individual performs a particular gesture or observes a similar gesture performed by another individual. Although neurons that display a precise auditory-vocal correspondence could facilitate vocal communication, they have yet to be identified. Here we report that a certain class of neurons in the swamp sparrow forebrain displays a precise auditory-vocal correspondence. We show that these neurons respond in a temporally precise fashion to auditory presentation of certain note sequences in this songbird's repertoire and to similar note sequences in other birds' songs. These neurons display nearly identical patterns of activity when the bird sings the same sequence, and disrupting auditory feedback does not alter this singing-related activity, indicating it is motor in nature. Furthermore, these neurons innervate striatal structures important for song learning, raising the possibility that singing-related activity in these cells is compared to auditory feedback to guide vocal learning.

  5. Learning Auditory Discrimination with Computer-Assisted Instruction: A Comparison of Two Different Performance Objectives.

    Science.gov (United States)

    Steinhaus, Kurt A.

    A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…

  6. Perceptual learning increases the strength of the earliest signals in visual cortex.

    Science.gov (United States)

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  7. Effects of Semantic Context and Feedback on Perceptual Learning of Speech Processed through an Acoustic Simulation of a Cochlear Implant

    Science.gov (United States)

    Loebach, Jeremy L.; Pisoni, David B.; Svirsky, Mario A.

    2010-01-01

    The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic…

  8. Visual perceptual learning by operant conditioning training follows rules of contingency

    Science.gov (United States)

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning. PMID:26028984

  9. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binocular summation tasks: (1) binocular phase combination and (2) dichoptic global motion coherence before and after monocular training to investigate this question. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed with the area under log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  10. A perceptual advantage for onomatopoeia in early word learning: Evidence from eye-tracking.

    Science.gov (United States)

    Laing, Catherine E

    2017-09-01

    A perceptual advantage for iconic forms in infant language learning has been widely reported in the literature, termed the "sound symbolism bootstrapping hypothesis" by Imai and Kita (2014). However, empirical research in this area is limited mainly to sound symbolic forms, which are very common in languages such as Japanese but less so in Indo-European languages such as English. In this study, we extended this body of research to onomatopoeia-words that are thought to be present across most of the world's languages and that are known to be dominant in infants' early lexicons. In a picture-mapping task, 10- and 11-month-old infants showed a processing advantage for onomatopoeia (e.g., woof woof) over their conventional counterparts (e.g., doggie). However, further analysis suggests that the input may play a key role in infants' experience and processing of these forms. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Visual perceptual learning by operant conditioning training follows rules of contingency.

    Science.gov (United States)

    Kim, Dongho; Seitz, Aaron R; Watanabe, Takeo

    2015-01-01

    Visual perceptual learning (VPL) can occur as a result of a repetitive stimulus-reward pairing in the absence of any task. This suggests that rules that guide Conditioning, such as stimulus-reward contingency (e.g. that stimulus predicts the likelihood of reward), may also guide the formation of VPL. To address this question, we trained subjects with an operant conditioning task in which there were contingencies between the response to one of three orientations and the presence of reward. Results showed that VPL only occurred for positive contingencies, but not for neutral or negative contingencies. These results suggest that the formation of VPL is influenced by similar rules that guide the process of Conditioning.

  12. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    Science.gov (United States)

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  13. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Full Text Available Abstract Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects to those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel…

  14. The Effect of Perceptual Learning on L2 Vocabulary Learning and Retention

    OpenAIRE

    BEDİR, Gülay; BEKTAŞ BEDİR, Sevgi

    2018-01-01

    It is thought that learning styles have an effect on learning a foreign language. This study aims to determine the effects of perceptual learning styles on L2 vocabulary learning and retention. Learning style preferences were assessed in the current study through the section of Cohen et al.’s Learning Style Survey (LSS) corresponding to the perceptual modalities, and achievement tests developed by the researcher were used to assess vocabulary learning and retention. An open-ended question was also tried to an…

  15. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory

    Directory of Open Access Journals (Sweden)

    Paul Miller

    2010-06-01

    Full Text Available Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells, does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
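
    To clarify the plasticity rule referred to in this record, here is a minimal pair-based STDP sketch: a synapse is potentiated when a presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponentially decaying time windows. The time constants and amplitudes are generic textbook-style values assumed for illustration, not parameters taken from the model in the paper.

```python
# Minimal pair-based STDP rule: weight change as a function of the
# post-minus-pre spike time difference. All parameters are illustrative.
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    """Weight change for a spike pair separated by delta_t = t_post - t_pre (seconds)."""
    if delta_t >= 0:                                   # pre before post -> potentiation
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)      # post before pre -> depression

# A context cell firing shortly before an item cell at identification is
# strengthened most; weaker or later item activity yields less strengthening.
for dt in [0.005, 0.020, 0.060, -0.010]:
    print(f"dt = {dt * 1000:+.0f} ms -> dw = {stdp_dw(dt):+.5f}")
```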

  16. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words and her moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrast. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  17. Inferior frontal gyrus activation predicts individual differences in perceptual learning of cochlear-implant simulations.

    Science.gov (United States)

    Eisner, Frank; McGettigan, Carolyn; Faulkner, Andrew; Rosen, Stuart; Scott, Sophie K

    2010-05-26

    This study investigated the neural plasticity associated with perceptual learning of a cochlear implant (CI) simulation. Normal-hearing listeners were trained with vocoded and spectrally shifted speech simulating a CI while cortical responses were measured with functional magnetic resonance imaging (fMRI). A condition in which the vocoded speech was spectrally inverted provided a control for learnability and adaptation. Behavioral measures showed considerable individual variability both in the ability to learn to understand the degraded speech, and in phonological working memory capacity. Neurally, left-lateralized regions in superior temporal sulcus and inferior frontal gyrus (IFG) were sensitive to the learnability of the simulations, but only the activity in prefrontal cortex correlated with interindividual variation in intelligibility scores and phonological working memory. A region in left angular gyrus (AG) showed an activation pattern that reflected learning over the course of the experiment, and covariation of activity in AG and IFG was modulated by the learnability of the stimuli. These results suggest that variation in listeners' ability to adjust to vocoded and spectrally shifted speech is partly reflected in differences in the recruitment of higher-level language processes in prefrontal cortex, and that this variability may further depend on functional links between the left inferior frontal gyrus and angular gyrus. Differences in the engagement of left inferior prefrontal cortex, and its covariation with posterior parietal areas, may thus underlie some of the variation in speech perception skills that have been observed in clinical populations of CI users.

  18. Comparison of Auditory/Visual and Visual/Motor Practice on the Spelling Accuracy of Learning Disabled Children.

    Science.gov (United States)

    Aleman, Cheryl; And Others

    1990-01-01

    Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)

  19. Effect of Auditory Constraints on Motor Learning Depends on Stage of Recovery Post Stroke

    Directory of Open Access Journals (Sweden)

    Viswanath eAluru

    2014-06-01

    Full Text Available In order to develop evidence-based rehabilitation protocols post stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in twenty subjects with chronic hemiparesis, and used a bimanual wrist extension task using a custom-made wrist trainer to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post stroke.

  20. Transfer of tactile perceptual learning to untrained neighboring fingers reflects natural use relationships.

    Science.gov (United States)

    Dempsey-Jones, Harriet; Harrar, Vanessa; Oliver, Jonathan; Johansen-Berg, Heidi; Spence, Charles; Makin, Tamar R

    2016-03-01

    Tactile learning transfers from trained to untrained fingers in a pattern that reflects overlap between the representations of fingers in the somatosensory system (e.g., neurons with multifinger receptive fields). While physical proximity on the body is known to determine the topography of somatosensory representations, tactile coactivation is also an established organizing principle of somatosensory topography. In this study we investigated whether tactile coactivation, induced by habitual inter-finger cooperative use (use pattern), shapes inter-finger overlap. To this end, we used psychophysics to compare the transfer of tactile learning from the middle finger to its adjacent fingers. This allowed us to compare transfer to two fingers that are both physically and cortically adjacent to the middle finger but have differing use patterns. Specifically, the middle finger is used more frequently with the ring than with the index finger. We predicted this should lead to greater representational overlap between the former than the latter pair. Furthermore, this difference in overlap should be reflected in differential learning transfer from the middle to index vs. ring fingers. Subsequently, we predicted temporary learning-related changes in the middle finger's representation (e.g., cortical magnification) would cause transient interference in perceptual thresholds of the ring, but not the index, finger. Supporting this, longitudinal analysis revealed a divergence where learning transfer was fast to the index finger but relatively delayed to the ring finger. Our results support the theory that tactile coactivation patterns between digits affect their topographic relationships. Our findings emphasize how action shapes perception and somatosensory organization. Copyright © 2016 the American Physiological Society.

  1. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  2. Assessing learning preferences of dental students using visual, auditory, reading-writing, and kinesthetic questionnaire

    Directory of Open Access Journals (Sweden)

    Darshana Bennadi

    2015-01-01

    Full Text Available Introduction: Educators of the health care professions (teachers) are committed to preparing future health care providers, but face many challenges in transmitting their ever-expanding knowledge to students. This study was done to focus on the different learning styles of dental students. Aim: To assess different learning preferences among dental students. Materials and Methods: This is a descriptive cross-sectional questionnaire study using the visual, auditory, reading-writing, and kinesthetic questionnaire among dental students. Results: The majority (75.8%) of the students preferred a multimodal learning style. Multimodal learning was common among clinical students. There was no statistically significant difference in learning styles in relation to gender (P > 0.05). Conclusion: In the present study, the majority of students preferred a multimodal learning style. Knowledge of the learning style preferences of different professions can help to enhance teaching methods for students.

  3. Optimization of perceptual learning: effects of task difficulty and external noise in older adults.

    Science.gov (United States)

    DeLoss, Denton J; Watanabe, Takeo; Andersen, George J

    2014-06-01

    Previous research has shown a wide array of age-related declines in vision. The current study examined the effects of perceptual learning (PL), external noise, and task difficulty in fine orientation discrimination with older individuals (mean age 71.73, range 65-91). Thirty-two older subjects participated in seven 1.5-h sessions conducted on separate days over a three-week period. A two-alternative forced choice procedure was used in discriminating the orientation of Gabor patches. Four training groups were examined in which the standard orientations for training were either easy or difficult and included either external noise (additive Gaussian noise) or no external noise. In addition, the transfer to an untrained orientation and noise levels were examined. An analysis of the four groups prior to training indicated no significant differences between the groups. An analysis of the change in performance post-training indicated that the degree of learning was related to task difficulty and the presence of external noise during training. In addition, measurements of pupil diameter indicated that changes in orientation discrimination were not associated with changes in retinal illuminance. These results suggest that task difficulty and training in noise are factors important for optimizing the effects of training among older individuals. Copyright © 2013 Elsevier B.V. All rights reserved.
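
    For illustration, here is a minimal sketch (not taken from the study) of the kind of stimulus such training uses: an oriented Gabor patch with additive Gaussian external noise, as in a two-alternative forced-choice orientation-discrimination trial. The image size, spatial frequency, contrast, and noise standard deviation are assumed values.

    ```python
    # Generate an oriented Gabor patch and add zero-mean Gaussian "external noise".
    # All stimulus parameters here are illustrative assumptions.
    import numpy as np

    def gabor_with_noise(size=128, sf_cycles=4.0, theta_deg=45.0,
                         sigma=0.15, contrast=0.5, noise_sd=0.1, seed=0):
        """Return a size x size array: a Gabor (contrast in [-1, 1]) plus noise."""
        y, x = np.meshgrid(np.linspace(-0.5, 0.5, size),
                           np.linspace(-0.5, 0.5, size), indexing="ij")
        theta = np.deg2rad(theta_deg)
        x_rot = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        carrier = np.cos(2.0 * np.pi * sf_cycles * x_rot)
        noise = np.random.default_rng(seed).normal(0.0, noise_sd, (size, size))
        return contrast * envelope * carrier + noise

    # One 2AFC trial might compare the trained standard orientation with a
    # slightly rotated test, e.g. 45 deg vs. 45 + 1.5 deg.
    standard = gabor_with_noise(theta_deg=45.0)
    test = gabor_with_noise(theta_deg=46.5, seed=1)
    print(standard.shape, test.shape)
    ```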

  4. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    Directory of Open Access Journals (Sweden)

    Cristina F B Murphy

    Full Text Available Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further

  5. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    Science.gov (United States)

    Murphy, Cristina F B; Moore, David R; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research

  6. Real-Time Strategy Video Game Experience and Visual Perceptual Learning.

    Science.gov (United States)

    Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo

    2015-07-22

    Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL does not rely exclusively on visual areas. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience

  7. Baseline performance and learning rate of conceptual and perceptual skill-learning tasks: the effect of moderate to severe traumatic brain injury.

    Science.gov (United States)

    Vakil, Eli; Lev-Ran Galon, Carmit

    2014-01-01

    Existing literature presents a complex and inconsistent picture of the specific deficiencies involved in skill learning following traumatic brain injury (TBI). In an attempt to address this difficulty, individuals with moderate to severe TBI (n = 29) and a control group (n = 29) were tested with two different skill-learning tasks: conceptual (i.e., Tower of Hanoi Puzzle, TOHP) and perceptual (i.e., mirror reading, MR). Based on previous studies of the effect of divided attention on these tasks and findings regarding the effect of TBI on conceptual and perceptual priming tasks, it was predicted that the group with TBI would show impaired baseline performance compared to controls in the TOHP task though their learning rate would be maintained, while both baseline performance and learning rate on the MR task would be maintained. Consistent with our predictions, overall baseline performance of the group with TBI was impaired in the TOHP test, while the learning rate was not. The learning rate on the MR task was preserved but, contrary to our prediction, response time of the group with TBI was slower than that of controls. The pattern of results observed in the present study was interpreted to possibly reflect an impairment of both the frontal lobes as well as that of diffuse axonal injury, which is well documented as being affected by TBI. The former impairment affects baseline performance of the conceptual learning skill, while the latter affects the overall slower performance of the perceptual learning skill.

  8. Emotional Intelligence among Auditory, Reading, and Kinesthetic Learning Styles of Elementary School Students in Ambon-Indonesia

    Science.gov (United States)

    Leasa, Marleny; Corebima, Aloysius D.; Ibrohim; Suwono, Hadi

    2017-01-01

    Students have unique ways in managing the information in their learning process. VARK learning styles associated with memory are considered to have an effect on emotional intelligence. This quasi-experimental research was conducted to compare the emotional intelligence among the students having auditory, reading, and kinesthetic learning styles in…

  9. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex

    OpenAIRE

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and p...

  10. Aesthetic concepts, perceptual learning, and linguistic enculturation: considerations from Wittgenstein, language, and music.

    Science.gov (United States)

    Croom, Adam M

    2012-03-01

    Aesthetic non-cognitivists deny that aesthetic statements express genuinely aesthetic beliefs and instead hold that they work primarily to express something non-cognitive, such as attitudes of approval or disapproval, or desire. Non-cognitivists deny that aesthetic statements express aesthetic beliefs because they deny that there are aesthetic features in the world for aesthetic beliefs to represent. Their assumption, shared by scientists and theorists of mind alike, was that language-users possess cognitive mechanisms with which to objectively grasp abstract rules fixed independently of human responses, and that cognizers are thereby capable of grasping rules for the correct application of aesthetic concepts without relying on evaluation or enculturation. However, in this article I use Wittgenstein's rule-following considerations to argue that psychological theories grounded upon this so-called objective model of rule-following fail to adequately account for concept acquisition and mastery. I argue that this is because linguistic enculturation, and the perceptual learning that's often involved, influences and enables the mastery of aesthetic concepts. I argue that part of what's involved in speaking aesthetically is to belong to a cultural practice of making sense of things aesthetically, and that it's within a socio-linguistic community, and that community's practices, that such aesthetic sense can be made intelligible.

  11. Learning Disabilities and the Auditory and Visual Matching Computer Program

    Science.gov (United States)

    Tormanen, Minna R. K.; Takala, Marjatta; Sajaniemi, Nina

    2008-01-01

    This study examined whether audiovisual computer training without linguistic material had a remedial effect on different learning disabilities, like dyslexia and ADD (Attention Deficit Disorder). This study applied a pre-test-intervention-post-test design with students (N = 62) between the ages of 7 and 19. The computer training lasted eight weeks…

  12. Dopamine modulates memory consolidation of discrimination learning in the auditory cortex.

    Science.gov (United States)

    Schicknick, Horst; Reichenbach, Nicole; Smalla, Karl-Heinz; Scheich, Henning; Gundelfinger, Eckart D; Tischmeyer, Wolfgang

    2012-03-01

    In Mongolian gerbils, the auditory cortex is critical for discriminating rising vs. falling frequency-modulated tones. Based on our previous studies, we hypothesized that dopaminergic inputs to the auditory cortex during and shortly after acquisition of the discrimination strategy control long-term memory formation. To test this hypothesis, we studied frequency-modulated tone discrimination learning of gerbils in a shuttle box GO/NO-GO procedure following differential treatments. (i) Pre-exposure of gerbils to the frequency-modulated tones at 1 day before the first discrimination training session severely impaired the accuracy of the discrimination acquired in that session during the initial trials of a second training session, performed 1 day later. (ii) Local injection of the D1/D5 dopamine receptor antagonist SCH-23390 into the auditory cortex after task acquisition caused a discrimination deficit of similar extent and time course as with pre-exposure. This effect was dependent on the dose and time point of injection. (iii) Injection of the D1/D5 dopamine receptor agonist SKF-38393 into the auditory cortex after retraining caused a further discrimination improvement at the beginning of subsequent sessions. All three treatments, which supposedly interfered with dopamine signalling during conditioning and/or retraining, had a substantial impact on the dynamics of the discrimination performance particularly at the beginning of subsequent training sessions. These findings suggest that auditory-cortical dopamine activity after acquisition of a discrimination of complex sounds and after retrieval of weak frequency-modulated tone discrimination memory further improves memory consolidation, i.e. the correct association of two sounds with their respective GO/NO-GO meaning, in support of future memory recall. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  13. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex.

    Science.gov (United States)

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  14. A mouse model of visual perceptual learning reveals alterations in neuronal coding and dendritic spine density in the visual cortex

    Directory of Open Access Journals (Sweden)

    Yan eWang

    2016-03-01

    Full Text Available Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  15. Word learning in deaf children with cochlear implants: effects of early auditory experience.

    Science.gov (United States)

    Houston, Derek M; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T

    2012-05-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. © 2012 Blackwell Publishing Ltd.

  16. The development of interactive multimedia based on auditory, intellectually, repetition in repetition algorithm learning to increase learning outcome

    Science.gov (United States)

    Munir; Sutarno, H.; Aisyah, N. S.

    2018-05-01

    This research investigates how interactive multimedia based on the auditory, intellectually, repetition learning model can improve student learning outcomes. The interactive multimedia was developed in five stages. The analysis stage included a literature review, questionnaires, interviews and observations. The design phase covered the database design, flowcharts, storyboards and the repetition-algorithm material, while the development phase produced the web-based framework. The presentation of the material follows the auditory, intellectually, repetition learning model: the auditory element is provided by recorded narration of the material, which is presented through a variety of intellectual components. The multimedia product was validated by material and media experts. The implementation phase was conducted with grade XI-TKJ2 at SMKN 1 Garut. Based on the normalized gain index, student learning outcomes improved by 0.46, a gain categorized as fair and attributed to the students' interest in using the interactive multimedia, while the multimedia assessment scored 84.36%, which is categorized as very good.
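
    The 0.46 figure reads like a normalized gain index; the sketch below shows that calculation under the common definition g = (post - pre) / (max - pre). The pre- and post-test scores used here are invented purely for illustration; only the 0.46 gain and its "fair" (medium) category come from the abstract.

    ```python
    # Normalized gain index: g = (post - pre) / (max - pre).
    # Common interpretation: g < 0.3 low, 0.3 <= g <= 0.7 medium/fair, g > 0.7 high.
    # The example scores are hypothetical; only the resulting 0.46 is from the abstract.
    def normalized_gain(pre, post, max_score=100.0):
        return (post - pre) / (max_score - pre)

    pre_score, post_score = 50.0, 73.0          # hypothetical class averages
    g = normalized_gain(pre_score, post_score)
    category = "high" if g > 0.7 else ("medium" if g >= 0.3 else "low")
    print(f"g = {g:.2f} ({category})")          # -> g = 0.46 (medium)
    ```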

  17. Identification of Auditory Object-Specific Attention from Single-Trial Electroencephalogram Signals via Entropy Measures and Machine Learning

    Directory of Open Access Journals (Sweden)

    Yun Lu

    2018-05-01

    Full Text Available Existing research has revealed that auditory attention can be tracked from ongoing electroencephalography (EEG) signals. The aim of this novel study was to investigate the identification of people's attention to a specific auditory object from single-trial EEG signals via entropy measures and machine learning. Approximate entropy (ApEn), sample entropy (SampEn), composite multiscale entropy (CmpMSE) and fuzzy entropy (FuzzyEn) were used to extract the informative features of EEG signals under three kinds of auditory object-specific attention (Rest, Auditory Object1 Attention (AOA1) and Auditory Object2 Attention (AOA2)). Linear discriminant analysis and a support vector machine (SVM) were used to construct two auditory attention classifiers. The statistical results of the entropy measures indicated that there were significant differences in the values of ApEn, SampEn, CmpMSE and FuzzyEn between Rest, AOA1 and AOA2. For the SVM-based auditory attention classifier, the auditory object-specific attention of Rest, AOA1 and AOA2 could be identified from EEG signals using ApEn, SampEn, CmpMSE and FuzzyEn as features, and the identification rates were significantly different from chance level. The optimal identification was achieved by the SVM-based auditory attention classifier using CmpMSE with the scale factor τ = 10. This study demonstrated a novel solution to identify the auditory object-specific attention from single-trial EEG signals without the need to access the auditory stimulus.
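
    As a rough illustration of the feature-plus-classifier pipeline described above, the sketch below computes sample entropy (SampEn) for synthetic single-trial signals and trains an SVM on the resulting features. It is not the authors' code; the embedding dimension m = 2 and tolerance r = 0.2 x SD are common defaults assumed here, and the data are simulated.

    ```python
    # Sample entropy as a single feature per trial, fed to an SVM classifier.
    # The signals are simulated; parameters m and r_factor are assumed defaults.
    import numpy as np
    from sklearn.svm import SVC

    def sample_entropy(x, m=2, r_factor=0.2):
        """SampEn of a 1-D signal: -ln(A/B), where B counts template matches of
        length m and A counts matches of length m+1 (self-matches excluded)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - length)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates - templates[i]), axis=1)
                count += np.sum(dist <= r) - 1      # exclude the self-match
            return count
        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # Toy example: two classes of synthetic "trials" differing in regularity.
    rng = np.random.default_rng(0)
    trials = np.vstack(
        [np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
         for _ in range(20)] +
        [rng.standard_normal(500) for _ in range(20)])
    labels = np.array([0] * 20 + [1] * 20)
    features = np.array([[sample_entropy(t)] for t in trials])
    clf = SVC(kernel="rbf").fit(features, labels)
    print("training accuracy:", clf.score(features, labels))
    ```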

  18. Learning Style Preferences of Southeast Asian Students.

    Science.gov (United States)

    Park, Clara C.

    2000-01-01

    Investigated the perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning of Southeast Asian students compared to white students. Surveys indicated significant differences in learning style preferences between Southeast Asian and white students and between the diverse…

  19. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    Science.gov (United States)

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created

  20. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    Science.gov (United States)

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. Copyright © 2013 Elsevier

  1. Transfer of perceptual learning of depth discrimination between local and global stereograms.

    Science.gov (United States)

    Gantz, Liat; Bedell, Harold E

    2010-08-23

    Several previous studies reported differences when stereothresholds are assessed with local-contour stereograms vs. complex random-dot stereograms (RDSs). Dissimilar thresholds may be due to differences in the properties of the stereograms (e.g. spatial frequency content, contrast, inter-element separation, area) or to different underlying processing mechanisms. This study examined the transfer of perceptual learning of depth discrimination between local and global RDSs with similar properties, and vice versa. If global and local stereograms are processed by separate neural mechanisms, then the magnitude and rate of training for the two types of stimuli are likely to differ, and the transfer of training from one stimulus type to the other should be minimal. Based on previous results, we chose RDSs with element densities of 0.17% and 28.3% to serve as the local and global stereograms, respectively. Fourteen inexperienced subjects with normal binocular vision were randomly assigned to either a local- or global- RDS training group. Stereothresholds for both stimulus types were measured before and after 7700 training trials distributed over 10 sessions. Stereothresholds for the trained condition improve for approximately 3000 trials, by an average of 0.36+/-0.08 for local and 0.29+/-0.10 for global RDSs, and level off thereafter. Neither the rate nor the magnitude of improvement differ statistically between the local- and global-training groups. Further, no significant difference exists in the amount of improvement on the trained vs. the untrained targets for either training group. These results are consistent with the operation of a single mechanism to process both local and global stereograms. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Análise de parâmetros perceptivo-auditivos e acústicos em indivíduos gagos Analysis of acoustic and auditory-perceptual parameters in stutterers

    Directory of Open Access Journals (Sweden)

    Bruna Ferreira Valenzuela de Oliveira

    2009-01-01

    Full Text Available PURPOSE: To analyze auditory-perceptual and acoustic parameters of the voice in adult stutterers. METHODS: Fifteen male stutterers aged 21 to 41 years (mean 26.6 years) were analyzed, all attended at the institution's Speech-Language Pathology Clinical Center between February 2005 and July 2007. The auditory-perceptual parameters analyzed were vocal quality, voice type, resonance, vocal tension, speech rate, pneumophonic coordination, vocal attack and pitch range; the acoustic parameters analyzed were the fundamental frequency and its variability during spontaneous speech. RESULTS: The auditory-perceptual analysis showed that the most frequent characteristics among the stutterers were normal vocal quality (60%), altered resonance (66%), vocal tension (86%), altered vocal attack (73%), normal speech rate (54%), altered pitch range (80%) and altered pneumophonic coordination (100%). However, statistical analysis revealed that only the presence of vocal tension and the alterations in pneumophonic coordination and pitch range were statistically significant in the stutterers studied. In the acoustic analysis, the fundamental frequency ranged from 125.54 to 149.59 Hz, and its variability was 16 to 21 semitones, or 112.50 to 172.40 Hz. CONCLUSION: The auditory-perceptual parameters with significant frequency in the stutterers studied were the presence of vocal tension and alterations of pitch range and pneumophonic coordination. It is therefore important to evaluate the vocal aspects of these patients, since the fluency disorder may compromise some vocal parameters and lead to dysphonia.

  3. WISC-R Scatter and Patterns in Three Types of Learning Disabled Children.

    Science.gov (United States)

    Tabachnick, Barbara G.; Turbey, Carolyn B.

    Wechsler Intelligence Scale for Children-Revised (WISC-R) subtest scatter and Bannatyne recategorization scores were investigated with three types of learning disabilities in children 6 to 16 years old: visual-motor and visual-perceptual disability (N=66); auditory-perceptual and receptive language deficit (N=18); and memory deficit (N=12). Three…

  4. The effect of electroconvulsive therapy (ECT) on implicit memory: skill learning and perceptual priming in patients with major depression.

    Science.gov (United States)

    Vakil, E; Grunhaus, L; Nagar, I; Ben-Chaim, E; Dolberg, O T; Dannon, P N; Schreiber, S

    2000-01-01

    While explicit memory in amnesics is impaired, their implicit memory remains preserved. Memory impairment is one of the side effects of electroconvulsive therapy (ECT). ECT patients are expected to show impairment on explicit but not implicit tasks. The present study examined 17 normal controls and 17 patients with severe major depressive disorder who underwent right unilateral ECT. Patients were tested in three sessions: 24-48 hours prior to, 24-48 hours following the first ECT, and 24-48 hours following the eighth ECT. The controls were tested in three sessions, at time intervals that paralleled those of the patients. Implicit memory was tested by the perceptual priming task - Partial Picture-Identification (PPI). The skill learning task used entailed solving the Tower of Hanoi puzzle (TOHP). Explicit memory was tested by picture recall from the PPI task, verbal recall of information regarding the TOHP, and by the Visual Paired Association (VPA) test. Results showed that explicit questions about the implicit tasks were impaired following ECT treatment. Patients' learning ability, as measured by the VPA task, was only impaired in the first testing session, prior to ECT treatment, reflecting the effect of depression. In addition, groups only differed in the first session on the learning rate of the skill learning task. Perceptual priming was preserved in the patients' group in all sessions, indicating that it is resilient to the effect of depression and ECT. The results are interpreted in terms of the differential effect of depression and ECT on explicit and implicit memory.

  5. Efficacy of the LiSN & Learn auditory training software: randomized blinded controlled study

    Directory of Open Access Journals (Sweden)

    Sharon Cameron

    2012-09-01

    Full Text Available Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 min per day for twelve weeks. There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (P=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (P=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.

  6. Efficacy of the LiSN & Learn Auditory Training Software: randomized blinded controlled study

    Directory of Open Access Journals (Sweden)

    Sharon Cameron

    2012-01-01

    Full Text Available Background: Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Materials and methods: Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise – Sentences Test (LISN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program – Earobics - for approximately 15 minutes per day for twelve weeks. Results: There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (p=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (p=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. Conclusions: LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.

  7. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    Science.gov (United States)

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning association of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance of the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects significantly made more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of higher false alarm rate in the passive learning mode by using a computational model, composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of making more mistakes and reporting lower confidence in non-congruent pairs in the passive learning mode.
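
    A minimal sketch of one way to realize the model described above (an interpretation, not the authors' implementation): association strength is updated with a delta-rule reinforcement-learning step on each trial and decays back toward an uncertain prior between trials, so a larger decay rate leaves non-congruent pairs closer to the prior and produces more false alarms. The learning rate, decay rates and trial counts are assumed values.

    ```python
    # Associative-strength model with a reinforcement-learning update and a
    # memory-decay term, in the spirit of the model described in the abstract.
    # All numerical parameters are illustrative assumptions.
    import numpy as np

    def run_condition(n_trials=100, alpha=0.2, decay=0.05, seed=1):
        """Final association strengths for one congruent and one non-congruent pair.
        Strengths start at an uncertain prior (0.5); the RL update moves them toward
        the observed outcome, and memory decay drags them back toward the prior."""
        rng = np.random.default_rng(seed)
        prior = 0.5
        w = {"congruent": prior, "non_congruent": prior}
        for _ in range(n_trials):
            pair = "congruent" if rng.random() < 0.5 else "non_congruent"
            outcome = 1.0 if pair == "congruent" else 0.0   # non-congruent pairs never co-occur
            w[pair] += alpha * (outcome - w[pair])          # delta-rule (RL) update
            for k in w:                                     # decay toward the prior between trials
                w[k] += decay * (prior - w[k])
        return w

    for decay in (0.02, 0.15):
        w = run_condition(decay=decay)
        print(f"decay={decay}: congruent={w['congruent']:.2f}, "
              f"non-congruent={w['non_congruent']:.2f}")
    # With faster decay the non-congruent strength stays closer to the prior,
    # so non-congruent pairs are more often judged as associated (false alarms).
    ```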

  8. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    Science.gov (United States)

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  9. The role of timing in the induction of neuromodulation in perceptual learning by transcranial electric stimulation.

    Science.gov (United States)

    Pirulli, Cornelia; Fertonani, Anna; Miniussi, Carlo

    2013-07-01

    Transcranial electric stimulation (tES) protocols are able to induce neuromodulation, offering important insights to focus and constrain theories of the relationship between brain and behavior. Previous studies have shown that different types of tES (i.e., direct current stimulation - tDCS, and random noise stimulation - tRNS) induce different facilitatory behavioral effects. However, to date it is not clear what the optimal timing is for applying tES to induce robust facilitatory effects. The goal of this work was to investigate how different types of tES (tDCS and tRNS) can modulate behavioral performance in the healthy adult brain in relation to their timing of application. We applied tES protocols before (offline) or during (online) the execution of a visual perceptual learning (PL) task. PL is a form of implicit memory that is characterized by an improvement in sensory discrimination after repeated exposure to a particular type of stimulus and is considered a manifestation of neural plasticity. Our aim was to understand if the timing of tES is critical for the induction of differential neuromodulatory effects in the primary visual cortex (V1). We applied high-frequency tRNS, anodal tDCS and sham tDCS on V1 before or during the execution of an orientation discrimination task. The experimental design was between subjects and performance was measured in terms of d' values. The ideal timing of application varied depending on the stimulation type. tRNS facilitated task performance only when it was applied during task execution, whereas anodal tDCS induced a larger facilitation if it was applied before task execution. The main result of this study is the finding that the timing of identical tES protocols yields opposite effects on performance. These results provide important guidelines for designing neuromodulation induction protocols and highlight the different optimal timing of the two excitatory techniques. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Perceptually-Inspired Computing

    Directory of Open Access Journals (Sweden)

    Ming Lin

    2015-08-01

    Full Text Available Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multi-modal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experiences. I will conclude by discussing possible future research directions.

  11. Perceptual dimensions differentiate emotions.

    Science.gov (United States)

    Cavanaugh, Lisa A; MacInnis, Deborah J; Weiss, Allen M

    2015-08-26

    Individuals often describe objects in their world in terms of perceptual dimensions that span a variety of modalities; the visual (e.g., brightness: dark-bright), the auditory (e.g., loudness: quiet-loud), the gustatory (e.g., taste: sour-sweet), the tactile (e.g., hardness: soft vs. hard) and the kinaesthetic (e.g., speed: slow-fast). We ask whether individuals use perceptual dimensions to differentiate emotions from one another. Participants in two studies (one where respondents reported on abstract emotion concepts and a second where they reported on specific emotion episodes) rated the extent to which features anchoring 29 perceptual dimensions (e.g., temperature, texture and taste) are associated with 8 emotions (anger, fear, sadness, guilt, contentment, gratitude, pride and excitement). Results revealed that in both studies perceptual dimensions differentiate positive from negative emotions and high arousal from low arousal emotions. They also differentiate among emotions that are similar in arousal and valence (e.g., high arousal negative emotions such as anger and fear). Specific features that anchor particular perceptual dimensions (e.g., hot vs. cold) are also differentially associated with emotions.

  12. Deficiencies within the education system with regard to perceptual motor learning

    Directory of Open Access Journals (Sweden)

    Myrtle Erasmus

    2011-12-01

    needs within the education environment and that many schools are under-supplied in terms of resources and equipment. It is recommended that these teachers receive inservice training on learners’ perceptual motor development and that the Department of Education should provide schools with resources and equipment to prevent these deficiencies in the education system.

  13. Perceptual learning of motion direction discrimination with suppressed and unsuppressed MT in humans: an fMRI study.

    Directory of Open Access Journals (Sweden)

    Benjamin Thompson

    Full Text Available The middle temporal area of the extrastriate visual cortex (area MT) is integral to motion perception and is thought to play a key role in the perceptual learning of motion tasks. We have previously found, however, that perceptual learning of a motion discrimination task is possible even when the training stimulus contains locally balanced, motion opponent signals that putatively suppress the response of MT. Assuming at least partial suppression of MT, possible explanations for this learning are that (1) training made MT more responsive by reducing motion opponency, (2) MT remained suppressed and alternative visual areas such as V1 enabled learning, and/or (3) suppression of MT increased with training, possibly to reduce noise. Here we used fMRI to test these possibilities. We first confirmed that the motion opponent stimulus did indeed suppress the BOLD response within hMT+ compared to an almost identical stimulus without locally balanced motion signals. We then trained participants on motion opponent or non-opponent stimuli. Training with the motion opponent stimulus reduced the BOLD response within hMT+ and greater reductions in BOLD response were correlated with greater amounts of learning. The opposite relationship between BOLD and behaviour was found at V1 for the group trained on the motion-opponent stimulus and at both V1 and hMT+ for the group trained on the non-opponent motion stimulus. As the average response of many cells within MT to motion opponent stimuli is the same as their response to non-directional flickering noise, the reduced activation of hMT+ after training may reflect noise reduction.

  14. One Way or Another: Evidence for Perceptual Asymmetry in Pre-attentive Learning of Non-native Contrasts

    Directory of Open Access Journals (Sweden)

    Liquan Liu

    2018-03-01

    Full Text Available Research investigating listeners’ neural sensitivity to speech sounds has largely focused on segmental features. We examined Australian English listeners’ perception and learning of a supra-segmental feature, pitch direction in a non-native tonal contrast, using a passive oddball paradigm and electroencephalography. The stimuli were two contours generated from naturally produced high-level and high-falling tones in Mandarin Chinese, differing only in pitch direction (Liu and Kager, 2014). While both contours had similar pitch onsets, the pitch offset of the falling contour was lower than that of the level one. The contrast was presented in two orientations (standard and deviant reversed) and tested in two blocks with the order of block presentation counterbalanced. Mismatch negativity (MMN) responses showed that listeners discriminated the non-native tonal contrast only in the second block, reflecting indications of learning through exposure during the first block. In addition, listeners showed a later MMN peak for their second block of test relative to listeners who did the same block first, suggesting linguistic (as opposed to acoustic) processing or a misapplication of perceptual strategies from the first to the second block. The results also showed a perceptual asymmetry for change in pitch direction: listeners who encountered a falling tone deviant in the first block had larger frontal MMN amplitudes than listeners who encountered a level tone deviant in the first block. The implications of our findings for second language speech and the developmental trajectory for tone perception are discussed.

  15. Audiovisual speech perception development at varying levels of perceptual processing

    OpenAIRE

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the le...

  16. Effects of semantic context and feedback on perceptual learning of speech processed through an acoustic simulation of a cochlear implant.

    Science.gov (United States)

    Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A

    2010-02-01

    The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic representation. Transfer-appropriate processing theory suggests that such feedback was most successful because the original learning conditions were reinstated at testing: Performance was facilitated when both training and testing contained degraded stimuli. In addition, the effect of semantic context on generalization was assessed by training listeners on meaningful or anomalous sentences. Training with anomalous sentences was as effective as that with meaningful sentences, suggesting that listeners were encouraged to use acoustic-phonetic information to identify speech rather than to make predictions from semantic context.
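
    A common way to build the kind of "acoustic simulation of a cochlear implant" referred to in studies like this one is a noise-excited channel vocoder: the signal is split into a few frequency bands, the envelope of each band is extracted, and each envelope modulates band-limited noise. The sketch below illustrates only that generic technique; the channel count, filter orders and cutoff values are arbitrary placeholders, not the processing used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=8000.0, env_cutoff=160.0):
    """Noise-excited channel vocoder (a generic sketch of a cochlear-implant
    simulation; all parameters are illustrative, not those of the study)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced channel edges
    env_sos = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    carrier = np.random.default_rng(0).standard_normal(len(signal))  # noise carrier
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)                 # analysis band
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))    # smoothed envelope
        noise_band = sosfiltfilt(band_sos, carrier)          # band-limited noise
        out += np.clip(env, 0.0, None) * noise_band          # modulate and sum
    return out / (np.max(np.abs(out)) + 1e-12)               # normalise amplitude
```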

  17. Can perceptual learning be used to treat amblyopia beyond the critical period of visual development?

    Science.gov (United States)

    Astle, Andrew T; Webb, Ben S; McGraw, Paul V

    2011-11-01

    Amblyopia presents early in childhood and affects approximately 3% of western populations. The monocular visual acuity loss is conventionally treated during the 'critical periods' of visual development by occluding or penalising the fellow eye to encourage use of the amblyopic eye. Despite the measurable success of this approach in many children, substantial numbers of people still suffer with amblyopia later in life because either they were never diagnosed in childhood, did not respond to the original treatment, the amblyopia was only partially remediated, or their acuity loss returned after cessation of treatment. In this review, we consider whether the visual deficits of this largely overlooked amblyopic group are amenable to conventional and innovative therapeutic interventions later in life, well beyond the age at which treatment is thought to be effective. There is a considerable body of evidence that residual plasticity is present in the adult visual brain and this can be harnessed to improve function in adults with amblyopia. Perceptual training protocols have been developed to optimise visual gains in this clinical population. Results thus far are extremely encouraging; marked visual improvements have been demonstrated, the perceptual benefits transfer to new visual tasks and appear to be relatively enduring. The essential ingredients of perceptual training protocols are being incorporated into video game formats, facilitating home-based interventions. Many studies support perceptual training as a tool for improving vision in amblyopes beyond the critical period. Should this novel form of treatment stand up to the scrutiny of a randomised controlled trial, clinicians may need to re-evaluate their therapeutic approach to adults with amblyopia. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  18. Can perceptual learning be used to treat amblyopia beyond the critical period of visual development?

    Science.gov (United States)

    Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.

    2012-01-01

    Background: Amblyopia presents early in childhood and affects approximately 3% of western populations. The monocular visual acuity loss is conventionally treated during the “critical periods” of visual development by occluding or penalising the fellow eye to encourage use of the amblyopic eye. Despite the measurable success of this approach in many children, substantial numbers of people still suffer with amblyopia later in life because either they were never diagnosed in childhood, did not respond to the original treatment, the amblyopia was only partially remediated, or their acuity loss returned after cessation of treatment. Purpose: In this review, we consider whether the visual deficits of this largely overlooked amblyopic group are amenable to conventional and innovative therapeutic interventions later in life, well beyond the age at which treatment is thought to be effective. Recent findings: There is a considerable body of evidence that residual plasticity is present in the adult visual brain and this can be harnessed to improve function in adults with amblyopia. Perceptual training protocols have been developed to optimise visual gains in this clinical population. Results thus far are extremely encouraging: marked visual improvements have been demonstrated, the perceptual benefits transfer to new visual tasks and appear to be relatively enduring. The essential ingredients of perceptual training protocols are being incorporated into video game formats, facilitating home-based interventions. Summary: Many studies support perceptual training as a tool for improving vision in amblyopes beyond the critical period. Should this novel form of treatment stand up to the scrutiny of a randomised controlled trial, clinicians may need to re-evaluate their therapeutic approach to adults with amblyopia. PMID:21981034

  19. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    Science.gov (United States)

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
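
    Frequency tagging of this kind is typically quantified by comparing spectral amplitude at the tone presentation rate with amplitude at the triplet rate (one third of the tone rate), each expressed relative to neighbouring frequency bins. The sketch below shows that generic analysis on a single sensor time series; the sampling rate, stimulation rate and neighbourhood size are placeholders rather than the parameters of this study.

```python
import numpy as np

def tagged_snr(x, fs, tone_rate=3.0, n_neighbors=10):
    """Spectral amplitude at the tone and triplet rates, relative to neighbouring
    bins (generic frequency-tagging sketch; parameters are placeholders)."""
    amp = np.abs(np.fft.rfft(x - np.mean(x)))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    def snr_at(f_target):
        idx = int(np.argmin(np.abs(freqs - f_target)))
        lo, hi = max(idx - n_neighbors, 1), min(idx + n_neighbors + 1, len(amp))
        neighbours = np.concatenate([amp[lo:idx - 1], amp[idx + 2:hi]])
        return amp[idx] / neighbours.mean()

    # The tone rate indexes individual tone succession; tone_rate / 3 indexes triplets.
    return {"tone_snr": snr_at(tone_rate), "triplet_snr": snr_at(tone_rate / 3.0)}

# Example with simulated data: 10 minutes of one sensor sampled at 200 Hz.
snrs = tagged_snr(np.random.default_rng(0).standard_normal(10 * 60 * 200), fs=200)
```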

  20. Auditory and Visual Working Memory Functioning in College Students with Attention-Deficit/Hyperactivity Disorder and/or Learning Disabilities.

    Science.gov (United States)

    Liebel, Spencer W; Nelson, Jason M

    2017-12-01

    We investigated the auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprise four groups of the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders comprise a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity or clinical control groups. © The Author 2017. Published by Oxford University Press.

  1. Alternative Forms of the Rey Auditory Verbal Learning Test: A Review

    Directory of Open Access Journals (Sweden)

    Keith A. Hawkins

    2004-01-01

    Full Text Available Practice effects in memory testing complicate the interpretation of score changes over repeated testings, particularly in clinical applications. Consequently, several alternative forms of the Auditory Verbal Learning Test (AVLT) have been developed. Studies of these typically indicate that the forms examined are equivalent. However, the implication that the forms in the literature are interchangeable must be tempered by several caveats. Few studies of equivalence have been undertaken; most are restricted to the comparison of single pairs of forms, and the pairings vary across studies. These limitations are exacerbated by the minimal overlapping across studies in variables reported, or in the analyses of equivalence undertaken. The data generated by these studies are nonetheless valuable, as significant practice effects result from serial use of the same form. The available data on alternative AVLT forms are summarized, and recommendations regarding form development and the determination of form equivalence are offered.

  2. Can you hear me now? Musical training shapes functional brain networks for selective auditory attention and hearing speech in noise

    Directory of Open Access Journals (Sweden)

    Dana L Strait

    2011-06-01

    Full Text Available Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.

  3. Using neuroplasticity-based auditory training to improve verbal memory in schizophrenia.

    Science.gov (United States)

    Fisher, Melissa; Holland, Christine; Merzenich, Michael M; Vinogradov, Sophia

    2009-07-01

    Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach.

  4. Modeling speech imitation and ecological learning of auditory-motor maps

    Directory of Open Access Journals (Sweden)

    Claudia Canevari

    2013-06-01

    Full Text Available Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, the recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The main limitations of current ASR are mainly evident in the realistic use of such systems. These limitations can be partly reduced by using normalization strategies that minimize inter-speaker variability by either explicitly removing speakers’ peculiarities or adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compare it with strategies that ignore the motor domain. Specifically, we first trained a regressor through state-of-the-art machine learning techniques to build an auditory-motor mapping, in a sense mimicking a human learner that tries to reproduce utterances produced by other speakers. This auditory-motor mapping maps the speech acoustics of a speaker into the motor plans of a reference speaker. Since, during recognition, only speech acoustics are available, the mapping is necessary to recover motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers that was used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy almost always outperforms all other strategies where only acoustics is taken into account.
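
    The normalization idea described here — learn a regressor from any speaker's acoustics to a reference speaker's motor plans, then feed the recovered motor features to a phone classifier — can be prototyped with off-the-shelf tools. The sketch below uses scikit-learn with randomly generated stand-ins for the acoustic features, reference motor trajectories and phone labels; it illustrates the shape of such a pipeline only, not the authors' models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins: acoustic frames (X_*), reference-speaker motor frames
# aligned to the training acoustics (M_ref), and phone labels (y_*).
rng = np.random.default_rng(0)
X_train, M_ref = rng.normal(size=(2000, 39)), rng.normal(size=(2000, 12))
y_train = rng.integers(0, 10, size=2000)
X_test, y_test = rng.normal(size=(500, 39)), rng.integers(0, 10, size=500)

# 1) Auditory-motor map: acoustics of any speaker -> motor plans of the reference speaker.
auditory_motor_map = make_pipeline(
    StandardScaler(), MLPRegressor(hidden_layer_sizes=(64,), max_iter=500))
auditory_motor_map.fit(X_train, M_ref)

# 2) Phone classifier trained on acoustics augmented with recovered motor features.
def add_motor(X):
    return np.hstack([X, auditory_motor_map.predict(X)])

classifier = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
classifier.fit(add_motor(X_train), y_train)

# 3) At test time only acoustics are available; motor features are recovered via the map.
accuracy = classifier.score(add_motor(X_test), y_test)
```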

  5. Jumpstarting auditory learning in children with cochlear implants through music experiences.

    Science.gov (United States)

    Barton, Christine; Robbins, Amy McConkey

    2015-09-01

    Musical experiences are a valuable part of the lives of children with cochlear implants (CIs). In addition to the pleasure, relationships and emotional outlet provided by music, it serves to enhance or 'jumpstart' other auditory and cognitive skills that are critical for development and learning throughout the lifespan. Musicians have been shown to be 'better listeners' than non-musicians with regard to how they perceive and process sound. A heuristic model of music therapy is reviewed, including six modulating factors that may account for the auditory advantages demonstrated by those who participate in music therapy. The integral approach to music therapy is described along with the hybrid approach to pediatric language intervention. These approaches share the characteristics of placing high value on ecologically valid therapy experiences, i.e., engaging in 'real' music and 'real' communication. Music and language intervention techniques used by the authors are presented. It has been documented that children with CIs consistently have lower music perception scores than do their peers with normal hearing (NH). On the one hand, this finding matters a great deal because it provides parameters for setting reasonable expectations and highlights the work still required to improve signal processing with the devices so that they more accurately transmit music to CI listeners. On the other hand, the finding might not matter much if we assume that music, even in its less-than-optimal state, functions for CI children, as for NH children, as a developmental jumpstarter, a language-learning tool, a cognitive enricher, a motivator, and an attention enhancer.

  6. Birth of projection neurons in adult avian brain may be related to perceptual or motor learning

    International Nuclear Information System (INIS)

    Alvarez-Buylla, A.; Kirn, J.R.; Nottebohm, F.

    1990-01-01

    Projection neurons that form part of the motor pathway for song control continue to be produced and to replace older projection neurons in adult canaries and zebra finches. This is shown by combining [3H]thymidine, a cell birth marker, and fluorogold, a retrogradely transported tracer of neuronal connectivity. Species and seasonal comparisons suggest that this process is related to the acquisition of perceptual or motor memories. The ability of an adult brain to produce and replace projection neurons should influence our thinking on brain repair.

  7. Accommodating Elementary Students' Learning Styles.

    Science.gov (United States)

    Wallace, James

    1995-01-01

    Examines the perceptual learning style preferences of sixth- and seventh-grade students in the Philippines. Finds that the visual modality was the most preferred and the auditory modality was the least preferred. Offers suggestions for accommodating visual, tactile, and kinesthetic preferences. (RS)

  8. Evaluation of Auditory Verbal Memory and Learning Performance of 18-30 Year Old Persian-Speaking Healthy Women

    Directory of Open Access Journals (Sweden)

    Reyhane Toufan

    2012-10-01

    Full Text Available Background and Aim: Auditory memory plays an important role in developing language skills and learning. The aim of the present study was to assess auditory verbal memory and learning performance of 18-30 year old healthy adults using the Persian version of the Rey Auditory-Verbal Learning Test (RAVLT). Methods: This descriptive, cross-sectional study was conducted on seventy 18-30 year old healthy females with a mean age of 23.2 years and a standard deviation (SD) of 2.4 years. Different aspects of memory, like immediate recall, delayed recall, recognition, forgetting rate, interference and learning, were assessed using the Persian version of RAVLT. Results: Mean score increased from 8.94 (SD=1.91) on the first trial to 13.70 (SD=1.18) on the fifth trial. Total learning mean score was 12.19 (SD=1.08), and mean learning rate was 4.76. Mean scores of the participants on the delayed recall and recognition trials were 13.47 (SD=1.2) and 14.72 (SD=0.53), respectively. The proactive and retroactive interference scores were 0.86 and 0.96, respectively. The forgetting rate score was 1.01 and the retrieval score was 0.90. Conclusion: The auditory-verbal memory and learning performance of healthy Persian-speaking females was similar to the performance of the same population in other countries. Therefore, the Persian version of RAVLT is valid for assessment of memory function in the Persian-speaking female population.
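
    The derived scores reported in this abstract (total learning, learning rate, interference, forgetting and retrieval) are simple combinations of the per-trial word counts. The function below uses formulas that are common in the RAVLT literature and that are consistent with the values reported above (e.g., learning rate as trial 5 minus trial 1 gives 13.70 - 8.94 = 4.76), but the study's exact definitions may differ; the values used for the unreported trials in the example call are hypothetical placeholders.

```python
def ravlt_indices(trials_1_to_5, list_b, trial_6, trial_7, recognition):
    """Common RAVLT-derived indices (conventional formulas; the study's exact
    definitions may differ). Inputs are word counts out of 15."""
    t1, t5 = trials_1_to_5[0], trials_1_to_5[-1]
    return {
        "total_learning": sum(trials_1_to_5) / len(trials_1_to_5),  # mean of trials 1-5
        "learning_rate": t5 - t1,                   # gain across the learning trials
        "proactive_interference": list_b / t1,      # list A intruding on list B
        "retroactive_interference": trial_6 / t5,   # list B intruding on recall of list A
        "forgetting_rate": trial_7 / trial_6,       # retention across the delay
        "retrieval": trial_7 / recognition,         # free recall relative to recognition
    }

# Illustrative call; trials 2-4 and list B are hypothetical placeholders.
print(ravlt_indices([8.94, 11.0, 12.0, 13.0, 13.70],
                    list_b=7.7, trial_6=13.0, trial_7=13.47, recognition=14.72))
```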

  9. The developmental trajectory of children's auditory and visual statistical learning abilities: modality-based differences in the effect of age.

    Science.gov (United States)

    Raviv, Limor; Arnon, Inbal

    2017-09-12

    Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, only little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5-12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw. © 2017 John Wiley & Sons Ltd.
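
    Statistical-learning streams of the kind compared in studies like this are usually built by concatenating a small set of fixed tone triplets in pseudo-random order (so transitional probabilities are high within triplets and low between them), while the control stream orders the same tones at random. The sketch below shows that generic construction with an arbitrary tone inventory and stream length; it is illustrative only and does not reproduce the stimuli of this study.

```python
import random

def make_streams(n_triplets=4, stream_len=300, seed=1):
    """Build a 'statistical' stream (fixed triplets, pseudo-random order, no
    immediate triplet repeats) and a random control stream over the same tones.
    The inventory and lengths are illustrative placeholders."""
    rng = random.Random(seed)
    tones = [f"tone_{i}" for i in range(3 * n_triplets)]
    triplets = [tones[i:i + 3] for i in range(0, len(tones), 3)]

    statistical, prev = [], None
    while len(statistical) < stream_len:
        triplet = rng.choice([t for t in triplets if t != prev])
        statistical.extend(triplet)
        prev = triplet

    control, prev_tone = [], None
    while len(control) < stream_len:
        tone = rng.choice([t for t in tones if t != prev_tone])
        control.append(tone)
        prev_tone = tone

    return statistical[:stream_len], control[:stream_len]
```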

  10. The Effect of Learning Modality and Auditory Feedback on Word Memory: Cochlear-Implanted versus Normal-Hearing Adults.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Icht, Michal; Mama, Yaniv

    2017-03-01

    In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific, and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr, and who showed ≥50% in monosyllabic consonant-vowel-consonant open-set test with their implants were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable, and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired sample t tests were used to evaluate the PE size (differences between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The

  11. Perceptual Learning of Intonation Contour Categories in Adults and 9- to 11-Year-Old Children: Adults Are More Narrow-Minded

    Science.gov (United States)

    Kapatsinski, Vsevolod; Olejarczuk, Paul; Redford, Melissa A.

    2017-01-01

    We report on rapid perceptual learning of intonation contour categories in adults and 9- to 11-year-old children. Intonation contours are temporally extended patterns, whose perception requires temporal integration and therefore poses significant working memory challenges. Both children and adults form relatively abstract representations of…

  12. The Effect of Feedback Delay on Perceptual Category Learning and Item Memory: Further Limits of Multiple Systems.

    Science.gov (United States)

    Stephens, Rachel G; Kalish, Michael L

    2018-02-01

    Delayed feedback during categorization training has been hypothesized to differentially affect 2 systems that underlie learning for rule-based (RB) or information-integration (II) structures. We tested an alternative possibility: that II learning requires more precise item representations than RB learning, and so is harmed more by a delay interval filled with a confusable mask. Experiments 1 and 2 examined the effect of feedback delay on memory for RB and II exemplars, both without and with concurrent categorization training. Without the training, II items were indeed more difficult to recognize than RB items, but there was no detectable effect of delay on item memory. In contrast, with concurrent categorization training, there were effects of both category structure and delayed feedback on item memory, which were related to corresponding changes in category learning. However, we did not observe the critical selective impact of delay on II classification performance that has been shown previously. Our own results were also confirmed in a follow-up study (Experiment 3) involving only categorization training. The selective influence of feedback delay on II learning appears to be contingent on the relative size of subgroups of high-performing participants, and in fact does not support that RB and II category learning are qualitatively different. We conclude that a key part of successfully solving perceptual categorization problems is developing more precise item representations, which can be impaired by delayed feedback during training. More important, the evidence for multiple systems of category learning is even weaker than previously proposed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian Kennel

    2015-01-01

    Full Text Available Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  14. Auditory reafferences: the influence of real-time feedback on movement control.

    Science.gov (United States)

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  15. Changes in Olfactory Sensory Neuron Physiology and Olfactory Perceptual Learning After Odorant Exposure in Adult Mice.

    Science.gov (United States)

    Kass, Marley D; Guang, Stephanie A; Moberly, Andrew H; McGann, John P

    2016-02-01

    The adult olfactory system undergoes experience-dependent plasticity to adapt to the olfactory environment. This plasticity may be accompanied by perceptual changes, including improved olfactory discrimination. Here, we assessed experience-dependent changes in the perception of a homologous aldehyde pair by testing mice in a cross-habituation/dishabituation behavioral paradigm before and after a week-long ester-odorant exposure protocol. In a parallel experiment, we used optical neurophysiology to observe neurotransmitter release from olfactory sensory neuron (OSN) terminals in vivo, and thus compared primary sensory representations of the aldehydes before and after the week-long ester-odorant exposure in individual animals. Mice could not discriminate between the aldehydes during pre-exposure testing, but ester-exposed subjects spontaneously discriminated between the homologous pair after exposure, whereas home cage control mice cross-habituated. Ester exposure did not alter the spatial pattern, peak magnitude, or odorant-selectivity of aldehyde-evoked OSN input to olfactory bulb glomeruli, but did alter the temporal dynamics of that input to make the time course of OSN input more dissimilar between odorants. Together, these findings demonstrate that odor exposure can induce both physiological and perceptual changes in odor processing, and suggest that changes in the temporal patterns of OSN input to olfactory bulb glomeruli could induce differences in odor quality. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  17. An analysis of mathematical connection ability based on student learning style on visualization auditory kinesthetic (VAK) learning model with self-assessment

    Science.gov (United States)

    Apipah, S.; Kartono; Isnarto

    2018-03-01

    This research aims to analyze the quality of VAK learning with self-assessment with respect to students’ mathematical connection ability, and to analyze students’ mathematical connection ability according to learning style within the VAK learning model with self-assessment. The study applies a mixed-methods approach with a concurrent embedded design. The subjects are grade VIII students from State Junior High School 9 Semarang with visual, auditory, and kinesthetic learning styles. Learning-style data were collected with questionnaires, mathematical connection ability data with tests, and self-assessment data with assessment sheets. The quality of learning was evaluated qualitatively across the planning, implementation, and assessment stages. The mathematical connection ability test results were analyzed quantitatively with tests of means, of mastery (completeness), of differences in means, and of differences in proportions. The results show that the VAK learning model with self-assessment produces good-quality learning from both the qualitative and the quantitative perspectives. Students with a visual learning style showed the highest mathematical connection ability, students with a kinesthetic learning style showed average mathematical connection ability, and students with an auditory learning style showed the lowest mathematical connection ability.

  18. More Is Generally Better: Higher Working Memory Capacity Does Not Impair Perceptual Category Learning

    Science.gov (United States)

    Kalish, Michael L.; Newell, Ben R.; Dunn, John C.

    2017-01-01

    It is sometimes supposed that category learning involves competing explicit and procedural systems, with only the former reliant on working memory capacity (WMC). In 2 experiments participants were trained for 3 blocks on both filtering (often said to be learned explicitly) and condensation (often said to be learned procedurally) category…

  19. Top-down inputs enhance orientation selectivity in neurons of the primary visual cortex during perceptual learning.

    Directory of Open Access Journals (Sweden)

    Samat Moldakarimov

    2014-08-01

    Full Text Available Perceptual learning has been used to probe the mechanisms of cortical plasticity in the adult brain. Feedback projections are ubiquitous in the cortex, but little is known about their role in cortical plasticity. Here we explore the hypothesis that learning visual orientation discrimination involves learning-dependent plasticity of top-down feedback inputs from higher cortical areas, serving a different function from plasticity due to changes in recurrent connections within a cortical area. In a Hodgkin-Huxley-based spiking neural network model of visual cortex, we show that modulation of feedback inputs to V1 from higher cortical areas results in shunting inhibition in V1 neurons, which changes the response properties of V1 neurons. The orientation selectivity of V1 neurons is enhanced without changing orientation preference, preserving the topographic organizations in V1. These results provide new insights to the mechanisms of plasticity in the adult brain, reconciling apparently inconsistent experiments and providing a new hypothesis for a functional role of the feedback connections.

  20. Brainstem auditory evoked potentials with the use of acoustic clicks and complex verbal sounds in young adults with learning disabilities.

    Science.gov (United States)

    Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos

    2013-01-01

    'other learning disabilities' and who were characterized as with 'light' dyslexia according to dyslexia tests, no significant delays were found in peak latencies A and C and interpeak latencies A-C in comparison with the control group. Acoustic representation of a speech sound and, in particular, the disyllabic word 'baba' was found to be abnormal, as low as the auditory brainstem. Because ABRs mature in early life, this can help to identify subjects with acoustically based learning problems and apply early intervention, rehabilitation, and treatment. Further studies and more experience with more patients and pathological conditions such as plasticity of the auditory system, cochlear implants, hearing aids, presbycusis, or acoustic neuropathy are necessary until this type of testing is ready for clinical application. © 2013 Elsevier Inc. All rights reserved.

  1. Age-related changes in consolidation of perceptual and muscle-based learning of motor skills

    Directory of Open Access Journals (Sweden)

    Rebecca M. C. Spencer

    2013-11-01

    Full Text Available Improvements in motor sequence learning come about via goal-based learning of the sequence of visual stimuli and muscle-based learning of the sequence of movement responses. In young adults, consolidation of goal-based learning is observed after intervals of sleep but not following wake, whereas consolidation of muscle-based learning is greater following intervals with wake compared to sleep. While the benefit of sleep on motor sequence learning has been shown to decline with age, how sleep contributes to consolidation of goal-based versus muscle-based learning in older adults has not been disentangled. We trained young (n=62) and older (n=50) adults on a motor sequence learning task and re-tested learning following 12 hr intervals containing overnight sleep or daytime wake. To probe consolidation of goal-based learning of the sequence, half of the participants were re-tested in a configuration in which the stimulus sequence was the same but, due to a shift in stimulus-response mapping, the movement response sequence differed. To probe consolidation of muscle-based learning, the remaining participants were tested in a configuration in which the stimulus sequence was novel, but now the sequence of movements used for responding was unchanged. In young adults, there was a significant condition (goal-based v. muscle-based learning) by interval (sleep v. wake) interaction, F(1,58)=6.58, p=.013: Goal-based learning tended to be greater following sleep compared to wake, t(29)=1.47, p=.072. Conversely, muscle-based learning was greater following wake than sleep, t(29)=2.11, p=.021. Unlike young adults, this interaction was not significant in older adults, F(1,46)=.04, p=.84, nor was there a main effect of interval, F(1,46)=1.14, p=.29. Thus, older adults do not preferentially consolidate sequence learning over wake or sleep.
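
    The condition-by-interval interaction reported here comes from a design in which both factors vary between participants, so the analysis pattern can be sketched as a two-way between-subjects ANOVA followed by simple-effects t-tests. The code below uses simulated scores purely to show the shape of such an analysis; it is not the authors' data or analysis script, and the group sizes are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

# Hypothetical long-format data: one simulated consolidation score per participant.
rng = np.random.default_rng(0)
n_per_cell = 15
df = pd.DataFrame({
    "condition": np.repeat(["goal", "goal", "muscle", "muscle"], n_per_cell),
    "interval":  np.tile(np.repeat(["sleep", "wake"], n_per_cell), 2),
    "score":     rng.normal(0.0, 1.0, 4 * n_per_cell),
})

# Two-way between-subjects ANOVA with the condition x interval interaction.
model = ols("score ~ C(condition) * C(interval)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Follow-up simple effect within one condition (e.g., goal-based: sleep vs. wake).
goal = df[df.condition == "goal"]
print(stats.ttest_ind(goal.loc[goal.interval == "sleep", "score"],
                      goal.loc[goal.interval == "wake", "score"]))
```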

  2. Audiomotor Perceptual Training Enhances Speech Intelligibility in Background Noise.

    Science.gov (United States)

    Whitton, Jonathon P; Hancock, Kenneth E; Shannon, Jeffrey M; Polley, Daniel B

    2017-11-06

    Sensory and motor skills can be improved with training, but learning is often restricted to practice stimuli. As an exception, training on closed-loop (CL) sensorimotor interfaces, such as action video games and musical instruments, can impart a broad spectrum of perceptual benefits. Here we ask whether computerized CL auditory training can enhance speech understanding in levels of background noise that approximate a crowded restaurant. Elderly hearing-impaired subjects trained for 8 weeks on a CL game that, like a musical instrument, challenged them to monitor subtle deviations between predicted and actual auditory feedback as they moved their fingertip through a virtual soundscape. We performed our study as a randomized, double-blind, placebo-controlled trial by training other subjects in an auditory working-memory (WM) task. Subjects in both groups improved at their respective auditory tasks and reported comparable expectations for improved speech processing, thereby controlling for placebo effects. Whereas speech intelligibility was unchanged after WM training, subjects in the CL training group could correctly identify 25% more words in spoken sentences or digit sequences presented in high levels of background noise. Numerically, CL audiomotor training provided more than three times the benefit of our subjects' hearing aids for speech processing in noisy listening conditions. Gains in speech intelligibility could be predicted from gameplay accuracy and baseline inhibitory control. However, benefits did not persist in the absence of continuing practice. These studies employ stringent clinical standards to demonstrate that perceptual learning on a computerized audio game can transfer to "real-world" communication challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Auditory learning through active engagement with sound: Biological impact of community music lessons in at-risk children

    Directory of Open Access Journals (Sweden)

    Nina Kraus

    2014-11-01

    Full Text Available The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements in the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1,000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for one year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to an instrumental training class. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. These findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity during training and may inform the development of strategies for auditory

  4. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children.

    Science.gov (United States)

    Kraus, Nina; Slater, Jessica; Thompson, Elaine C; Hornickel, Jane; Strait, Dana L; Nicol, Trent; White-Schwoch, Travis

    2014-01-01

    The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the

  5. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2015-04-15

    Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Rey's Auditory Verbal Learning Test scores can be predicted from whole brain MRI in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Elaheh Moradi

    2017-01-01

    Full Text Available Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for the cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores reflects well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD in 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
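
    The modelling approach described — elastic-net penalised linear regression from voxelwise gray matter density to RAVLT scores, evaluated by the correlation between cross-validated estimates and observed scores — can be sketched with scikit-learn. The feature matrix and target below are random stand-ins, not the study's data, and the hyper-parameter grid is arbitrary.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins: X = subjects x gray-matter-density features, y = a RAVLT score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = X[:, :10].sum(axis=1) + rng.normal(scale=2.0, size=200)

# Elastic-net penalised linear regression; alpha and l1_ratio are tuned internally.
model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=5000))

# Cross-validated estimates, then the correlation between estimated and observed
# scores (the R statistic quoted in the abstract).
y_hat = cross_val_predict(model, X, y, cv=10)
r, p = pearsonr(y, y_hat)
print(f"cross-validated R = {r:.2f}")
```

    Because the elastic-net penalty drives many coefficients to zero, inspecting the surviving non-zero coefficients of a model fit on the full data is one way to obtain the kind of "top predictor" regions mentioned in the abstract.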

  8. When more is less: Feedback effects in perceptual category learning

    Science.gov (United States)

    Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent

    2008-01-01

    Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback subjects are informed of the correctness of their response and are also informed of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but should also hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed. PMID:18455155

  9. Effects of musicality and motivational orientation on auditory category learning: a test of a regulatory-fit hypothesis.

    Science.gov (United States)

    McAuley, J Devin; Henry, Molly J; Wedd, Alan; Pleskac, Timothy J; Cesario, Joseph

    2012-02-01

    Two experiments investigated the effects of musicality and motivational orientation on auditory category learning. In both experiments, participants learned to classify tone stimuli that varied in frequency and duration according to an initially unknown disjunctive rule; feedback involved gaining points for correct responses (a gains reward structure) or losing points for incorrect responses (a losses reward structure). For Experiment 1, participants were told at the start that musicians typically outperform nonmusicians on the task, and then they were asked to identify themselves as either a "musician" or a "nonmusician." For Experiment 2, participants were given either a promotion focus prime (a performance-based opportunity to gain entry into a raffle) or a prevention focus prime (a performance-based criterion that needed to be maintained to avoid losing an entry into a raffle) at the start of the experiment. Consistent with a regulatory-fit hypothesis, self-identified musicians and promotion-primed participants given a gains reward structure made more correct tone classifications and were more likely to discover the optimal disjunctive rule than were musicians and promotion-primed participants experiencing losses. Reward structure (gains vs. losses) had inconsistent effects on the performance of nonmusicians, and a weaker regulatory-fit effect was found for the prevention focus prime. Overall, the findings from this study demonstrate a regulatory-fit effect in the domain of auditory category learning and show that motivational orientation may contribute to musician performance advantages in auditory perception.

  10. Rehearsal significantly improves immediate and delayed recall on the Rey Auditory Verbal Learning Test.

    Science.gov (United States)

    Hessen, Erik

    2011-10-01

    A repeated observation during memory assessment with the Rey Auditory Verbal Learning Test (RAVLT) is that patients who spontaneously employ a memory rehearsal strategy by repeating the word list more than once achieve better scores than patients who only repeat the word list once. This observation raised concern about whether the standard test procedure of the RAVLT and similar tests elicits the best possible recall scores. The purpose of the present study was to test the hypothesis that a rehearsal recall strategy of repeating the word list more than once would result in improved recall scores on the RAVLT. We report differences in outcome between standard administration and experimental administration on Immediate and Delayed Recall measures from the RAVLT in 50 patients. The experimental administration resulted in significantly improved scores for all the variables employed. Additionally, patients who failed effort screening showed significantly poorer improvement on Delayed Recall compared with those who passed the effort screening. The clear general improvement in both raw scores and T-scores demonstrates that recall performance can be significantly influenced by the strategy of the patient or by small variations in the instructions given by the examiner.

  11. Effects of lips and hands on auditory learning of second-language speech sounds.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D

    2010-04-01

    Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., short /kato/ vs. long /katoː/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

  12. Clinical efficiency of the Auditory Verbal Learning Test for patients with internal carotid artery stenosis

    International Nuclear Information System (INIS)

    Seki, Yasuko; Maeshima, Shinichiro; Osawa, Aiko; Imura, Junko; Kohyama, Shinya; Yamane, Fumitaka; Ishihara, Shoichiro; Tanahashi, Norio

    2010-01-01

    Most patients who have an internal carotid artery (ICA) stenosis with a cerebral lesion have some cognitive dysfunction. To clarify the clinical efficiency of the Auditory Verbal Learning Test (AVLT) and to assess the relationship between the AVLT and cerebral damage, we examined the AVLT in patients with ICA stenosis. Forty-four patients (35 males and 9 females) with ICA stenosis, aged 56 to 83 (69.6±6.5) years, were evaluated. Their education ranged from 9 to 16 (12.3±2.8) years, and their activities of daily living (ADL) were independent. We assessed cognitive function with neuropsychological tests including the AVLT, Mini-Mental State Examination (MMSE), Raven's Coloured Progressive Matrices (RCPM), Frontal Assessment Battery (FAB), etc. We assessed cerebral damage (periventricular high intensity, PVH; and white matter hyperintensity, WMH) with MRI. We then investigated the relationship between the AVLT and the other neuropsychological tests, and between the AVLT and the carotid/cerebral lesions. There was no association between the lesion side of the ICA stenosis and AVLT scores. In patients with ICA stenosis and cerebral damage (PVH and/or WMH), there was a significant relationship between the severity of cerebral damage and AVLT scores. The AVLT also showed significant relationships with the other neuropsychological tests. The AVLT may be a good cognitive assessment for patients who have cerebral damage due to ICA stenosis. (author)

  13. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    National Research Council Canada - National Science Library

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    … This paper describes an incremental learning technique for open-domain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches…

  14. Movement Sonification: Audiovisual benefits on motor learning

    Directory of Open Access Journals (Sweden)

    Weber Andreas

    2011-12-01

    Processes of motor control and learning in sports, as well as in motor rehabilitation, are based on perceptual functions and emergent motor representations. Here a new method of movement sonification is described, designed to tune the auditory system more comprehensively into motor perception and thereby enhance motor learning. Usually silent features of the cyclic movement pattern "indoor rowing" are sonified in real time to make them additionally available to the auditory system when executing the movement. Via real-time sonification, movement perception can be enhanced in terms of temporal precision and multi-channel integration. Beyond the contribution of a single perceptual channel to motor perception and motor representation, mechanisms of multisensory integration can also be addressed if movement sonification is configured adequately: multimodal motor representations, consisting of at least visual, auditory and proprioceptive components, can be shaped subtly, resulting in more precise motor control and enhanced motor learning.

  15. Perceptual categories enable pattern generalization in songbirds.

    Science.gov (United States)

    Comins, Jordan A; Gentner, Timothy Q

    2013-08-01

    Since Chomsky's pioneering work on syntactic structures, comparative psychologists interested in the study of language evolution have targeted pattern complexity, using formal mathematical grammars, as the key to organizing language-relevant cognitive processes across species. This focus on formal syntactic complexity, however, often disregards the close interaction in real-world signals between the structure of a pattern and its constituent elements. Whether such features of natural auditory signals shape pattern generalization is unknown. In the present paper, we train birds to recognize differently patterned strings of natural signals (song motifs). Instead of focusing on the complexity of the overtly reinforced patterns, we ask how the perceptual groupings of pattern elements influence the generalization of pattern knowledge. We find that learning and perception of training patterns are agnostic to the perceptual features of the underlying elements. Surprisingly, however, these same features constrain the generalization of pattern knowledge, and thus its broader use. Our results demonstrate that the restricted focus of comparative language research on formal models of syntactic complexity is, at best, insufficient to understand pattern use. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Action Speaks Louder than Words: Young Children Differentially Weight Perceptual, Social, and Linguistic Cues to Learn Verbs

    Science.gov (United States)

    Brandone, Amanda C.; Pence, Khara L.; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy

    2007-01-01

    This paper explores how children use two possible solutions to the verb-mapping problem: attention to perceptually salient actions and attention to social and linguistic information (speaker cues). Twenty-two-month-olds attached a verb to one of two actions when perceptual cues (presence/absence of a result) coincided with speaker cues but not…

  17. Audiovisual speech perception development at varying levels of perceptual processing.

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  18. Learning Style Preferences of Asian American (Chinese, Filipino, Korean, and Vietnamese) Students in Secondary Schools.

    Science.gov (United States)

    Park, Clara C.

    1997-01-01

    Investigates the perceptual learning style preferences (auditory, visual, kinesthetic, and tactile) and preferences for group and individual learning of Chinese, Filipino, Korean, and Vietnamese secondary education students. Comparison analysis reveals diverse learning style preferences between Anglo and Asian American students and also between…

  19. Anodal tDCS to V1 blocks visual perceptual learning consolidation.

    Science.gov (United States)

    Peters, Megan A K; Thompson, Benjamin; Merabet, Lotfi B; Wu, Allan D; Shams, Ladan

    2013-06-01

    This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Explicit Pre-Training Instruction Does Not Improve Implicit Perceptual-Motor Sequence Learning

    Science.gov (United States)

    Sanchez, Daniel J.; Reber, Paul J.

    2013-01-01

    Memory systems theory argues for separate neural systems supporting implicit and explicit memory in the human brain. Neuropsychological studies support this dissociation, but empirical studies of cognitively healthy participants generally observe that both kinds of memory are acquired to at least some extent, even in implicit learning tasks. A key…

  1. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    Science.gov (United States)

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  2. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    Science.gov (United States)

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  3. Explicit pre-training instruction does not improve implicit perceptual-motor sequence learning

    OpenAIRE

    Sanchez, Daniel J.; Reber, Paul J.

    2012-01-01

    Memory systems theory argues for separate neural systems supporting implicit and explicit memory in the human brain. Neuropsychological studies support this dissociation, but empirical studies of cognitively healthy participants generally observe that both kinds of memory are acquired to at least some extent, even in implicit learning tasks. A key question is whether this observation reflects parallel intact memory systems or an integrated representation of memory in healthy participants. …

  4. Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load

    Science.gov (United States)

    Santangelo, Valerio; Spence, Charles

    2007-01-01

    We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…

  5. Cerebellar tDCS dissociates the timing of perceptual decisions from perceptual change in speech

    NARCIS (Netherlands)

    Lametti, D.R.; Oostwoud Wijdenes, L.; Bonaiuto, J.; Bestmann, S.; Rothwell, J.C.

    2016-01-01

    Neuroimaging studies suggest that the cerebellum might play a role in both speech perception and speech perceptual learning. However, it remains unclear what this role is: does the cerebellum directly contribute to the perceptual decision? Or does it contribute to the timing of perceptual decisions?

  6. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  7. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  8. Learning Style Preferences of Iranian EFL High School Students

    OpenAIRE

    Reza Vaseghi; Hamed Barjesteh; Sedigheh Shakib

    2013-01-01

    The current study examined the learning style preferences of 75 Iranian students at Marefat high school in Kuala Lumpur, of whom 41 are female and 34 are male. As very few studies have investigated the learning style preferences of Iranian high school students, this study attempts to fill this gap. To this end, in order to identify the students' preferred learning styles (Visual, Auditory, Kinesthetic, Tactile, Group, and Individual), Reid's Perceptual Learning Style Preference…

  9. Discrimination of schizophrenia auditory hallucinators by machine learning of resting-state functional MRI.

    Science.gov (United States)

    Chyzhyk, Darya; Graña, Manuel; Öngür, Döst; Shinn, Ann K

    2015-05-01

    Auditory hallucinations (AH) are a symptom that is most often associated with schizophrenia, but patients with other neuropsychiatric conditions, and even a small percentage of healthy individuals, may also experience AH. Elucidating the neural mechanisms underlying AH in schizophrenia may offer insight into the pathophysiology associated with AH more broadly across multiple neuropsychiatric disease conditions. In this paper, we address the problem of classifying schizophrenia patients with and without a history of AH, and healthy control (HC) subjects. To this end, we performed feature extraction from resting state functional magnetic resonance imaging (rsfMRI) data and applied machine learning classifiers, testing two kinds of neuroimaging features: (a) functional connectivity (FC) measures computed by lattice auto-associative memories (LAAM), and (b) local activity (LA) measures, including regional homogeneity (ReHo) and fractional amplitude of low frequency fluctuations (fALFF). We show that it is possible to perform classification within each pair of subject groups with high accuracy. Discrimination between patients with and without lifetime AH was highest, while discrimination between schizophrenia patients and HC participants was worst, suggesting that classification according to the symptom dimension of AH may be more valid than discrimination on the basis of traditional diagnostic categories. FC measures seeded in right Heschl's gyrus (RHG) consistently showed stronger discriminative power than those seeded in left Heschl's gyrus (LHG), a finding that appears to support AH models focusing on right hemisphere abnormalities. The cortical brain localizations derived from the features with strong classification performance are consistent with proposed AH models, and include left inferior frontal gyrus (IFG), parahippocampal gyri, the cingulate cortex, as well as several temporal and prefrontal cortical brain regions. Overall, the observed findings suggest that…
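
    The classification workflow described above (subject-by-feature matrices fed to cross-validated classifiers) can be sketched generically as follows. This is a minimal illustration on synthetic data; the linear SVM, feature counts and group labels are placeholders, not the exact classifiers or LAAM/ReHo/fALFF features used in the paper.

```python
# Minimal sketch of two-group classification from rsfMRI-derived features
# (e.g., functional-connectivity or local-activity maps flattened to vectors).
# Synthetic data; classifier choice and dimensions are assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_per_group, n_features = 30, 400
# Group 0: e.g., patients without AH; group 1: patients with AH (labels illustrative).
X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_group, n_features)),
               rng.normal(0.3, 1.0, size=(n_per_group, n_features))])
y = np.array([0] * n_per_group + [1] * n_per_group)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean cross-validated accuracy: {acc.mean():.2f} (sd {acc.std():.2f})")
```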

  10. A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics

    Science.gov (United States)

    Axenie, Cristian; Richter, Christoph; Conradt, Jörg

    2016-01-01

    Biological and technical systems operate in a rich multimodal environment. Due to the diversity of incoming sensory streams a system perceives and the variety of motor capabilities a system exhibits there is no single representation and no singular unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture, inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels. The system autonomously associates and combines them into a coherent representation, given incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for self-creation and learning of the functional relations between the computational maps, encoding sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to a 3D motion estimation on a quadrotor. PMID:27775621

  11. A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics

    Directory of Open Access Journals (Sweden)

    Cristian Axenie

    2016-10-01

    Biological and technical systems operate in a rich multimodal environment. Due to the diversity of incoming sensory streams a system perceives and the variety of motor capabilities a system exhibits there is no single representation and no singular unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture, inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels. The system autonomously associates and combines them into a coherent representation, given incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for self-creation and learning of the functional relations between the computational maps, encoding sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to a 3D motion estimation on a quadrotor.

  12. Perceptual strategies of pigeons to detect a rotational centre - a hint for star compass learning?

    Directory of Open Access Journals (Sweden)

    Bianca Alert

    Birds can rely on a variety of cues for orientation during migration and homing. Celestial rotation provides the key information for the development of a functioning star and/or sun compass. This celestial compass seems to be the primary reference for calibrating the other orientation systems, including the magnetic compass. Thus, detection of the celestial rotational axis is crucial for bird orientation. Here, we use operant conditioning to demonstrate that homing pigeons can, in principle, learn to detect a rotational centre in a rotating dot pattern, and we examine their behavioural response strategies in a series of experiments. Initially, most pigeons applied a strategy based on local stimulus information such as movement characteristics of single dots. One pigeon seemed to immediately ignore eccentric stationary dots. After special training, all pigeons could shift their attention to more global cues, which implies that pigeons can learn the concept of a rotational axis. In our experiments, the ability to precisely locate the rotational centre was strongly dependent on the rotational velocity of the dot pattern and broke down at velocities that were still much faster than natural celestial rotation. We therefore suggest that the axis of the very slow, natural celestial rotation could be perceived by birds through the movement itself, but that a time-delayed pattern comparison should also be considered as a very likely alternative strategy.

  13. Influence of syllabic context on auditory-perceptual ratings of lisping in school-age children

    Directory of Open Access Journals (Sweden)

    Viviane Cristina de Castro Marino

    2012-01-01

    PURPOSE: To investigate the occurrence of lisping in fricative sounds produced by children with occlusal alterations and to analyze the influence of the syllabic context of the fricative on the auditory-perceptual judgment of lisping. METHOD: This prospective study involved auditory-perceptual identification of lisping by three experienced speech-language pathologists, who judged 428 recorded words produced by 15 children (mean age of 5 years and 1 month). The words included unvoiced alveolar and post-alveolar fricative consonants produced in initial word position, followed by the vowels [i, a, u] in stressed position. Intra-judge (almost perfect) and inter-judge (total, 100%) agreement was obtained before analyzing the data. RESULTS: Although all of the children presented lisping during at least one fricative production, it was identified in 25.23% of the analyzed words. There was a significant increase in lisping for (a) the alveolar fricative in initial onset position, (b) the alveolar fricative in initial onset relative to medial coda position (p = 0.001), and (c) the alveolar fricative relative to the post-alveolar fricative…

  14. Improvement of uncorrected visual acuity (UCVA) and contrast sensitivity (UCCS) with perceptual learning and transcranial random noise stimulation (tRNS) in individuals with mild myopia

    Directory of Open Access Journals (Sweden)

    Rebecca eCamilleri

    2014-10-01

    Perceptual learning has been shown to produce an improvement of visual acuity (VA) and contrast sensitivity (CS) both in subjects with amblyopia and in those with refractive defects such as myopia or presbyopia. Transcranial random noise stimulation (tRNS) has proven to be efficacious in accelerating neural plasticity and boosting perceptual learning in healthy participants. In this study we investigated whether a short behavioural training regime using a contrast detection task combined with online tRNS was as effective in improving visual functions in participants with mild myopia as a two-month behavioural training regime without tRNS (Camilleri et al., 2014). After two weeks of perceptual training in combination with tRNS, participants showed an improvement of 0.15 LogMAR in uncorrected VA (UCVA) that was comparable with that obtained after eight weeks of training with no tRNS, and an improvement in uncorrected CS (UCCS) at various spatial frequencies (whereas no UCCS improvement was seen after eight weeks of training with no tRNS). On the other hand, a control group that trained for two weeks without stimulation did not show any significant UCVA or UCCS improvement. These results suggest that the combination of behavioural and neuromodulatory techniques can be fast and efficacious in improving sight in individuals with mild myopia.

  15. Estradiol differentially affects auditory recognition and learning according to photoperiodic state in the adult male songbird, European starling (Sturnus vulgaris)

    Directory of Open Access Journals (Sweden)

    Rebecca M. Calisi

    2013-09-01

    Changes in hormones can affect many types of learning in vertebrates. Adults experience fluctuations in a multitude of hormones over a temporal scale, from local, rapid action to more long-term, seasonal changes. Endocrine changes during development can affect behavioral outcomes in adulthood, but how learning in adults is affected by hormone fluctuations experienced during adulthood is less understood. Previous reports have implicated the sex steroid hormone estradiol (E2) in both male and female vertebrate cognitive functioning. Here, we examined the effects of E2 on auditory recognition and learning in male European starlings (Sturnus vulgaris). European starlings are photoperiodic, seasonally breeding songbirds that undergo different periods of reproductive activity according to annual changes in day length. We simulated these reproductive periods, specifically (1) photosensitivity, (2) photostimulation, and (3) photorefractoriness, in captive birds by altering day length. During each period, we manipulated circulating E2 and examined multiple measures of learning. To manipulate circulating E2, we used subcutaneous implants containing either 17-β E2 and/or fadrozole (FAD), a highly specific aromatase inhibitor that suppresses E2 production in the body and the brain, and measured the latency for birds to learn and respond to short, male conspecific song segments (motifs). We report that photostimulated birds given E2 had higher response rates and responded with better accuracy than those given saline controls or FAD. Conversely, photosensitive animals treated with E2 responded with less accuracy than those given FAD. These results demonstrate how circulating E2 and photoperiod can interact to shape auditory recognition and learning in adults, driving it in opposite directions in different states.

  16. Perceptual-cognitive changes during motor learning: The influence of mental and physical practice on mental representation, gaze behavior, and performance of a complex action

    Directory of Open Access Journals (Sweden)

    Cornelia eFrank

    2016-01-01

    Despite the wealth of research on differences between experts and novices with respect to their perceptual-cognitive background (e.g., mental representations, gaze behavior), little is known about the change of these perceptual-cognitive components over the course of motor learning. In the present study, changes in one's mental representation, quiet eye behavior, and outcome performance were examined over the course of skill acquisition as they related to physical and mental practice. Novices (N = 45) were assigned to one of three conditions: physical practice, physical practice plus mental practice, and no practice. Participants in the practice groups trained on a golf putting task over the course of three days, either by repeatedly executing the putt, or by both executing and imaging the putt. Findings revealed improvements in putting performance across both practice conditions. Regarding the perceptual-cognitive changes, participants practicing mentally and physically revealed longer quiet eye durations as well as more elaborate representation structures in comparison to the control group, while this was not the case for participants who underwent physical practice only. Thus, in the present study, combined mental and physical practice led to both the formation of mental representations in long-term memory and longer quiet eye durations. Interestingly, the length of the quiet eye directly related to the degree of elaborateness of the underlying mental representation, supporting the notion that the quiet eye reflects cognitive processing. This study is the first to show that the quiet eye becomes longer in novices practicing a motor action. Moreover, the findings of the present study suggest that perceptual and cognitive adaptations co-occur over the course of motor learning.

  17. Perceptual inference.

    Science.gov (United States)

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Use of media technology to enhance the learning of student nurses in regards to auditory hallucinations.

    Science.gov (United States)

    Mawson, Kerry

    2014-04-01

    The aim of this study was to determine if simulation aided by media technology contributes towards an increase in knowledge, empathy, and a change in attitudes in regards to auditory hallucinations for nursing students. A convenience sample of 60 second-year undergraduate nursing students from an Australian university was invited to be part of the study. A pre-post-test design was used, with data analysed using a paired samples t-test to identify pre- and post-changes on nursing students' scores on knowledge of auditory hallucinations. Nine of the 11 questions reported statistically-significant results. The remaining two questions highlighted knowledge embedded within the curriculum, with therapeutic communication being the core work of mental health nursing. The implications for practice are that simulation aided by media technology increases the knowledge of students in regards to auditory hallucinations. © 2013 Australian College of Mental Health Nurses Inc.

  19. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent…

  20. Auditory Stimulus Processing and Task Learning Are Adequate in Dyslexia, but Benefits from Regularities Are Reduced

    Science.gov (United States)

    Daikhin, Luba; Raviv, Ofri; Ahissar, Merav

    2017-01-01

    Purpose: The reading deficit for people with dyslexia is typically associated with linguistic, memory, and perceptual-discrimination difficulties, whose relation to reading impairment is disputed. We proposed that automatic detection and usage of serial sound regularities for individuals with dyslexia is impaired (anchoring deficit hypothesis),…

  1. Effects of asymmetry and learning on phonotaxis in a robot based on the lizard auditory system

    DEFF Research Database (Denmark)

    Zhang, L.; Hallam, J.; Christensen-Dalsgaard, J.

    2012-01-01

    Lizards have strong directional hearing across a broad band of frequencies. The directionality can be attributed to the acoustical properties of the ear, especially the strong acoustical coupling of the two eardrums. The peripheral auditory system of the lizard has previously been modeled … and magnitude of their intrinsic bias. To attain effective directional hearing, the bias in the peripheral system should be compensated. In this article, with the peripheral models, we design a decision model and a behavior model, a virtual robot, to simulate the auditory system of the lizard in software…

  2. Plastic changes in the central auditory system after hearing loss, restoration of function, and during learning

    Czech Academy of Sciences Publication Activity Database

    Syka, Josef

    2002-01-01

    Vol. 82 (2002), pp. 601-636. ISSN 0031-9333. R&D Projects: GA MZd NK6454. Institutional research plan: CEZ:AV0Z5039906. Keywords: auditory system. Subject RIV: FH - Neurology. Impact factor: 26.533, year: 2002

  3. Learning to listen again: the role of compliance in auditory training for adults with hearing loss.

    Science.gov (United States)

    Chisolm, Theresa Hnath; Saunders, Gabrielle H; Frederick, Melissa T; McArdle, Rachel A; Smith, Sherri L; Wilson, Richard H

    2013-12-01

    To examine the role of compliance in the outcomes of computer-based auditory training with the Listening and Communication Enhancement (LACE) program in Veterans using hearing aids. The authors examined available LACE training data for 5 tasks (i.e., speech-in-babble, time compression, competing speaker, auditory memory, missing word) from 50 hearing-aid users who participated in a larger, randomized controlled trial designed to examine the efficacy of LACE training. The goals were to determine: (a) whether there were changes in performance over 20 training sessions on trained tasks (i.e., on-task outcomes); and (b) whether compliance, defined as completing all 20 sessions, vs. noncompliance, defined as completing less than 20 sessions, influenced performance on parallel untrained tasks (i.e., off-task outcomes). The majority, 84% of participants, completed 20 sessions, with maximum outcome occurring with at least 10 sessions of training for some tasks and up to 20 sessions of training for others. Comparison of baseline to posttest performance revealed statistically significant improvements for 4 of 7 off-task outcome measures for the compliant group, with at least small (0.2) effect sizes. … Compliance in the present study may be attributable to the use of systematized verbal and written instructions with telephone follow-up. Compliance, as expected, appears important for optimizing the outcomes of auditory training. Methods to improve compliance in clinical populations need to be developed, and compliance data are important to report in future studies of auditory training.

  4. The Rey Auditory Verbal Learning Test forced-choice recognition task: Base-rate data and norms.

    Science.gov (United States)

    Poreh, Amir; Bezdicek, Ondrej; Korobkova, Irina; Levin, Jennifer B; Dines, Philipp

    2016-01-01

    The present study describes a novel Forced-Choice Response (FCR) index for detecting poor effort on the Rey Auditory Verbal Learning Test (RAVLT). This retrospective study analyzes the performance of 4 groups on the new index: clinically referred patients with suspected dementia, forensic patients identified as not exhibiting adequate effort on other measures of response bias, students who simulated poor effort, and a large normative sample collected in the Gulf State of Oman. Using sensitivity and specificity analyses, the study shows that much like the California Verbal Learning Test-Second Edition FCR index, the RAVLT FCR index misses a proportion of individuals with inadequate effort (low sensitivity), but those who fail this measure are highly likely to be exhibiting poor effort (high specificity). The limitations and benefits of utilizing the RAVLT FCR index in clinical practice are discussed.
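
    As a concrete illustration of the sensitivity/specificity analysis mentioned above, the sketch below computes both quantities for a hypothetical pass/fail cutoff on a forced-choice recognition score. The cutoff, scores and effort labels are invented for illustration and are not the cutoffs or data from the study.

```python
# Sketch of the sensitivity/specificity logic behind a validity cutoff such as
# the FCR index discussed above. The cutoff, scores and labels are hypothetical.
import numpy as np

def sens_spec(scores, poor_effort, cutoff):
    """Flag scores below `cutoff` as failed; compare with known effort labels."""
    failed = scores < cutoff
    sensitivity = np.mean(failed[poor_effort])    # poor effort correctly flagged
    specificity = np.mean(~failed[~poor_effort])  # adequate effort correctly passed
    return sensitivity, specificity

# Hypothetical forced-choice recognition scores (max 15) and effort labels
# (True = known poor effort on independent validity measures).
scores = np.array([15, 14, 15, 13, 12, 15, 9, 8, 13, 7])
poor_effort = np.array([False, False, False, False, False, False,
                        True, True, True, True])

sens, spec = sens_spec(scores, poor_effort, cutoff=12)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # here: 0.75 and 1.00
```

    In this toy example the cutoff misses one poor-effort case but passes every adequate-effort case, mirroring the low-sensitivity, high-specificity pattern described for the FCR index.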

  5. Direct current induced short-term modulation of the left dorsolateral prefrontal cortex while learning auditory presented nouns

    Directory of Open Access Journals (Sweden)

    Meyer Martin

    2009-07-01

    Background: Little is known about the contribution of transcranial direct current stimulation (tDCS) to the exploration of memory functions. The aim of the present study was to examine the behavioural effects of right- or left-hemisphere frontal direct current delivery, while committing auditorily presented nouns to memory, on short-term learning and subsequent long-term retrieval. Methods: Twenty subjects, divided into two groups, performed an episodic verbal memory task during anodal, cathodal and sham current application over the right or left dorsolateral prefrontal cortex (DLPFC). Results: Our results imply that only cathodal tDCS elicits behavioural effects on verbal memory performance. In particular, left-sided application of cathodal tDCS impaired short-term verbal learning when compared to the baseline. We did not observe tDCS effects on long-term retrieval. Conclusion: Our results imply that the left DLPFC is a crucial area involved in short-term verbal learning mechanisms. However, we found further support that direct current delivery with an intensity of 1.5 mA to the DLPFC during short-term learning does not disrupt longer-lasting consolidation processes that are mainly known to be related to mesial temporal lobe areas. In the present study, we have shown that the tDCS technique has the potential to modulate short-term verbal learning mechanisms.

  6. Vocal behavior during the menstrual cycle: perceptual-auditory, acoustic and self-perception analysis

    Directory of Open Access Journals (Sweden)

    Luciane C. de Figueiredo

    2004-06-01

    Dysphonia commonly occurs during the premenstrual period, yet few women are aware of this voice variation within the menstrual cycle (Quinteiro, 1989). AIM: To verify whether the vocal pattern of women differs between the ovulation period and the first day of the menstrual cycle, using perceptual-auditory analysis, spectrography and acoustic parameters, and, when such a difference is present, whether it is perceived by the women themselves. STUDY DESIGN: Case-control. MATERIAL AND METHOD: The sample consisted of 30 speech-language pathology students aged 18 to 25 years, non-smokers, with regular menstrual cycles and not using oral contraceptives. Voices were recorded on the first day of menstruation and on the thirteenth day after menstruation (ovulation) for later comparison. RESULTS: During the menstrual period the voices were mildly to moderately rough and breathy, unstable, without phonation breaks, with adequate pitch and loudness and balanced resonance. Harmonics were less well defined, with more noise between them and a reduced extension of the upper harmonics. We found a higher f0, increased jitter and shimmer, and decreased PHR. CONCLUSION: In the menstrual period there are changes in vocal quality, in the behavior of the harmonics and in the vocal parameters (f0, jitter, shimmer and PHR). Moreover, most of the speech-language pathology students did not perceive the voice variation during the menstrual cycle.

  7. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  8. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  9. Sharpened cortical tuning and enhanced cortico-cortical communication contribute to the long-term neural mechanisms of visual motion perceptual learning.

    Science.gov (United States)

    Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang

    2015-07-15

    Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.
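
    The claim above that the behavioral improvement was "well explained by a linear combination of the selectivity and connectivity increases" amounts to a two-predictor linear regression. The toy sketch below illustrates that kind of analysis on synthetic numbers; the variable names and weights are assumptions for illustration, not values from the study.

```python
# Toy illustration of a "linear combination" analysis: behavioral improvement
# modeled as a weighted sum of the change in neural selectivity and the change
# in effective connectivity across subjects. Synthetic numbers throughout.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 20                                              # hypothetical subjects
d_selectivity = rng.normal(size=n)                  # change in sensory tuning
d_connectivity = rng.normal(size=n)                 # change in sensory-to-decision coupling
improvement = 0.6 * d_selectivity + 0.4 * d_connectivity + rng.normal(scale=0.2, size=n)

X = np.column_stack([d_selectivity, d_connectivity])
fit = LinearRegression().fit(X, improvement)
print("fitted weights:", np.round(fit.coef_, 2),
      "R^2 =", round(fit.score(X, improvement), 2))
```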

  10. Musically cued gait-training improves both perceptual and motor timing in Parkinson’s disease

    OpenAIRE

    Benoit, C.; Dalla Bella, S.; Farrugia, N.; Obrig, H.; Mainka, S.; Kotz, S.

    2014-01-01

    It is well established that auditory cueing improves gait in patients with idiopathic Parkinson’s disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory ...

  11. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians

    Directory of Open Access Journals (Sweden)

    Kumar, Prawin

    2015-12-01

    Introduction: Enhanced auditory perception in musicians is likely to result from auditory perceptual learning during several years of training and practice. Many studies have focused on biological processing of auditory stimuli among musicians. However, there is a lack of literature on temporal resolution and active auditory discrimination skills in vocal musicians. Objective: The aim of the present study is to assess temporal resolution and active auditory discrimination skill in vocal musicians. Method: The study participants included 15 vocal musicians with a minimum professional experience of 5 years of music exposure, within the age range of 20 to 30 years, as the experimental group, while 15 age-matched non-musicians served as the control group. We used duration discrimination with pure tones, pulse-train duration discrimination, and gap detection threshold tasks to assess temporal processing skills in both groups. Similarly, we assessed active auditory discrimination skill in both groups using the Differential Limen of Frequency (DLF). All tasks were administered at 40 dB SL with a maximum-likelihood procedure, using MATLAB software installed on a personal computer. The collected data were analyzed using SPSS (version 17.0). Result: Descriptive statistics showed better thresholds for vocal musicians compared with non-musicians on all tasks. Further, an independent t-test showed that vocal musicians performed significantly better than non-musicians on duration discrimination with pure tones, pulse-train duration discrimination, gap detection threshold, and differential limen of frequency. Conclusion: The present study showed enhanced temporal resolution ability and a better (lower) active discrimination threshold in vocal musicians in comparison to non-musicians.
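
    The group comparison reported above boils down to independent-samples t-tests on threshold estimates. A minimal sketch follows, assuming hypothetical gap-detection thresholds in milliseconds; the values are invented and are not data from the study.

```python
# Illustrative group comparison in the spirit of the analysis above: an
# independent-samples t-test on gap-detection thresholds (ms) for vocal
# musicians vs. non-musicians. All numbers are made up for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
musicians = rng.normal(loc=3.0, scale=0.8, size=15)      # lower threshold = better
non_musicians = rng.normal(loc=4.5, scale=1.0, size=15)

t, p = stats.ttest_ind(musicians, non_musicians)
print(f"t(28) = {t:.2f}, p = {p:.4f}")
```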

  12. Perceptual load interacts with stimulus processing across sensory modalities.

    Science.gov (United States)

    Klemen, J; Büchel, C; Rose, M

    2009-06-01

    According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.

  13. Auditory Training for Adults Who Have Hearing Loss: A Comparison of Spaced Versus Massed Practice Schedules.

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent; Barcroft, Joe; Sommers, Mitchell

    2017-08-16

    The spacing effect in human memory research refers to situations in which people learn items better when they study items in spaced intervals rather than massed intervals. This investigation was conducted to compare the efficacy of meaning-oriented auditory training when administered with a spaced versus massed practice schedule. Forty-seven adult hearing aid users received 16 hr of auditory training. Participants in a spaced group (mean age = 64.6 years, SD = 14.7) trained twice per week, and participants in a massed group (mean age = 69.6 years, SD = 17.5) trained for 5 consecutive days each week. Participants completed speech perception tests before training, immediately following training, and then 3 months later. In line with transfer appropriate processing theory, tests assessed both trained tasks and an untrained task. Auditory training improved the speech recognition performance of participants in both groups. Benefits were maintained for 3 months. No effect of practice schedule was found on overall benefits achieved, on retention of benefits, nor on generalizability of benefits to nontrained tasks. The lack of spacing effect in otherwise effective auditory training suggests that perceptual learning may be subject to different influences than are other types of learning, such as vocabulary learning. Hence, clinicians might have latitude in recommending training schedules to accommodate patients' schedules.

  14. Performance of normal adults on the Rey Auditory Verbal Learning Test: a pilot study

    Directory of Open Access Journals (Sweden)

    Leila Cardoso Teruya

    2009-06-01

    Full Text Available The present study aimed to assess the performance of healthy Brazilian adults on the Rey Auditory Verbal Learning Test (RAVLT), a test devised for assessing memory, and to investigate the influence of the variables age, sex and education on the performance obtained, and finally to suggest scores which may be adopted for assessing memory with this instrument. The performance of 130 individuals, subdivided into groups according to age and education, was assessed. Overall performance decreased with age. Schooling presented a strong and positive relationship with scores on all subitems analyzed except learning, for which no influence was found. Mean scores of the subitems analyzed did not differ significantly between men and women, except for the delayed recall subitem. This manuscript describes RAVLT scores according to age and education. In summary, this is a pilot study that presents a profile of Brazilian adults on the A1, A7, recognition and LOT subitems.
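
For readers unfamiliar with the sub-scores mentioned at the end of this record (A1, A7, recognition, LOT), the sketch below shows one conventional way of deriving them from the raw RAVLT trial counts. The LOT formula (sum of trials A1-A5 minus five times A1) is the commonly used definition and is an assumption here, since the article's exact scoring rules are not reproduced in the record.

```python
# Hedged sketch of common RAVLT composite scores; not the authors' scoring code.
from dataclasses import dataclass

@dataclass
class RavltRecord:
    a1_to_a5: list[int]   # words recalled on learning trials A1..A5
    a6: int               # recall after the interference list (B)
    a7: int               # delayed recall (about 30 minutes later)
    recognition: int      # hits on the recognition trial

    def lot(self) -> int:
        """Learning Over Trials: total learning beyond the first trial."""
        return sum(self.a1_to_a5) - 5 * self.a1_to_a5[0]

    def forgetting(self) -> int:
        """Words lost between the last learning trial and delayed recall."""
        return self.a1_to_a5[-1] - self.a7

record = RavltRecord(a1_to_a5=[6, 9, 11, 12, 13], a6=11, a7=12, recognition=14)
print(f"LOT = {record.lot()}, forgetting = {record.forgetting()}")
```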

  15. Psychophysical indices of perceptual functioning in dyslexia: A psychometric analysis

    OpenAIRE

    Heath, Steve M.; Bishop, Dorothy V. M.; Hogben, John H.; Roach, Neil W.

    2006-01-01

    An influential causal theory attributes dyslexia to visual and/or auditory perceptual deficits. This theory derives from group differences between individuals with dyslexia and controls on a range of psychophysical tasks, but there is substantial variation, both between individuals within a group and from task to task. We addressed two questions. First, do psychophysical measures have sufficient reliability to assess perceptual deficits in individuals? Second, do different psychophysical task...

  16. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate to cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  17. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  18. Auditory Training for Children with Processing Disorders.

    Science.gov (United States)

    Katz, Jack; Cohen, Carolyn F.

    1985-01-01

    The article provides an overview of central auditory processing (CAP) dysfunction and reviews research on approaches to improve perceptual skills; to provide discrimination training for communicative and reading disorders; to increase memory and analysis skills and dichotic listening; to provide speech-in-noise training; and to amplify speech as…

  19. Frequent video game players resist perceptual interference.

    Directory of Open Access Journals (Sweden)

    Aaron V Berard

    Full Text Available Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT, a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust over-night improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect will not occur if a one-hour break is allowed in between the changes. These results have suggested that after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers have faster stabilization of perceptual learning compared to non-gamers and examined the effect of daily video game playing on interference of training of TDT with one background orientation on perceptual learning of TDT with a different background orientation. As a result, we found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results with the interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.

  20. Frequent video game players resist perceptual interference.

    Science.gov (United States)

    Berard, Aaron V; Cain, Matthew S; Watanabe, Takeo; Sasaki, Yuka

    2015-01-01

    Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT), a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust over-night improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect will not occur if a one-hour break is allowed in between the changes. These results have suggested that after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers have faster stabilization of perceptual learning compared to non-gamers and examined the effect of daily video game playing on interference of training of TDT with one background orientation on perceptual learning of TDT with a different background orientation. As a result, we found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results with the interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.

  1. The speed and accuracy of perceptual decisions in a random-tone pitch task

    NARCIS (Netherlands)

    Mulder, M.J.; Keuken, M.C.; van Maanen, L.; Boekel, W.E.; Forstmann, B.U.; Wagenmakers, E.J.

    2013-01-01

    Research in perceptual decision making is dominated by paradigms that tap the visual system, such as the random-dot motion (RDM) paradigm. In this study, we investigated whether the behavioral signature of perceptual decisions in the auditory domain is similar to those observed in the visual domain.

  2. Brief Daily Exposures to Asian Females Reverses Perceptual Narrowing for Asian Faces in Caucasian Infants

    Science.gov (United States)

    Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang

    2012-01-01

    Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins during infancy. The current study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants' visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate…

  3. The combination of appetitive and aversive reinforcers and the nature of their interaction during auditory learning.

    Science.gov (United States)

    Ilango, A; Wetzel, W; Scheich, H; Ohl, F W

    2010-03-31

    Learned changes in behavior can be elicited by either appetitive or aversive reinforcers. It is, however, not clear whether the two types of motivation (approaching appetitive stimuli and avoiding aversive stimuli) drive learning in the same or different ways, nor is their interaction understood in situations where the two types are combined in a single experiment. To investigate this question we have developed a novel learning paradigm for Mongolian gerbils, which not only allows rewards and punishments to be presented in isolation or in combination with each other, but also can use these opposite reinforcers to drive the same learned behavior. Specifically, we studied learning of tone-conditioned hurdle crossing in a shuttle box driven by either an appetitive reinforcer (brain stimulation reward) or an aversive reinforcer (electrical footshock), or by a combination of both. Combination of the two reinforcers potentiated speed of acquisition, led to maximum possible performance, and delayed extinction as compared to either reinforcer alone. Additional experiments, using partial reinforcement protocols and experiments in which one of the reinforcers was omitted after the animals had been previously trained with the combination of both reinforcers, indicated that appetitive and aversive reinforcers operated together but acted in different ways: in this particular experimental context, punishment appeared to be more effective for initial acquisition and reward more effective to maintain a high level of conditioned responses (CRs). The results imply that learning mechanisms in problem solving were maximally effective when the initial punishment of mistakes was combined with the subsequent rewarding of correct performance. Copyright 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. Auditory Evoked Potential: a proposal for further evaluation in children with learning disabilities

    Directory of Open Access Journals (Sweden)

    Ana Claudia Figueiredo Frizzo

    2015-06-01

    Full Text Available The information presented in this paper demonstrates the author's experience in previous cross-sectional studies conducted in Brazil, in comparison with the current literature. Over the last ten years, AEP has been used in children with learning disabilities. This method is critical for analyzing the quality of processing over time and indicates the specific neural demands and circuits of the sensory and cognitive processes in this clinical population. Some studies of children with dyslexia and learning disabilities are presented here to illustrate the use of AEP in this population.

  5. Age and education adjusted normative data and discriminative validity for Rey's Auditory Verbal Learning Test in the elderly Greek population.

    Science.gov (United States)

    Messinis, Lambros; Nasios, Grigorios; Mougias, Antonios; Politis, Antonis; Zampakis, Petros; Tsiamaki, Eirini; Malefaki, Sonia; Gourzis, Phillipos; Papathanasopoulos, Panagiotis

    2016-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a widely used neuropsychological test to assess episodic memory. In the present study we sought to establish normative and discriminative validity data for the RAVLT in the elderly population using previously adapted learning lists for the Greek adult population. We administered the test to 258 cognitively healthy elderly participants, aged 60-89 years, and two patient groups (192 with amnestic mild cognitive impairment, aMCI, and 65 with Alzheimer's disease, AD). From the statistical analyses, we found that age and education contributed significantly to most trials of the RAVLT, whereas the influence of gender was not significant. Younger elderly participants with higher education outperformed the older elderly with lower education levels. Moreover, both clinical groups performed significantly worse on most RAVLT trials and composite measures than matched cognitively healthy controls. Furthermore, the AD group performed more poorly than the aMCI group on most RAVLT variables. Receiver operating characteristic (ROC) analysis was used to examine the utility of the RAVLT trials to discriminate cognitively healthy controls from aMCI and AD patients. Area under the curve (AUC), an index of effect size, showed that most of the RAVLT measures (individual and composite) included in this study adequately differentiated between the performance of healthy elders and aMCI/AD patients. We also provide cutoff scores in discriminating cognitively healthy controls from aMCI and AD patients, based on the sensitivity and specificity of the prescribed scores. Moreover, we present age- and education-specific normative data for individual and composite scores for the Greek adapted RAVLT in elderly subjects aged between 60 and 89 years for use in clinical and research settings.
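
The ROC analysis described in this record can be reproduced in outline as follows: rank participants by an RAVLT score, compute the area under the ROC curve as the effect-size index, and pick a cutoff from sensitivity and specificity. The sketch below uses synthetic scores and the Youden index as the cutoff rule; both are assumptions for illustration, not the study's data or criterion.

```python
# Illustrative ROC/AUC computation with scikit-learn on made-up RAVLT totals.
import numpy as np
from sklearn.metrics import roc_curve, auc

labels = np.array([0] * 8 + [1] * 8)                 # 0 = healthy control, 1 = aMCI/AD
scores = np.array([52, 48, 45, 50, 43, 47, 55, 49,   # controls: higher totals
                   30, 28, 35, 25, 33, 29, 38, 27])  # patients: lower totals

# Lower memory scores indicate impairment, so negate them for roc_curve.
fpr, tpr, thresholds = roc_curve(labels, -scores)
print(f"AUC = {auc(fpr, tpr):.2f}")

best = np.argmax(tpr - fpr)                          # Youden index J = sens + spec - 1
print(f"Suggested cutoff: total score <= {-thresholds[best]:.0f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```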

  6. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

   7. Motivational Gaps and Perceptual Bias of Initial Motivation: Additional Indicators of Quality for e-Learning Courses

    Science.gov (United States)

    Cação, Rosário

    2017-01-01

    We describe a study on the motivation of trainees in e-learning-based professional training and on the effect of their motivation upon the perceptions they build about the quality of the courses. We propose the concepts of "perceived motivational gap" and "real motivational gap" as indicators of e-learning quality, which…

  8. On the learning difficulty of visual and auditory modal concepts: Evidence for a single processing system.

    Science.gov (United States)

    Vigo, Ronaldo; Doan, Karina-Mikayla C; Doan, Charles A; Pinegar, Shannon

    2018-02-01

    The logic operators (e.g., "and," "or," "if, then") play a fundamental role in concept formation, syntactic construction, semantic expression, and deductive reasoning. In spite of this very general and basic role, there are relatively few studies in the literature that focus on their conceptual nature. In the current investigation, we examine, for the first time, the learning difficulty experienced by observers in classifying members belonging to these primitive "modal concepts" instantiated with sets of acoustic and visual stimuli. We report results from two categorization experiments that suggest the acquisition of acoustic and visual modal concepts is achieved by the same general cognitive mechanism. Additionally, we attempt to account for these results with two models of concept learning difficulty: the generalized invariance structure theory model (Vigo in Cognition 129(1):138-162, 2013, Mathematical principles of human conceptual behavior, Routledge, New York, 2014) and the generalized context model (Nosofsky in J Exp Psychol Learn Mem Cogn 10(1):104-114, 1984, J Exp Psychol 115(1):39-57, 1986).
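
Of the two models named in this record, the generalized context model has a compact closed form, so a small sketch may help: category membership is predicted from summed exemplar similarity, with similarity decaying exponentially with distance. The sensitivity parameter, distance metric, and stimuli below are illustrative assumptions, not fitted values from the study.

```python
# Minimal generalized context model (GCM) sketch for binary-featured stimuli.
import numpy as np

def gcm_choice_prob(probe, exemplars_a, exemplars_b, c=2.0, r=1):
    """P(category A | probe) from summed exponential similarity to exemplars."""
    def similarity(x, y):
        d = np.sum(np.abs(np.asarray(x) - np.asarray(y)) ** r) ** (1.0 / r)
        return np.exp(-c * d)
    sim_a = sum(similarity(probe, e) for e in exemplars_a)
    sim_b = sum(similarity(probe, e) for e in exemplars_b)
    return sim_a / (sim_a + sim_b)

# Category A = stimuli satisfying "feature1 AND feature2"; B = the remainder.
A = [(1, 1, 0), (1, 1, 1)]
B = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (0, 0, 1), (1, 0, 0), (0, 1, 1)]
for probe in [(1, 1, 0), (0, 0, 1)]:
    print(f"P(A | {probe}) = {gcm_choice_prob(probe, A, B):.2f}")
```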

  9. Biased and unbiased perceptual decision-making on vocal emotions.

    Science.gov (United States)

    Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha

    2017-11-24

    Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region of interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data shows that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.

  10. Learning Style Preferences of Iranian EFL High School Students

    Directory of Open Access Journals (Sweden)

    Reza Vaseghi

    2013-05-01

    Full Text Available The current study examined the learning style preferences of 75 Iranian students (41 female, 34 male) at Marefat high school in Kuala Lumpur. As very few studies have investigated the learning style preferences of Iranian high school students, this study attempts to fill this gap. To this end, Reid's Perceptual Learning Style Preferences Questionnaire was used to identify the students' preferred learning styles (Visual, Auditory, Kinesthetic, Tactile, Group, and Individual). Results indicated that all six learning style preferences considered in the questionnaire were positively preferred. Overall, kinesthetic and tactile learning were major learning styles; auditory, group, visual, and individual were minor.

  11. Dissociable mechanisms of speed-accuracy tradeoff during visual perceptual learning are revealed by a hierarchical drift diffusion model

    Directory of Open Access Journals (Sweden)

    Jiaxiang eZhang

    2014-04-01

    Full Text Available Two phenomena are commonly observed in decision-making. First, there is a speed-accuracy tradeoff such that decisions are slower and more accurate when instructions emphasize accuracy over speed, and vice versa. Second, decision performance improves with practice, as a task is learnt. The speed-accuracy tradeoff and learning effects have been explained under a well-established evidence-accumulation framework for decision-making, which suggests that evidence supporting each choice is accumulated over time, and a decision is committed to when the accumulated evidence reaches a decision boundary. This framework suggests that changing the decision boundary creates the tradeoff between decision speed and accuracy, while increasing the rate of accumulation leads to more accurate and faster decisions after learning. However, recent studies challenged the view that speed-accuracy tradeoff and learning are associated with changes in distinct, single decision parameters. Further, the influence of speed-accuracy instructions over the course of learning remains largely unknown. Here, we used a hierarchical drift-diffusion model to examine the speed-accuracy tradeoff during learning of a coherent motion discrimination task across multiple training sessions, and a transfer test session. The influence of speed-accuracy instructions was robust over training and generalized across untrained stimulus features. Emphasizing decision accuracy rather than speed was associated with increased boundary separation, drift rate and non-decision time at the beginning of training. However, after training, an emphasis on decision accuracy was only associated with increased boundary separation. In addition, faster and more accurate decisions after learning were due to a gradual decrease in boundary separation and an increase in drift rate. The results suggest that speed-accuracy instructions and learning differentially shape decision-making processes at different time scales.
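
The decision parameters discussed in this record (boundary separation, drift rate, non-decision time) can be made concrete with a simple simulation: evidence accumulates noisily toward one of two boundaries, and the reported effects correspond to widening the boundaries (accuracy emphasis) or steepening the drift (learning). The sketch below is a plain fixed-parameter simulation, not the hierarchical Bayesian fit used in the paper, and all parameter values are invented.

```python
# Toy drift-diffusion simulation: mean RT and accuracy for three parameter sets.
import numpy as np

def simulate_ddm(v, a, t0, n_trials=500, dt=0.002, noise=1.0, seed=1):
    rng = np.random.default_rng(seed)
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = a / 2.0, 0.0                   # unbiased start between boundaries
        while 0.0 < x < a:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + t0)                    # add non-decision time
        hits.append(x >= a)                   # upper boundary = correct response
    return np.mean(rts), np.mean(hits)

conditions = {"speed emphasis":    dict(v=1.5, a=1.0),
              "accuracy emphasis": dict(v=1.5, a=2.0),
              "after learning":    dict(v=3.0, a=1.0)}
for label, pars in conditions.items():
    rt, acc = simulate_ddm(t0=0.3, **pars)
    print(f"{label:17s}  mean RT = {rt:.2f} s, accuracy = {acc:.2f}")
```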

  12. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  13. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  14. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
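
A minimal sketch of how adaptation is often quantified in formant-shift paradigms like the one summarized here: each trial's produced F1 is expressed relative to the pre-shift baseline, averaged over the hold phase, and signed so that changes opposing the feedback shift count as positive adaptation. The trial values and phase boundaries below are invented for illustration and are not the study's data or analysis code.

```python
# Quantifying auditory-motor adaptation from produced F1 values (synthetic data).
import numpy as np

f1_hz = np.array([600, 605, 598, 602,           # baseline trials (unaltered feedback)
                  590, 580, 572, 568, 565,      # ramp/hold: feedback F1 shifted up
                  566, 564, 563, 562, 561])     # late hold phase
baseline = f1_hz[:4].mean()
hold = f1_hz[-5:]
shift_sign = +1                                 # feedback F1 was shifted upward

adaptation_hz = -shift_sign * (hold.mean() - baseline)   # opposing change is positive
adaptation_pct = 100.0 * adaptation_hz / baseline
print(f"Adaptation: {adaptation_hz:.1f} Hz ({adaptation_pct:.1f}% of baseline F1)")
```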

  15. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
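
To make the penalty-scoring idea in this record concrete, the sketch below scores candidate schedules of overlapping messages and keeps the one with the lowest penalty, penalizing simultaneous messages in the same spectral register and starts that exceed a message's maximum acceptable latency. The message attributes, penalty terms, and weights are invented for the example; they are not taken from the Centralized Audio Presentation System or the Computational Auditory Scene Synthesizer.

```python
# Toy heuristic scheduler: pick the start times with the lowest perceptual penalty.
from dataclasses import dataclass
from itertools import product

@dataclass
class Message:
    name: str
    duration: float      # seconds
    register: int        # coarse spectral register, 0 (low) .. 2 (high)
    priority: int        # higher = more urgent
    max_latency: float   # latest acceptable start time (s)

def penalty(schedule):
    score = 0.0
    for (m1, t1), (m2, t2) in product(schedule, repeat=2):
        if m1 is m2:
            continue
        overlap = max(0.0, min(t1 + m1.duration, t2 + m2.duration) - max(t1, t2))
        if overlap > 0 and m1.register == m2.register:
            score += 10.0 * overlap                  # same-register overlap masks
    score += sum(max(0.0, t - m.max_latency) * m.priority for m, t in schedule)
    return score

msgs = [Message("alarm",  1.0, 2, priority=3, max_latency=0.0),
        Message("status", 2.0, 2, priority=1, max_latency=2.0),
        Message("mail",   1.5, 0, priority=1, max_latency=3.0)]
starts = [0.0, 0.5, 1.0, 1.5, 2.0]
best = min((list(zip(msgs, s)) for s in product(starts, repeat=len(msgs))), key=penalty)
for m, t in best:
    print(f"{m.name:6s} starts at {t:.1f} s")
```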

  16. Performance of students with learning disabilities and dyslexia on auditory processing tests

    Directory of Open Access Journals (Sweden)

    Adriana Marques de Oliveira

    2011-06-01

    Full Text Available PURPOSE: to characterize and compare, by means of behavioral tests, the auditory processing of students with an interdisciplinary diagnosis of (I) learning disorder, (II) dyslexia, and (III) students with good academic performance. METHODS: 30 students of both genders, aged 8 to 16 years and attending the 2nd to 4th grades of elementary school, took part in the study and were divided into three groups: GI, 10 students with an interdisciplinary diagnosis of learning disorder; GII, 10 students with an interdisciplinary diagnosis of dyslexia; and GIII, 10 students without learning difficulties, matched to GI and GII by gender and age. Audiological and auditory processing assessments were carried out. RESULTS: the students in GIII outperformed those in GI and GII on the auditory processing tests. GI showed poorer performance on the auditory abilities assessed by the dichotic tests of digits and alternating dissyllables, pediatric speech audiometry, sound localization, and verbal and non-verbal memory tasks, whereas GII showed the same alterations as GI except on the pediatric speech audiometry test. CONCLUSION: the students with learning disorders performed more poorly on the auditory processing tests; those with learning disorder showed a larger number of altered auditory abilities than those with dyslexia, owing to their reduced sustained attention, while the dyslexia group showed alterations related to difficulty in coding and decoding auditory stimuli.

  17. Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Charles-Etienne eBenoit

    2014-07-01

    Full Text Available It is well established that auditory cueing improves gait in patients with Idiopathic Parkinson’s Disease (IPD. Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a four-week music training program with rhythmic auditory cueing. Long-term effects were assessed one month after the end of the training. Perceptual and motor timing was evaluated with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients’ performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts. The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.

  18. Two-Photon Functional Imaging of the Auditory Cortex in Behaving Mice: From Neural Networks to Single Spines

    Directory of Open Access Journals (Sweden)

    Ruijie Li

    2018-04-01

    Full Text Available In vivo two-photon Ca2+ imaging is a powerful tool for recording neuronal activities during perceptual tasks and has been increasingly applied to behaving animals for acute or chronic experiments. However, the auditory cortex is not easily accessible to imaging because of the abundant temporal muscles, arteries around the ears and their lateral locations. Here, we report a protocol for two-photon Ca2+ imaging in the auditory cortex of head-fixed behaving mice. By using a custom-made head fixation apparatus and a head-rotated fixation procedure, we achieved two-photon imaging, in combination with targeted cell-attached recordings, of auditory cortical neurons in behaving mice. Using synthetic Ca2+ indicators, we recorded the Ca2+ transients at multiple scales, including neuronal populations, single neurons, dendrites and single spines, in auditory cortex during behavior. Furthermore, using genetically encoded Ca2+ indicators (GECIs), we monitored the neuronal dynamics over days throughout the process of associative learning. Therefore, we achieved two-photon functional imaging at multiple scales in auditory cortex of behaving mice, which extends the toolbox for investigating the neural basis of audition-related behaviors.

  19. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  20. Development and evaluation of the LiSN & learn auditory training software for deficit-specific remediation of binaural processing deficits in children: preliminary findings.

    Science.gov (United States)

    Cameron, Sharon; Dillon, Harvey

    2011-01-01

    The LiSN & Learn auditory training software was developed specifically to improve binaural processing skills in children with suspected central auditory processing disorder who were diagnosed as having a spatial processing disorder (SPD). SPD is defined here as a condition whereby individuals are deficient in their ability to use binaural cues to selectively attend to sounds arriving from one direction while simultaneously suppressing sounds arriving from another. As a result, children with SPD have difficulty understanding speech in noisy environments, such as in the classroom. To develop and evaluate the LiSN & Learn auditory training software for children diagnosed with the Listening in Spatialized Noise-Sentences Test (LiSN-S) as having an SPD. The LiSN-S is an adaptive speech-in-noise test designed to differentially diagnose spatial and pitch-processing deficits in children with suspected central auditory processing disorder. Participants were nine children (aged between 6 yr, 9 mo, and 11 yr, 4 mo) who performed outside normal limits on the LiSN-S. In a pre-post study of treatment outcomes, participants trained on the LiSN & Learn for 15 min per day for 12 weeks. Participants acted as their own control. Participants were assessed on the LiSN-S, as well as tests of attention and memory and a self-report questionnaire of listening ability. Performance on all tasks was reassessed after 3 mo where no further training occurred. The LiSN & Learn produces a three-dimensional auditory environment under headphones on the user's home computer. The child's task was to identify a word from a target sentence presented in background noise. A weighted up-down adaptive procedure was used to adjust the signal level of the target based on the participant's response. On average, speech reception thresholds on the LiSN & Learn improved by 10 dB over the course of training. As hypothesized, there were significant improvements in posttraining performance on the LiSN-S conditions
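
The record mentions a weighted up-down adaptive procedure for tracking the speech reception threshold. As a hedged illustration of how such a track behaves, the sketch below lowers the target level after correct responses and raises it after errors, with the up/down step ratio chosen (following Kaernbach's weighting rule) so that the track converges near a 75%-correct point; the step sizes, target point, and simulated listener are assumptions, not the LiSN & Learn parameters.

```python
# Weighted up-down staircase converging on a tracked percent-correct point.
import numpy as np

def weighted_up_down(true_50pct_level_db=-12.0, n_trials=60, target_pc=0.75, seed=2):
    rng = np.random.default_rng(seed)
    step_down = 2.0                                       # dB after a correct response
    step_up = step_down * target_pc / (1.0 - target_pc)   # dB after an error
    level, last_direction, reversals = 0.0, None, []
    for _ in range(n_trials):
        # Simulated listener with a logistic psychometric function (~1 dB slope).
        p_correct = 1.0 / (1.0 + np.exp(-(level - true_50pct_level_db)))
        correct = rng.random() < p_correct
        direction = -1 if correct else +1
        if last_direction is not None and direction != last_direction:
            reversals.append(level)
        last_direction = direction
        level += -step_down if correct else step_up
    return np.mean(reversals[-6:])        # estimate of the tracked point from reversals

print(f"Estimated threshold (75%-correct point): {weighted_up_down():.1f} dB SNR")
```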

  1. Prediction of kindergarteners' behavior on Metropolitan Readiness Tests from preschool perceptual and perceptual-motor performances: a validation study.

    Science.gov (United States)

    Belka, D E

    1981-06-01

    Multiple regression equations were generated to predict cognitive achievement for 40 children (ages 57 to 68 mo.) 1 yr. after administration of a battery of 6 perceptual and perceptual-motor tests to determine if previous results from Toledo could be replicated. Regression equations generated from maximum R2 improvement techniques indicated that performance at prekindergarten is useful for prediction of cognitive performance (total score and total score without the copying subtest on the Metropolitan Readiness Tests) 1 yr. later at the end of kindergarten. The optimal battery included scores on auditory perception, fine perceptual-motor, and gross perceptual-motor tasks. The moderate predictive power of the equations obtained was compared with high predictive power generated in the Toledo study.
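
As a sketch of the "maximum R-squared improvement" style of equation building mentioned in this record, the code below runs a simple forward selection: at each step it enters the predictor whose addition most increases R-squared. The data are synthetic, scikit-learn is assumed to be available, and the variable-swapping refinement of the full MAXR technique is omitted.

```python
# Forward selection by greatest R-squared improvement (synthetic example).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 40
predictors = {"auditory_perception":    rng.normal(size=n),
              "fine_perceptual_motor":  rng.normal(size=n),
              "gross_perceptual_motor": rng.normal(size=n)}
y = (0.6 * predictors["auditory_perception"]
     + 0.4 * predictors["fine_perceptual_motor"]
     + rng.normal(scale=0.8, size=n))            # synthetic readiness total score

selected, remaining = [], list(predictors)
while remaining:
    def r2_with(extra):
        X = np.column_stack([predictors[p] for p in selected + [extra]])
        return LinearRegression().fit(X, y).score(X, y)
    best = max(remaining, key=r2_with)           # biggest R^2 gain at this step
    print(f"step {len(selected) + 1}: add {best:23s} R^2 = {r2_with(best):.3f}")
    selected.append(best)
    remaining.remove(best)
```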

  2. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge are a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review, two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design......

  3. Top-down (Prior Knowledge) and Bottom-up (Perceptual Modality) Influences on Spontaneous Interpersonal Synchronization.

    Science.gov (United States)

    Gipson, Christina L; Gorman, Jamie C; Hessler, Eric E

    2016-04-01

    Coordination with others is such a fundamental part of human activity that it can happen unintentionally. This unintentional coordination can manifest as synchronization and is observed in physical and human systems alike. We investigated the role of top-down influences (prior knowledge of the perceptual modality their partner is using) and bottom-up factors (perceptual modality combination) on spontaneous interpersonal synchronization. We examine this phenomenon with respect to two different theoretical perspectives that differently emphasize top-down and bottom-up factors in interpersonal synchronization: joint-action/shared cognition theories and ecological-interactive theories. In an empirical study, twelve dyads performed a finger oscillation task while attending to each other's movements through either visual, auditory, or visual and auditory perceptual modalities. Half of the participants were given prior knowledge of their partner's perceptual capabilities for coordinating across these different perceptual modality combinations. We found that the effect of top-down influence depends on the perceptual modality combination between two individuals. When people used the same perceptual modalities, top-down influence resulted in less synchronization, and when people used different perceptual modalities, top-down influence resulted in more synchronization. Furthermore, persistence in the change in behavior as a result of having perceptual information about each other ('social memory') was stronger when this top-down influence was present.

  4. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand the way in which some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  5. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  6. How may the basal ganglia contribute to auditory categorization and speech perception?

    Directory of Open Access Journals (Sweden)

    Sung-Joo eLim

    2014-08-01

    Full Text Available Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

  7. Perceptual Grouping via Untangling Gestalt Principles

    DEFF Research Database (Denmark)

    Qi, Yonggang; Guo, Jun; Li, Yi

    2013-01-01

    ... confliction, i.e., the relative importance of each rule compared with another, remains unsolved. In this paper, we investigate the problem of perceptual grouping by quantifying the confliction among three commonly used rules: similarity, continuity and proximity. More specifically, we propose to quantify the importance of Gestalt rules by solving a learning to rank problem, and formulate a multi-label graph-cuts algorithm to group image primitives while taking into account the learned Gestalt confliction. Our experiment results confirm the existence of Gestalt confliction in perceptual grouping and demonstrate an improved performance when such a confliction is accounted for via the proposed grouping algorithm. Finally, a novel cross domain image classification method is proposed by exploiting perceptual grouping as representation.

   8. Learning during processing: Word learning doesn’t wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  9. Auditory-Acoustic Basis of Consonant Perception. Attachments A thru I

    Science.gov (United States)

    1991-01-22

    conceptual model of the processes whereby the human listener converts the acoustic signal into a string of phonetic elements could be successfully implemented...perceptual aspect is implied. It is within the broad framework described above that the auditory-perceptual theory will be considered. But before beginning...perceptual and not acoustic or sensory. For example, it is planned to conceptualize the target zones for stops as being physically unrealizable by letting

  10. Auditory perceptual, acoustic, computerized and laryngological analysis of young smokers' and nonsmokers' voice

    Directory of Open Access Journals (Sweden)

    Daniele C. de Figueiredo

    2003-12-01

    Full Text Available AIM: The goal of this study was to perform laryngological, auditory perceptual and computerized acoustic analyses of the voices of young adult smokers and non-smokers without vocal complaints, to compare them, and to verify the incidence of vocal alterations. STUDY DESIGN: Clinical comparative. MATERIAL AND METHOD: The voices of 80 individuals with age range from 20 to 40 years were analyzed. These individuals were divided in four groups: 20 male smokers, 20 male non-smokers, 20 female smokers and 20 female non-smokers. This analysis involved laryngoscopy, which was performed and interpreted by an otolaryngologist, and cassette tape recordings of the sustained vowels /a/, /m/, /i/ and /u/, number counting from 1 to 20, speech of the days of the week and the months of the year, and the song "Parabéns a você". The recordings were edited for later spectrographic analysis and auditory perceptual evaluation by four raters experienced in voice. RESULTS: The analysis showed a slight decrease in the fundamental frequency of the smokers' voices in both genders, as well as a higher incidence of hoarseness and laryngeal alterations among the smokers.

  11. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.

    2008-01-01

    consistent with the perceptual data obtained with the same stimuli and with results from simulations of neural activity at the output of cochlear preprocessing. These findings demonstrate that phase effects in peripheral auditory processing are accurately reflected up to the level of the auditory cortex....

  12. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  13. Sinusoidal Analysis-Synthesis of Audio Using Perceptual Criteria

    Science.gov (United States)

    Painter, Ted; Spanias, Andreas

    2003-12-01

    This paper presents a new method for the selection of sinusoidal components for use in compact representations of narrowband audio. The method consists of ranking and selecting the most perceptually relevant sinusoids. The idea behind the method is to maximize the matching between the auditory excitation pattern associated with the original signal and the corresponding auditory excitation pattern associated with the modeled signal that is being represented by a small set of sinusoidal parameters. The proposed component-selection methodology is shown to outperform the maximum signal-to-mask ratio selection strategy in terms of subjective quality.
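
The selection principle described in this record, keeping the sinusoids whose combined auditory excitation pattern best matches that of the original signal, can be sketched with a deliberately crude excitation model. The band-energy spreading function, band count, and greedy search below are illustrative assumptions and are not the published algorithm or its psychoacoustic model.

```python
# Greedy perceptual selection of sinusoids by excitation-pattern matching (toy model).
import numpy as np

N_BANDS = 24

def excitation_pattern(components):
    """components: list of (band_index, amplitude); smear energy over nearby bands."""
    pattern = np.zeros(N_BANDS)
    for band, amp in components:
        for b in range(N_BANDS):
            pattern[b] += amp ** 2 * np.exp(-0.7 * abs(b - band))  # crude spreading
    return 10.0 * np.log10(pattern + 1e-12)

original = [(3, 1.0), (4, 0.4), (10, 0.8), (11, 0.2), (18, 0.6), (19, 0.1)]
target = excitation_pattern(original)
budget = 3                                    # number of sinusoids we may keep

selected, pool = [], list(original)
for _ in range(budget):
    best = min(pool, key=lambda c: np.sum(
        (excitation_pattern(selected + [c]) - target) ** 2))
    selected.append(best)
    pool.remove(best)
print("kept components (band, amplitude):", selected)
```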

  14. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  15. Mechanism of Perceptual Attention

    National Research Council Canada - National Science Library

    Lu, Zhong-Lin

    2000-01-01

    .... Attention may affect the perceived clarity of visual displays and improve performance. In this project, a powerful external noise method was developed to identify and characterize the effect of attention on perceptual performance in visual tasks...

  16. Mechanisms of Perceptual Attention

    National Research Council Canada - National Science Library

    Dosher, Barbara

    2000-01-01

    .... Attention may affect the perceived clarity of visual displays and improve performance. In this project, a powerful external noise method was developed to identify and characterize the effect of attention on perceptual performance in visual tasks...

  17. Learning Styles.

    Science.gov (United States)

    Missouri Univ., Columbia. Coll. of Education.

    Information is provided regarding major learning styles and other factors important to student learning. Several typically asked questions are presented regarding different learning styles (visual, auditory, tactile and kinesthetic, and multisensory learning), associated considerations, determining individuals' learning styles, and appropriate…

  18. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.

  19. A systematic review on ‘Foveal Crowding’ in visually impaired children and perceptual learning as a method to reduce Crowding

    Directory of Open Access Journals (Sweden)

    Huurneman Bianca

    2012-07-01

    compare crowding ratios and it shows that charts with 50% interoptotype spacing were most sensitive to capture crowding effects. The groups that showed the largest crowding effects were individuals with CN, VI adults with central scotomas and children with CVI. Perceptual Learning seems to be a promising technique to reduce excessive foveal crowding effects.

  20. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations, as well as in planning and executing reaching actions, even when reaching towards auditory targets.

  1. Perceptual Load Affects Eyewitness Accuracy & Susceptibility to Leading Questions

    Directory of Open Access Journals (Sweden)

    Gillian Murphy

    2016-08-01

    Full Text Available Load Theory (Lavie, 1995; 2005) states that the level of perceptual load in a task (i.e. the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.

  2. Perceptual Load Affects Eyewitness Accuracy and Susceptibility to Leading Questions.

    Science.gov (United States)

    Murphy, Gillian; Greene, Ciara M

    2016-01-01

    Load Theory (Lavie, 1995, 2005) states that the level of perceptual load in a task (i.e., the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.

  3. Consensus paper: the role of the cerebellum in perceptual processes.

    Science.gov (United States)

    Baumann, Oliver; Borra, Ronald J; Bower, James M; Cullen, Kathleen E; Habas, Christophe; Ivry, Richard B; Leggio, Maria; Mattingley, Jason B; Molinari, Marco; Moulton, Eric A; Paulin, Michael G; Pavlova, Marina A; Schmahmann, Jeremy D; Sokolov, Arseny A

    2015-04-01

    Various lines of evidence accumulated over the past 30 years indicate that the cerebellum, long recognized as essential for motor control, also has considerable influence on perceptual processes. In this paper, we bring together experts from psychology and neuroscience, with the aim of providing a succinct but comprehensive overview of key findings related to the involvement of the cerebellum in sensory perception. The contributions cover such topics as anatomical and functional connectivity, evolutionary and comparative perspectives, visual and auditory processing, biological motion perception, nociception, self-motion, timing, predictive processing, and perceptual sequencing. While no single explanation has yet emerged concerning the role of the cerebellum in perceptual processes, this consensus paper summarizes the impressive empirical evidence on this problem and highlights diversities as well as commonalities between existing hypotheses. In addition to work with healthy individuals and patients with cerebellar disorders, it is also apparent that several neurological conditions in which perceptual disturbances occur, including autism and schizophrenia, are associated with cerebellar pathology. A better understanding of the involvement of the cerebellum in perceptual processes will thus likely be important for identifying and treating perceptual deficits that may at present go unnoticed and untreated. This paper provides a useful framework for further debate and empirical investigations into the influence of the cerebellum on sensory perception.

  4. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  5. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.

  6. Memory and learning with rapid audiovisual sequences

    Science.gov (United States)

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  7. Memory and learning with rapid audiovisual sequences.

    Science.gov (United States)

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  8. Medial Auditory Thalamus Is Necessary for Acquisition and Retention of Eyeblink Conditioning to Cochlear Nucleus Stimulation

    Science.gov (United States)

    Halverson, Hunter E.; Poremba, Amy; Freeman, John H.

    2015-01-01

    Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…

  9. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  10. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    Science.gov (United States)

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  11. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
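    The abstract above refers to "systems-theoretic methods of stimulus reconstruction" without giving details. A common approach in this literature, shown here only as an illustrative sketch rather than the authors' actual pipeline, is a ridge-regularized linear backward model that maps time-lagged neural channels onto a speech envelope; all names and array shapes below are placeholders.

        # Illustrative sketch (Python/NumPy) of envelope reconstruction from
        # multichannel neural data with a ridge-regularized backward model.
        # Placeholder shapes; not the authors' actual analysis code.
        import numpy as np

        def reconstruct_envelope(neural, envelope, lags, alpha=1.0):
            """neural: (T, channels); envelope: (T,); lags: sample delays >= 0."""
            T, C = neural.shape
            X = np.zeros((T, C * len(lags)))
            for i, lag in enumerate(lags):
                shifted = np.roll(neural, lag, axis=0)
                shifted[:lag] = 0.0                      # drop wrapped-around samples
                X[:, i * C:(i + 1) * C] = shifted
            # Ridge solution: w = (X'X + alpha*I)^-1 X'y
            w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)
            reconstruction = X @ w
            r = np.corrcoef(reconstruction, envelope)[0, 1]  # reconstruction fidelity
            return w, r

    Comparing the fidelity r obtained for the attended envelope, the ignored envelopes, and the summed background is the kind of representation contrast the abstract describes.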

  12. Skilled deaf readers have an enhanced perceptual span in reading.

    Science.gov (United States)

    Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith

    2012-07-01

    Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.

  13. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  14. Perceptual elements in brain mechanisms of acoustic communication in humans and nonhuman primates.

    Science.gov (United States)

    Reser, David H; Rosa, Marcello

    2014-12-01

    Ackermann et al. outline a model for elaboration of subcortical motor outputs as a driving force for the development of the apparently unique behaviour of language in humans. They emphasize circuits in the striatum and midbrain, and acknowledge, but do not explore, the importance of the auditory perceptual pathway for evolution of verbal communication. We suggest that understanding the evolution of language will also require understanding of vocalization perception, especially in the auditory cortex.

  15. Predicting the Perceptual Consequences of Hidden Hearing Loss

    Directory of Open Access Journals (Sweden)

    Andrew J. Oxenham

    2016-12-01

    Full Text Available Recent physiological studies in several rodent species have revealed that permanent damage can occur to the auditory system after exposure to a noise that produces only a temporary shift in absolute thresholds. The damage has been found to occur in the synapses between the cochlea’s inner hair cells and the auditory nerve, effectively severing part of the connection between the ear and the brain. This synaptopathy has been termed hidden hearing loss because its effects are not thought to be revealed in standard clinical, behavioral, or physiological measures of absolute threshold. It is currently unknown whether humans suffer from similar deficits after noise exposure. Even if synaptopathy occurs in humans, it remains unclear what the perceptual consequences might be or how they should best be measured. Here, we apply a simple theoretical model, taken from signal detection theory, to provide some predictions for what perceptual effects could be expected for a given loss of synapses. Predictions are made for a number of basic perceptual tasks, including tone detection in quiet and in noise, frequency discrimination, level discrimination, and binaural lateralization. The model’s predictions are in line with the empirical observations that a 50% loss of synapses leads to changes in threshold that are too small to be reliably measured. Overall, the model provides a simple initial quantitative framework for understanding and predicting the perceptual effects of synaptopathy in humans.
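    The signal detection theory logic mentioned above can be illustrated with a back-of-the-envelope calculation (this is not the paper's specific model): if threshold performance pools information across N roughly independent synapses, sensitivity d' scales with the square root of N, so a 50% synapse loss multiplies d' by about 0.71, a change that maps onto only a small shift in measured threshold.

        # Back-of-the-envelope SDT illustration, assuming d' pools over N
        # independent channels so that d'_total = sqrt(N) * d'_per_channel.
        # Not the specific model used in the paper.
        import math

        def dprime_after_loss(dprime_intact, fraction_surviving):
            return dprime_intact * math.sqrt(fraction_surviving)

        # Example: an intact d' of 1.0 falls to ~0.71 after a 50% synapse loss.
        print(round(dprime_after_loss(1.0, 0.5), 2))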

  16. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    Science.gov (United States)

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
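    Frequency tagging works because each stimulus stream, modulated at its own rate (2.5, 8.5, and 40.0 Hz here), produces a distinct spectral line in the EEG whose amplitude indexes the neural representation of that stream. The following is a minimal sketch of that frequency-domain read-out for a single channel with an assumed sampling rate; it is not the authors' analysis code.

        # Minimal sketch: steady-state amplitude at tagged frequencies from one
        # EEG channel. 'eeg' and 'fs' are placeholders, not the study's data.
        import numpy as np

        def tag_amplitudes(eeg, fs, tag_freqs=(2.5, 8.5, 40.0)):
            spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            # Take the FFT bin closest to each tag frequency.
            return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

        # Comparing the 8.5 Hz (visual distractor) amplitude across low- and
        # high-load conditions indexes load-dependent distractor filtering.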

  17. From Perceptual Categories to Concepts: What Develops?

    Science.gov (United States)

    Sloutsky, Vladimir M.

    2010-01-01

    People are remarkably smart: they use language, possess complex motor skills, make non-trivial inferences, develop and use scientific theories, make laws, and adapt to complex dynamic environments. Much of this knowledge requires concepts and this paper focuses on how people acquire concepts. It is argued that conceptual development progresses from simple perceptual grouping to highly abstract scientific concepts. This proposal of conceptual development has four parts. First, it is argued that categories in the world have different structure. Second, there might be different learning systems (sub-served by different brain mechanisms) that evolved to learn categories of differing structures. Third, these systems exhibit differential maturational course, which affects how categories of different structures are learned in the course of development. And finally, an interaction of these components may result in the developmental transition from perceptual groupings to more abstract concepts. This paper reviews a large body of empirical evidence supporting this proposal. PMID:21116483

  18. Distraction by deviance: comparing the effects of auditory and visual deviant stimuli on auditory and visual target processing.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-01-01

    We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.

  19. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms.

    Science.gov (United States)

    Bizley, Jennifer K; Maddox, Ross K; Lee, Adrian K C

    2016-02-01

    Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Natural texture retrieval based on perceptual similarity measurement

    Science.gov (United States)

    Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun

    2018-04-01

    A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key to the proposed scheme is that perceptual similarity is predicted by learning a non-linear mapping from image feature space to perceptual texture space using a Random Forest. We test the method on a natural texture dataset and apply it to a new wallpaper dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves retrieval performance over traditional image features.
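    The core idea, learning a non-linear regression from an image-feature space to a perceptual-similarity space and then ranking database textures by distance in that space, can be sketched as follows. The feature and perceptual-coordinate arrays are placeholders, and scikit-learn's RandomForestRegressor stands in for whatever Random Forest implementation the authors used.

        # Hedged sketch: map image features to perceptual-space coordinates with
        # a Random Forest, then retrieve by distance in the predicted space.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 64))   # image features (placeholder)
        Y_train = rng.normal(size=(200, 3))    # perceptual coordinates from ratings

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_train, Y_train)

        def retrieve(query_features, database_features, k=5):
            q = model.predict(query_features.reshape(1, -1))
            db = model.predict(database_features)
            order = np.argsort(np.linalg.norm(db - q, axis=1))  # nearest perceptually
            return order[:k]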

  1. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  2. Effect of perceptual load on conceptual processing: an extension of Vermeulen's theory.

    Science.gov (United States)

    Xie, Jiushu; Wang, Ruiming; Sun, Xun; Chang, Song

    2013-10-01

    The effect of color and shape load on conceptual processing was studied. Perceptual load effects have been found in visual and auditory conceptual processing, supporting the theory of embodied cognition. However, whether different types of visual concepts, such as color and shape, share the same perceptual load effects is unknown. In the current experiment, 32 participants were administered simultaneous perceptual and conceptual tasks to assess the relation between perceptual load and conceptual processing. Keeping a color load in mind obstructed color conceptual processing. Hence, perceptual load and conceptual processing shared the same resources, suggesting embodied cognition. Color conceptual processing was not affected by shape pictures, indicating that different types of properties within vision are processed separately.

  3. Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.

    Science.gov (United States)

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

  4. Adaptation and perceptual norms

    Science.gov (United States)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  5. Perceptual Processing Affects Conceptual Processing

    Science.gov (United States)

    van Dantzig, Saskia; Pecher, Diane; Zeelenberg, Rene; Barsalou, Lawrence W.

    2008-01-01

    According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task…

  6. Short-term delayed recall of auditory verbal learning test is equivalent to long-term delayed recall for identifying amnestic mild cognitive impairment.

    Directory of Open Access Journals (Sweden)

    Qianhua Zhao

    Full Text Available Delayed recall of words in a verbal learning test is a sensitive measure for the diagnosis of amnestic mild cognitive impairment (aMCI) and early Alzheimer's disease (AD). The relative validity of different retention intervals of delayed recall has not been well characterized. Using the Auditory Verbal Learning Test-Huashan version, we compared the differentiating value of short-term delayed recall (AVL-SR; that is, a 3- to 5-minute delay time) and long-term delayed recall (AVL-LR; that is, a 20-minute delay time) in distinguishing patients with aMCI (n = 897) and mild AD (n = 530) from the healthy elderly (n = 1215). In patients with aMCI, the correlation between AVL-SR and AVL-LR was very high (r = 0.94), and the difference between the two indicators was less than 0.5 points. There was no difference between AVL-SR and AVL-LR in the frequency of zero scores. In the receiver operating characteristic curves analysis, although the area under the curve (AUC) of AVL-SR and AVL-LR for diagnosing aMCI was significantly different, the cut-off scores of the two indicators were identical. In the subgroup of ages 80 to 89, the AUC of the two indicators showed no significant difference. Therefore, we concluded that AVL-SR could substitute for AVL-LR in identifying aMCI, especially for the oldest patients.
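    The head-to-head comparison rests on ROC analysis: each recall score is treated as a marker for aMCI versus healthy status, and the areas under the two curves are compared. A minimal sketch of that computation is shown below with placeholder scores (lower recall indicates impairment, so scores are negated); the formal test for an AUC difference is not included.

        # Minimal ROC/AUC sketch with placeholder data; not the study's analysis.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        labels = np.array([0, 0, 0, 1, 1, 1])   # 0 = healthy elderly, 1 = aMCI
        avl_sr = np.array([7, 8, 9, 3, 4, 5])   # short-delay recall scores
        avl_lr = np.array([7, 9, 9, 2, 4, 5])   # long-delay recall scores

        auc_sr = roc_auc_score(labels, -avl_sr)  # negate: lower recall -> aMCI
        auc_lr = roc_auc_score(labels, -avl_lr)
        print(auc_sr, auc_lr)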

  7. Selective auditory grouping by zebra finches: testing the iambic-trochaic law.

    Science.gov (United States)

    Spierings, Michelle; Hubert, Jeroen; Ten Cate, Carel

    2017-07-01

    Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law, where humans group successive tones alternating in pitch and intensity as trochees (high-low and loud-soft) and alternating in duration as iambs (short-long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in one non-human animal species, rats. The perceptual grouping of sounds alternating in duration seems to be affected by native language in humans and has so far not been found among animals. In the current study, we explore to which extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were next tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, similar to humans. However, most of the zebra finches in the duration and intensity condition did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.

  8. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study, we further examined the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  9. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  10. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    Science.gov (United States)

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  11. Perceptual effects in auralization of virtual rooms

    Science.gov (United States)

    Kleiner, Mendel; Larsson, Pontus; Vastfjall, Daniel; Torres, Rendell R.

    2002-05-01

    By using various types of binaural simulation (or "auralization") of physical environments, it is now possible to study basic perceptual issues relevant to room acoustics, as well as to simulate the acoustic conditions found in concert halls and other auditoria. Binaural simulation of physical spaces in general is also important to virtual reality systems. This presentation will begin with an overview of the issues encountered in the auralization of rooms and other environments. We will then discuss the influence of various approximations in room modeling, in particular edge and surface scattering, on the perceived room response. Finally, we will discuss cross-modal effects, such as the influence of visual cues on the perception of auditory cues, and the influence of cross-modal effects on the judgement of "perceived presence" and the rating of room acoustic quality.

  12. Barriers to repeated assessment of verbal learning and memory: a comparison of international shopping list task and rey auditory verbal learning test on build-up of proactive interference.

    Science.gov (United States)

    Rahimi-Golkhandan, S; Maruff, P; Darby, D; Wilson, P

    2012-11-01

    Proactive interference (PI) that remains unidentified can confound the assessment of verbal learning, particularly when its effects vary from one population to another. The International Shopping List Task (ISLT) is a new measure that provides multiple forms that can be equated for linguistic factors across cultural groups. The aim of this study was to examine the build-up of PI on two measures of verbal learning: a traditional list-learning test (the Rey Auditory Verbal Learning Test, RAVLT) and the ISLT. The sample consisted of 61 healthy adults aged 18-40. Each test had three parallel forms, each recalled three times. Results showed that repeated administration of the ISLT did not result in significant PI effects, unlike the RAVLT. Although these PI effects, observed during short retest intervals, may not be as robust under normal clinical administrations of the tests, the results suggest that the choice of verbal learning test should be guided by knowledge of PI effects and the susceptibility of particular patient groups to this effect.

  13. Implicit Recognition Based on Lateralized Perceptual Fluency

    OpenAIRE

    Vargas, Iliana M.; Voss, Joel L.; Paller, Ken A.

    2012-01-01

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this “implicit recognition” results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention enc...

  14. Model cortical responses for the detection of perceptual onsets and beat tracking in singing

    NARCIS (Netherlands)

    Coath, M.; Denham, S.L.; Smith, L.M.; Honing, H.; Hazan, A.; Holonowicz, P.; Purwins, H.

    2009-01-01

    We describe a biophysically motivated model of auditory salience based on a model of cortical responses and present results showing that the derived measure of salience can successfully identify the position of perceptual onsets in a musical stimulus. The salience measure is also shown

  15. Temporal integration of consecutive tones into synthetic vowels demonstrates perceptual assembly in audition

    NARCIS (Netherlands)

    Saija, Jefta D.; Andringa, Tjeerd C.; Başkent, Deniz; Akyürek, Elkan G.

    Temporal integration is the perceptual process combining sensory stimulation over time into longer percepts that can span over 10 times the duration of a minimally detectable stimulus. Particularly in the auditory domain, such "long-term" temporal integration has been characterized as a relatively

  16. Brazilian children's performance on Rey's auditory verbal learning paradigm

    Directory of Open Access Journals (Sweden)

    Rosinda Martins Oliveira

    2008-03-01

    Full Text Available The Rey Auditory Verbal Learning paradigm is used worldwide in clinical and research settings. There is consensus about its psychometric robustness and about the fact that its various scores provide relevant information on different aspects of memory and learning. However, there are only a few studies in Brazil employing this paradigm and none of them with children. This paper describes the performance of 119 Brazilian children on a version of Rey's paradigm. The correlations between scores showed the internal consistency of this version. In addition, the pattern of results was very similar to that observed in foreign studies with adults and children. There was a correlation between age in months and recall scores, showing that age affects the rate of learning. These results are discussed within the framework of information-processing theory.

  17. Working memory does not dissociate between different perceptual categorization tasks.

    Science.gov (United States)

    Lewandowsky, Stephan; Yang, Lee-Xieng; Newell, Ben R; Kalish, Michael L

    2012-07-01

    Working memory is crucial for many higher level cognitive functions, ranging from mental arithmetic to reasoning and problem solving. Likewise, the ability to learn and categorize novel concepts forms an indispensable part of human cognition. However, very little is known about the relationship between working memory and categorization. This article reports 2 studies that related people's working memory capacity (WMC) to their learning performance on multiple rule-based and information-integration perceptual categorization tasks. In both studies, structural equation modeling revealed a strong relationship between WMC and category learning irrespective of the requirement to integrate information across multiple perceptual dimensions. WMC was also uniformly related to people's ability to focus on the most task-appropriate strategy, regardless of whether or not that strategy involved information integration. Contrary to the predictions of the multiple systems view of categorization, working memory thus appears to underpin performance in both major classes of perceptual category-learning tasks. 2012 APA, all rights reserved

  18. Perceptual Bias and Loudness Change: An Investigation of Memory, Masking, and Psychophysiology

    Science.gov (United States)

    Olsen, Kirk N.

    Loudness is a fundamental aspect of human auditory perception that is closely associated with a sound's physical acoustic intensity. The dynamic quality of intensity change is an inherent acoustic feature in real-world listening domains such as speech and music. However, perception of loudness change in response to continuous intensity increases (up-ramps) and decreases (down-ramps) has received relatively little empirical investigation. Overestimation of loudness change in response to up-ramps is said to be linked to an adaptive survival response associated with looming (or approaching) motion in the environment. The hypothesised 'perceptual bias' toward looming auditory motion suggests why perceptual overestimation of up-ramps may occur; however, it does not offer a causal explanation. It is concluded that post-stimulus judgements of perceived loudness change are significantly affected by a cognitive recency response bias that, until now, has been an artefact of experimental procedure. Perceptual end-level differences caused by duration-specific sensory adaptation at peripheral and/or central stages of auditory processing may explain differences in post-stimulus judgements of loudness change. Experiments that investigate human responses to acoustic intensity dynamics, encompassing topics from basic auditory psychophysics (e.g., sensory adaptation) to cognitive-emotional appraisal of increasingly complex stimulus events such as music and auditory warnings, are proposed for future research.

  19. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  20. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

    Directory of Open Access Journals (Sweden)

    Jeremy eMarozeau

    2013-11-01

    Full Text Available Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, a greater difference on the perceptual dimension correlated with the temporal envelope was needed for stream segregation by CI users. No differences in streaming efficiency were found
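    The analysis chain described above runs from pairwise dissimilarity ratings, through multidimensional scaling into perceptual coordinates, to a regression of those coordinates on the physical note properties. A hedged sketch of the first two steps with scikit-learn follows; the dissimilarity matrix and physical parameters are placeholders, not the study's data.

        # Hedged sketch of the dissimilarity -> MDS -> regression chain.
        import numpy as np
        from sklearn.manifold import MDS
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n_patterns = 8
        d = rng.uniform(0.0, 1.0, size=(n_patterns, n_patterns))
        dissim = (d + d.T) / 2.0
        np.fill_diagonal(dissim, 0.0)            # symmetric ratings, zero diagonal

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        percept = mds.fit_transform(dissim)      # perceptual coordinates

        physical = rng.normal(size=(n_patterns, 4))   # four physical note properties
        mapping = LinearRegression().fit(physical, percept)
        # Distances between predicted coordinates estimate how much perceptual
        # separation a given physical change produces, i.e. a cue's efficiency.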