The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command-and-control as well as entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Zahorik, P.; Brungart, D.S.; Bronkhorst, A.W.
Although auditory distance perception is a critical component of spatial hearing, it has received substantially less scientific attention than the directional aspects of auditory localization. Here we summarize current knowledge on auditory distance perception, with special emphasis on recent
Buchholz, Jörg; Favrot, Sylvain Emmanuel
This system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception) and measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system.
Buchholz, Jörg; Favrot, Sylvain Emmanuel
Recently a loudspeaker-based room auralisation (LoRA) system has been developed at CAHR, which combines modern room acoustic modeling techniques with high-order Ambisonics auralisation. The environment provides: (i) a flexible research tool to study the signal processing of the normal, impaired, ...
and Piercy, M. (1973). Defects of non-verbal auditory perception in children with developmental aphasia. Nature (London), 241, 468-469. Watson, C.S... Hearing and Communication Laboratory, Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405. Final Technical Report, Air Force Office of Scientific Research, AFOSR-84-0337, September 1, 1984 to August 31, 1987. Hearing and Communication Laboratory
the surrounding space and the location and position of our own body within it. Thus, it is the multisensory awareness of being immersed in a specific...improves situational awareness, speech perception, and sound source identification in the presence of other sound sources (e.g., Bronkhorst, 2000; Kidd et...ventriloquism effect (VE) (Howard and Templeton, 1966), in which the listener perceives the ventriloquist's speech as coming from the ventriloquist's dummy. The
the presence of primacy and recency effects, resulting in a large number of errors in which listeners erroneously selected the loudspeaker that had...the sound source that produced this sound. As in the previous studies mentioned, pronounced primacy and recency effects were found. Further research...
Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...
Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.
Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals
Cottrell, David; Campbell, Megan E J
When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
Buchholz, Jörg; Favrot, Sylvain Emmanuel
of the LoRA processing is first presented, followed by a battery of objective and subjective tests to demonstrate the applicability of the different components of the system. In the objective evaluation, monaural and binaural room acoustic measures (e.g., reverberation time, clarity, interaural cross-correlation coefficient) were considered. The subjective evaluation included speech intelligibility and distance perception measures.
Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.
Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…
Liebenthal, Einat; Möttönen, Riikka
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
Zmigrod, Sharon; Hommel, Bernhard
The features of perceived objects are processed in distinct neural pathways, which call for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken altogether, the findings are consistent with the idea of episodic event files integrating perception and action plans.
ARL-TR-7203, February 2015. US Army Research Laboratory. HEaDS-UP Phase IV Assessment: Headgear Effects on Auditory Perception, by Angelique A Scharine, Human Research and Engineering Directorate, ARL.
Parks, Anthony J.
How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of the dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli under fixed and free-head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
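A dichotomous-criteria model of the kind described above can be illustrated with a minimal binary logistic regression fit. This is a hedged sketch on synthetic data: the factor names, coding, and coefficient values below are hypothetical stand-ins, not the study's actual design.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain-numpy binary logistic regression via gradient ascent on the
    log-likelihood; a minimal stand-in for a factorial binary logistic
    regression model."""
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))          # predicted detection probability
        w += lr * X1.T @ (y - p) / len(y)          # average log-likelihood gradient
    return w

# Hypothetical 2x2 factorial design: free head rotation (0/1) x brief stimulus (0/1).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(400, 2)).astype(float)
logit = -1.0 + 1.5 * X[:, 0] + 2.0 * X[:, 1]       # simulated true factor effects
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
w = fit_logistic(X, y)                             # w[1], w[2] estimate the factor effects
```

The fitted coefficients recover the simulated effects, which is the sense in which such a model quantifies how each experimental factor shifts the odds of an "elevated" detection response.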
Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O
In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We next performed a second experiment in which the same subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
Lüttke, C.S.; Ekman, M.; Gerven, M.A.J. van; Lange, F.P. de
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept of /ada/. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
Recently, there has been renewed interest in the perceptual problems of dyslexics. A polemic research issue in this area has been the nature of the perceptual deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit.
Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara
Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect.
Auditory perception, or hearing, can be defined as the interpretation of sensory evidence, produced by the ears in response to sound, in terms of the events that caused the sound. We do not hear a window, but we may hear a window closing. We do not hear a dog, but we may hear a dog barking. And we do not hear a person, but we may hear a person talking. Hearing impairment can result in anxiety or stress in everyday life. Pure-tone hearing loss (or threshold shift) is a measure of hearing impairment. Aging and excessive noise are the main causes of hearing impairment. Speech perception is a distinct concept; the difference is best illustrated by the hearing-impaired individual declaring "I can hear that someone is talking to me, but I don't understand what she says". Being unable to understand significant others easily and clearly, especially when understanding speech in a noisy environment, can give rise to considerable psychosocial and professional consequences (disability). Presbycusis is the decline in hearing sensitivity caused by the aging process at different levels of the auditory system. However, it is difficult to isolate age effects from other contributors to age-related hearing loss such as noise damage, genetic susceptibility, inflammatory otologic disorders, and ototoxic agents. Therefore, presbycusis and age-related hearing loss are often used synonymously. In this report, pathophysiology is mostly described with regard to presbycusis, and the main peripheral types of presbycusis (sensory or Corti organ-related, strial, and neural) are summarized. An original experimental model of strial presbycusis, based on chronic application of furosemide at the round window, is further described. Central presbycusis is mainly determined by degeneration secondary to peripheral impairment (the concept of deafferentation). Central auditory changes typically affect speed of processing and result in poorer speech understanding in noise or with rapid or degraded speech. Last...
Riskind, John H; Kleiman, Evan M; Seifritz, Erich; Neuhoff, John
Previous studies show that individuals with an anticipatory auditory looming bias overestimate the closeness of a sound source that approaches them. Our present study bridges cognitive clinical and perception research, and provides evidence that anxiety symptoms and a particular putative cognitive style that creates vulnerability for anxiety (looming cognitive style, or LCS) are related to how people perceive this ecologically fundamental auditory warning signal. The effects of anxiety symptoms on the anticipatory auditory looming effect synergistically depend on the dimension of perceived personal danger assessed by the LCS (physical or social threat). Depression symptoms, in contrast to anxiety symptoms, predict a diminution of the auditory looming bias. Findings broaden our understanding of the links between cognitive-affective states and auditory perception processes and lend further support to past studies providing evidence that the looming cognitive style is related to bias in threat processing.
Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto
Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure of recognition of a global visual Gestalt, like a visual scene or complex objects, consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit specific to the visual domain or whether it should be seen as defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in the recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain evidently uses independent mechanisms for visual and auditory Gestalt perception.
Patel, Aniruddh D; Iversen, John R
a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task in which subjects observed two identical visual objects moving toward each other, overlapping, and then continuing their original motion. Subjects may perceive the objects as either streaming past each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, auditory stimulation, haptic stimulation, or combined haptic and auditory stimulation was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside visual motion. In the future, a neuroscience approach (e.g., TMS, fMRI) may be required to elucidate the underlying brain mechanism.
Maria Neimark Geffen
Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and to identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
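A stimulus along these lines can be sketched as brief tone events whose duration is fixed in cycles rather than in seconds. This is an illustrative construction only: the exact semantics of Q (here, cycles per event) and r (here, average events per second) are assumptions based on the abstract, not the authors' actual definition.

```python
import numpy as np

def random_chirp_stimulus(q=2.0, r=20.0, dur=1.0, fs=16000,
                          f_lo=200.0, f_hi=4000.0, seed=0):
    """Sketch of a 'random chirp' stimulus: tone events whose duration spans
    q cycles at their carrier frequency (relative temporal structure),
    occurring at an average rate of r events per second (absolute temporal
    structure). Fixing duration in cycles makes the event shape
    scale-invariant across carrier frequencies."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    y = np.zeros(n)
    t = rng.exponential(1.0 / r)                  # Poisson event times
    while t < dur:
        f = np.exp(rng.uniform(np.log(f_lo), np.log(f_hi)))  # log-uniform carrier
        m = int(q / f * fs)                       # q cycles at f, in samples
        i0 = int(t * fs)
        if m > 1 and i0 + m <= n:
            tt = np.arange(m) / fs
            y[i0:i0 + m] += np.hanning(m) * np.sin(2 * np.pi * f * tt)  # smooth onset/offset
        t += rng.exponential(1.0 / r)
    return y

stim = random_chirp_stimulus()
```

Rescaling time in such a stimulus (playing it faster or slower) leaves each event's cycle count unchanged, which is the property the abstract identifies as critical for a natural, water-like percept.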
Martin, Stephanie; Mikutta, Christian; Leonard, Matthew K; Hungate, Dylan; Koelsch, Stefan; Shamma, Shihab; Chang, Edward F; Millán, José Del R; Knight, Robert T; Pasley, Brian N
Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and the participant was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
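Spectrotemporal encoding models of this kind are commonly estimated as a linear map from a time-lagged spectrogram to a neural trace. The sketch below is a simplified stand-in using ridge regression on synthetic data, not the authors' pipeline; the lag count and regularization strength are arbitrary illustrative choices.

```python
import numpy as np

def fit_strf(spec, hg, n_lags=10, alpha=1.0):
    """Minimal spectrotemporal receptive field (STRF) estimate: ridge
    regression from a time-lagged spectrogram (time x frequency) to a
    single electrode's high-gamma trace."""
    T, F = spec.shape
    X = np.zeros((T, n_lags * F))
    for lag in range(n_lags):                     # stack spectrogram at lags 0..n_lags-1
        X[lag:, lag * F:(lag + 1) * F] = spec[:T - lag]
    # closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ hg)
    return w.reshape(n_lags, F), X @ w            # STRF weights and predicted trace

# Synthetic check: high gamma driven by the instantaneous (lag-0) spectrogram.
rng = np.random.default_rng(0)
spec = rng.standard_normal((500, 8))
hg = spec @ rng.standard_normal(8) + 0.1 * rng.standard_normal(500)
strf, pred = fit_strf(spec, hg)
```

Comparing STRFs fitted separately on perception and imagery data, as the abstract describes, then amounts to comparing the two weight matrices' frequency tuning across electrodes.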
McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten
Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance for statistically blurred sound textures presented at a fixed blur was higher than for those presented as a gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images.
Jepsen, Morten Løve; Dau, Torsten
Models of auditory signal processing and perception allow us to generate hypotheses that can be quantitatively tested, which in turn helps us to explain and understand the functioning of the auditory system. Here, the perceptual consequences of hearing impairment in individual listeners were investigated within the framework of the computational auditory signal processing and perception (CASP) model of Jepsen et al. [J. Acoust. Soc. Am., in press]. Several parameters of the model were modified according to data from psychoacoustic measurements. Parameters associated with the cochlear stage were … forward masking. The model may be useful for the evaluation of hearing-aid algorithms, where a reliable simulation of hearing impairment may reduce the need for time-consuming listening tests during development.
the target sound in time determine whether or not across-frequency modulation effects are observed. The results suggest that the binding of sound elements into coherent auditory objects precedes aspects of modulation analysis and imply a cortical locus involving integration times of several hundred...
Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun
Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…
Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jörg
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation procedure, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Methods: Concurrent auditory segregation using the competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared...
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of
Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees relative to the real-world horizontal axis. Our question is whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds that were binaurally recorded either in a supine or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teacher signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multimodal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Sato, Marc; Troille, Emilie; Ménard, Lucie; Cathiard, Marie-Agnès; Gracco, Vincent
The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds, when embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.
Patel, Aniruddh D.; Iversen, John R.
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.
Blamey, P J; Cowan, R S; Alcantara, J I; Whitford, L A; Clark, G M
Four normally-hearing subjects were trained and tested with all combinations of a highly-degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor. The speech perception of the subjects was assessed with closed sets of vowels, consonants, and multisyllabic words; with open sets of words and sentences, and with speech tracking. When the visual input was added to any combination of other inputs, a significant improvement occurred for every test. Similarly, the auditory input produced a significant improvement for all tests except closed-set vowel recognition. The tactile input produced scores that were significantly greater than chance in isolation, but combined less effectively with the other modalities. The addition of the tactile input did produce significant improvements for vowel recognition in the auditory-tactile condition, for consonant recognition in the auditory-tactile and visual-tactile conditions, and in open-set word recognition in the visual-tactile condition. Information transmission analysis of the features of vowels and consonants indicated that the information from auditory and visual inputs were integrated much more effectively than information from the tactile input. The less effective combination might be due to lack of training with the tactile input, or to more fundamental limitations in the processing of multimodal stimuli.
Féron, François-Xavier; Frissen, Ilja; Boissinot, Julien; Guastavino, Catherine
Three experiments are reported, which investigated the auditory velocity thresholds beyond which listeners are no longer able to perceptually resolve a smooth circular trajectory. These thresholds were measured for band-limited noises, white noise, and harmonic sounds (HS), and in different acoustical environments. Experiments 1 and 2 were conducted in an acoustically dry laboratory. Observed thresholds varied as a function of stimulus type and spectral content. Thresholds for band-limited noises were unaffected by center frequency and equal to that of white noise. For HS, however, thresholds decreased as the fundamental frequency of the stimulus increased. The third experiment was a replication of the second in a reverberant concert hall, which produced qualitatively similar results except that thresholds were significantly higher than in the acoustically dry laboratory.
…auditory distance perception by reducing the level differences between sounds. The focus of the present study was to investigate the effect of amplitude… create stimuli. Two levels of amplitude compression were applied to the recordings through Adobe Audition sound editing software to simulate military…
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.
Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.
Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei
The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows symptoms of decline in age-related cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of periphery auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of antisenescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions. © 2013 Japan Geriatrics Society.
Erdener, Doğu; Burnham, Denis
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David
Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.
Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten
A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity…
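The staged processing chain this abstract lists can be sketched in strongly simplified form. The Python snippet below is a heavily reduced stand-in, not the published model: the filter orders, cutoff choices, and single frequency channel are illustrative assumptions, and the adaptation, internal-noise, and detector stages are omitted.

```python
# Minimal sketch of a Dau-style peripheral processing chain (illustrative
# parameters only; NOT the published model's implementation).
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(x, lo, hi, fs, order=2):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x)

def lowpass(x, cutoff, fs, order=1):
    b, a = butter(order, cutoff / (fs / 2))
    return lfilter(b, a, x)

def auditory_chain(x, fs, cf=1000.0, bw=200.0):
    """One frequency channel: band filter -> rectify -> smooth -> square -> envelope."""
    y = bandpass(x, cf - bw / 2, cf + bw / 2, fs)  # basilar-membrane band
    y = np.maximum(y, 0.0)                         # hair-cell half-wave rectification
    y = lowpass(y, 1000.0, fs)                     # inner-hair-cell smoothing
    y = y ** 2                                     # squaring expansion
    env = lowpass(y, 150.0, fs)                    # 150-Hz lowpass modulation filter
    return env

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# 1-kHz carrier with 8-Hz amplitude modulation, a typical test signal
tone = np.sin(2 * np.pi * 1000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))
env = auditory_chain(tone, fs)
```

In the full model this chain would be replicated across many peripheral channels and followed by the modulation filterbank, internal noise, and optimal detector described above.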
van Beinum, F.J.; Schwippert, C.E.; Been, P.H.; van Leeuwen, T.H.; Kuijpers, C.T.L.
A national longitudinal research program on developmental dyslexia was started in The Netherlands, including auditory perception and processing as an important research component. New test materials had to be developed, to be used for measuring the auditory sensitivity of the subjects to speech-like
Mahshie, James; Core, Cynthia; Larsen, Michael D
The aim of the present research is to examine the relations between auditory perception and production of specific speech contrasts by children with cochlear implants (CIs) who received their implants before 3 years of age and to examine the hierarchy of abilities for perception and production for consonant and vowel features. The following features were examined: vowel height, vowel place, consonant place of articulation (front and back), continuance, and consonant voicing. Fifteen children (mean age = 4;0 and range 3;2 to 5;11) with a minimum of 18 months of experience with their implants and no additional known disabilities served as participants. Perception of feature contrasts was assessed using a modification of the Online Imitative Speech Pattern Contrast test, which uses imitation to assess speech feature perception. Production was examined by having the children name a series of pictures containing consonant and vowel segments that reflected contrasts of each feature. For five of the six feature contrasts, production accuracy was higher than perception accuracy. There was also a significant and positive correlation between accuracy of production and auditory perception for each consonant feature. This correlation was not found for vowels, owing largely to the overall high perception and production scores attained on the vowel features. The children perceived vowel feature contrasts more accurately than consonant feature contrasts. On average, the children had lower perception scores for Back Place and Continuance feature contrasts than for Anterior Place and Voicing contrasts. For all features, the median production scores were 100%; the majority of the children were able to accurately and consistently produce the feature contrasts. The mean production scores for features reflect greater score variability for consonant feature production than for vowel features. Back Place of articulation for back consonants and Continuance contrasts appeared to be the
Pocztaruk, R.D.; Abbink, J.H.; de Wijk, R.A.; Frasca, L.C.D.; Gaviao, M.B.D.; van de Bilt, A.
The influence of auditory and/or visual information on the perception of crispy food and on the physiology of chewing was investigated. Participants chewed biscuits of three different levels of crispness under four experimental conditions: no masking, auditory masking, visual masking, and auditory
…steady-state acoustic threats. This has led to research into the effects of various types of headgear on directional sound detection, auditory… research infrastructure available at ARL-HRED includes a unique world-class multispace auditory spatial perception laboratory, the Environment for…
Santurette, Sébastien; Dau, Torsten
The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.
Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were… for a variety of basic auditory tasks, indicating that it may be a crucial measure to consider for hearing-loss characterization. In contrast to hearing-impaired listeners, adults with dyslexia showed no deficits in binaural pitch perception, suggesting intact low-level auditory mechanisms. The second part… into the fundamental auditory mechanisms underlying pitch perception, and may have implications for future pitch-perception models, as well as strategies for auditory-profile characterization and restoration of accurate pitch perception in impaired hearing.
Law, Jeremy M; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan
This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or independently contributing to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) and an amplitude rise time (RT); an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words-in-noise tasks. Group analyses revealed significant group differences in auditory tasks (i.e., RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading but this relation was mediated by phonological processing and not by speech-in-noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet each individual dyslexic reader does not display a clear pattern of deficiencies across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors are expressed and interact with environmental and higher-order cognitive influences.
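The mediation pattern reported here (a raw auditory-reading correlation that disappears once phonological processing is controlled for) can be illustrated with a simple regression sketch. Everything below is synthetic and invented for illustration; the variable names and effect sizes are assumptions, not the study's data.

```python
# Illustration of mediation: an auditory measure predicts reading, but
# only through a phonological mediator (synthetic data, seed fixed).
import numpy as np

rng = np.random.default_rng(0)
n = 500
auditory = rng.normal(size=n)                              # e.g., rise-time sensitivity
phonology = 0.8 * auditory + rng.normal(scale=0.6, size=n)  # mediator
reading = 0.9 * phonology + rng.normal(scale=0.5, size=n)   # outcome

def ols_coefs(predictors, y):
    """Least-squares regression coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols_coefs([auditory], reading)[1]                   # auditory -> reading alone
controlled = ols_coefs([auditory, phonology], reading)[1]   # controlling for phonology

# The total effect is substantial, but the auditory coefficient shrinks
# toward zero once the mediator enters the model.
print(round(total, 2), round(controlled, 2))
```

This is the same logic as the classic regression-based mediation test: if the predictor's coefficient collapses when the mediator is added, the relation is carried by the mediator rather than being a direct effect.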
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information to determining the direction of self-motion perception. To examine this, a visual stimulus projected on a hemispherical screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine the perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information.
Megino-Elvira, Laura; Martín-Lobo, Pilar; Vergara-Moragues, Esperanza
The authors' aim was to analyze the relationship of eye movements, auditory perception, and phonemic awareness with the reading process. The instruments used were the King-Devick Test (saccade eye movements), the PAF test (auditory perception), the PFC (phonemic awareness), the PROLEC-R (lexical process), the Canals reading speed test, and the…
Ashmead, Daniel H; Grantham, D Wesley; Maloff, Erin S; Hornsby, Benjamin; Nakamura, Takabun; Davis, Timothy J; Pampel, Faith; Rushing, Erin G
These experiments address concerns that motor vehicles in electric engine mode are so quiet that they pose a risk to pedestrians, especially those with visual impairments. The "quiet car" issue has focused on hybrid and electric vehicles, although it also applies to internal combustion engine vehicles. Previous research has focused on detectability of vehicles, mostly in quiet settings. Instead, we focused on the functional ability to perceive vehicle motion paths. Participants judged whether simulated vehicles were traveling straight or turning, with emphasis on the impact of background traffic sound. In quiet, listeners made the straight-or-turn judgment soon enough in the vehicle's path to be useful for deciding whether to start crossing the street. This judgment is based largely on sound level cues rather than the spatial direction of the vehicle. With even moderate background traffic sound, the ability to tell straight from turn paths is severely compromised. The signal-to-noise ratio needed for the straight-or-turn judgment is much higher than that needed to detect a vehicle. Although a requirement for a minimum vehicle sound level might enhance detection of vehicles in quiet settings, it is unlikely that this requirement would contribute to pedestrian awareness of vehicle movements in typical traffic settings with many vehicles present. The findings are relevant to deliberations by government agencies and automobile manufacturers about standards for minimum automobile sounds and, more generally, for solutions to pedestrians' needs for information about traffic, especially for pedestrians with sensory impairments.
Santurette, Sébastien; Dau, Torsten
The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception...... sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural...... pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization. (C) 2012 Acoustical Society of America. [http...
D'Ausilio, Alessandro; Bartoli, Eleonora; Maffongelli, Laura; Berry, Jeffrey James; Fadiga, Luciano
Audiovisual speech perception is likely based on the association between auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be either driven by the detection of violations in the standard audiovisual statistics or via the sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. In order to disambiguate between the two hypotheses we exploit the fact that the tongue is hidden from vision. For this reason, tongue movement encoding can only be learned via speech production, not via perception of others' speech alone. Here we asked participants to identify speech sounds while they were shown matching or mismatching visual representations of tongue movements. Vision of congruent tongue movements facilitated auditory speech identification with respect to incongruent trials. This result suggests that direct visual experience of an articulator movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning. Copyright © 2014 Elsevier Ltd. All rights reserved.
The aim of the present study was to test whether transcranial electrical stimulation can modulate illusory perception in the auditory domain. In two separate experiments we applied transcranial Direct Current Stimulation (anodal/cathodal tDCS, 2 mA; N = 60) and high-frequency transcranial Random Noise Stimulation (hf-tRNS, 1.5 mA, offset 0; N = 45) on the temporal cortex during the presentation of the stimuli eliciting Deutsch's illusion. The illusion arises when two sine tones spaced one octave apart (400 and 800 Hz) are presented dichotically in alternation, one in the left and the other in the right ear, so that when the right ear receives the high tone, the left ear receives the low tone, and vice versa. The majority of the population perceives one high-pitched tone in one ear alternating with one low-pitched tone in the other ear. The results revealed that neither anodal nor cathodal tDCS applied over the left/right temporal cortex modulated the perception of the illusion, whereas hf-tRNS applied bilaterally on the temporal cortex reduced the number of times the sequence of sounds was perceived as Deutsch's illusion with respect to the sham control condition. The stimulation time before the beginning of the task (5 or 15 min) did not influence the perceptual outcome. In accordance with previous findings, we conclude that hf-tRNS can modulate auditory perception more efficiently than tDCS.
Nickisch, A; Heuckmann, C; Burger, T; Massinger, C
The diagnosis of APD (Auditory Perception Disorder) is a time-consuming procedure. At present, no screening test for APD exists in Germany that makes it possible to differentiate between children who are not likely to suffer from an APD and those who need to be diagnosed in detail. The Munich Auditory Screening of Perception Disorders (MAUS) contains the following subtests: Series of Syllables, Words in Noise, and Identification and Differentiation of Phonemes (test duration: 15 minutes). The MAUS was standardized using 359 primary school children between 6 and 11 years of age. Furthermore, the MAUS was used, in addition to the complete, extensive APD diagnostics, in testing 52 children (36 with APD and 16 without APD) within the age group mentioned. T-scores for each subtest were established by the standardization of the MAUS. The internal consistency of the test was sufficient. The intercorrelation between subtests was very slight; therefore, each subtest seems to play an independent part in defining the construct of APD. Because of the results of the pilot study which formed the basis for the development of the screening instrument, and because of the sensitivity scores reached in testing a group of 36 children with diagnosed APD, it can be expected that the MAUS will show high sensitivity with regard to APD. Using the MAUS, it can be determined whether and to what extent an individual's test results deviate from those of the normal primary school population. The MAUS can identify children at risk of having an APD and can differentiate these children from those who are unlikely to suffer from an APD.
We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.
Bolders, Anna C; Band, Guido P H; Stallen, Pieter Jan M
Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked-auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions arousal and pleasure. Results of the two experiments were analyzed both in separate analyses and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal did have a different effect on the threshold in Experiment 2, which showed a trend for arousal in the opposite direction. These results show that the effect of arousal on auditory-masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
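The adaptive staircase tracking method mentioned above can be illustrated with a short simulation. This is a minimal sketch, not the authors' procedure: the 2-down/1-up rule, the step size, the stopping criterion, and the simplified listener model (nearly always correct above an unknown true threshold, near chance below it) are all assumptions made for illustration.

```python
import random

def staircase_threshold(true_threshold, start_level=60.0, step=2.0,
                        reversals_needed=8, seed=1):
    """Simulate a 2-down/1-up adaptive staircase for a two-interval
    forced-choice detection task; the track converges near the
    70.7%-correct point of the psychometric function."""
    rng = random.Random(seed)
    level = start_level          # e.g. signal level in dB
    correct_streak = 0
    direction = None             # last movement of the track
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        # Hypothetical listener: nearly always correct above the (unknown)
        # true threshold, close to chance below it.
        p_correct = 0.99 if level >= true_threshold else 0.55
        if rng.random() < p_correct:
            correct_streak += 1
            if correct_streak == 2:      # two correct in a row -> harder
                correct_streak = 0
                if direction == "up":
                    reversal_levels.append(level)
                direction = "down"
                level -= step
        else:
            correct_streak = 0
            if direction == "down":      # one error -> easier
                reversal_levels.append(level)
            direction = "up"
            level += step
    # Conventional estimate: mean level at the recorded reversals.
    return sum(reversal_levels) / len(reversal_levels)
```

With a true threshold of 50 dB, the estimate lands close to 50, since the track reverses whenever the level crosses the region where performance changes.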
Whiteford, Kelly L; Oxenham, Andrew J
Congenital amusia is a music perception disorder believed to reflect a deficit in fine-grained pitch perception and/or short-term or working memory for pitch. Because most measures of pitch perception include memory and segmentation components, it has been difficult to determine the true extent of pitch processing deficits in amusia. It is also unclear whether pitch deficits persist at frequencies beyond the range of musical pitch. To address these questions, experiments were conducted with amusics and matched controls, manipulating both the stimuli and the task demands. First, we assessed pitch discrimination at low (500Hz and 2000Hz) and high (8000Hz) frequencies using a three-interval forced-choice task. Amusics exhibited deficits even at the highest frequency, which lies beyond the existence region of musical pitch. Next, we assessed the extent to which frequency coding deficits persist in one- and two-interval frequency-modulation (FM) and amplitude-modulation (AM) detection tasks at 500Hz at slow (fm=4Hz) and fast (fm=20Hz) modulation rates. Amusics still exhibited deficits in one-interval FM detection tasks that should not involve memory or segmentation. Surprisingly, amusics were also impaired on AM detection, which should not involve pitch processing. Finally, direct comparisons between the detection of continuous and discrete FM demonstrated that amusics suffer deficits in both coding and segmenting pitch information. Our results reveal auditory deficits in amusia extending beyond pitch perception that are subtle when controlling for memory and segmentation, and are likely exacerbated in more complex contexts such as musical listening. Copyright © 2017 Elsevier Ltd. All rights reserved.
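Stimuli of the kind used in the FM and AM detection tasks above can be synthesized directly. The sketch below is illustrative only: the modulation depths, duration, and sampling rate are assumptions, not the study's exact parameters; only the 500 Hz carrier and the 4 Hz slow modulation rate come from the abstract.

```python
import math

def fm_tone(fc=500.0, fm=4.0, depth_hz=10.0, dur=1.0, sr=16000):
    """Frequency-modulated tone: the instantaneous frequency swings
    +/- depth_hz around the carrier fc at the modulation rate fm.
    The phase is the integral of the instantaneous frequency."""
    n_samples = int(dur * sr)
    return [math.sin(2 * math.pi * fc * n / sr
                     - (depth_hz / fm) * math.cos(2 * math.pi * fm * n / sr))
            for n in range(n_samples)]

def am_tone(fc=500.0, fm=4.0, depth=0.5, dur=1.0, sr=16000):
    """Amplitude-modulated tone with modulation depth in [0, 1]:
    the envelope varies between (1 - depth) and (1 + depth)."""
    n_samples = int(dur * sr)
    return [(1.0 + depth * math.sin(2 * math.pi * fm * n / sr))
            * math.sin(2 * math.pi * fc * n / sr)
            for n in range(n_samples)]
```

In a detection task, the modulation depth (depth_hz or depth) is the quantity varied adaptively until the modulated tone can no longer be told apart from a steady pure tone.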
People obtain much of their information from visual and auditory sensation in daily life. Regarding the effect of visual stimuli on the perception of auditory stimuli, phonological perception and sound localization have been studied extensively. This study examined the effect of visual stimuli on the perception of loudness and pitch of auditory stimuli. We used images of figures whose size or brightness changed as visual stimuli, and pure tones whose loudness or pitch changed as auditory stimuli. These visual and auditory stimuli were combined independently to make four types of audio-visual multisensory stimuli for psychophysical experiments. In the experiments, participants judged changes in the loudness or pitch of the auditory stimuli while also judging the direction of size change or the kind of figure presented, so they could not ignore the visual stimuli while judging the auditory ones. As a result, the perception of loudness and pitch was enhanced significantly around the difference limen when the image became bigger or brighter, compared with the case in which the image did not change. This indicates that the perception of loudness and pitch is affected by changes in the size and brightness of visual stimuli.
Bordegoni, Monica; Ferrise, Francesco; Grani, Francesco
In this paper we describe an experiment that investigates the role of auditory feedback in affecting the perception of effort when using a physical pulley machine. Specifically, we investigated whether variations in the amplitude and frequency content of the pulley sound affect perception of effort....... Results show that variations in frequency content affect the perception of effort....
Remez, R E
Some influential accounts of speech perception have asserted that the goal of perception is to recover the articulatory gestures that create the acoustic signal, while others have proposed that speech perception proceeds by a method of acoustic categorization of signal elements. These accounts have been frustrated by difficulties in identifying a set of primitive articulatory constituents underlying speech production, and a set of primitive acoustic-auditory elements underlying speech perception. An argument by Lindblom favors an account of production and perception based on the auditory form of speech and its cognitive elaboration, rejecting the aim of defining a set of articulatory primitives by appealing to theoretical principle, while recognizing the empirical difficulty of identifying a set of acoustic or auditory primitives. An examination of this thesis found opportunities to defend some of its conclusions with independent evidence, but favors a characterization of the constituents of speech perception as linguistic rather than as articulatory or acoustic.
Erdener, Dogu; Burnham, Denis
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
Saoud, Houda; Josse, Goulven; Bertasi, Eric; Truy, Eric; Chait, Maria; Giraud, Anne-Lise
Asymmetry in auditory cortical oscillations could play a role in speech perception by fostering triage of information across the two hemispheres. Due to this asymmetry, fast speech temporal modulations relevant for phonemic analysis could be best perceived by the left auditory cortex, while slower modulations conveying vocal and paralinguistic information would be better captured by the right one. It is unclear, however, whether and how early oscillation-based selection influences speech perception. Using a dichotic listening paradigm in human participants, where we provided different parts of the speech envelope to each ear, we show that word recognition is facilitated when the temporal properties of speech match the rhythmic properties of auditory cortices. We further show that the interaction between the speech envelope and auditory cortical rhythms translates into their level of neural activity (as measured with fMRI). In the left auditory cortex, the neural activity level related to stimulus-brain rhythm interaction predicts speech perception facilitation. These data demonstrate that speech interacts with auditory cortical rhythms differently in right and left auditory cortex, and that in the latter, the interaction directly impacts speech perception performance.
Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.
Fernandes, Nayara Freitas; Yamaguti, Elisabete Honda; Morettin, Marina; Costa, Orozimbo Alves
To analyze speech perception in children with pre-lingual hearing loss and auditory neuropathy spectrum disorder who use bilateral hearing aids. This is a descriptive and exploratory study carried out at the Audiological Research Center (HRAC/USP). The study included four children aged between 8 years and 3 months and 12 years and 2 months. Lists of monosyllabic words, disyllabic words, nonsense words, and sentences, the Infant Toddler-Meaningful Auditory Integration Scale (IT-MAIS) and the Meaningful Use of Speech Scale (MUSS), and hearing and language categories were used. All lists were applied in an acoustic booth, with loudspeakers, in free field, in silence. The results showed an average of 69.5% for the list of monosyllabic words, 87.75% for the list of disyllabic words, 89.92% for the list of nonsense words, and 92.5% for the list of sentences. The therapeutic process that includes the use of bilateral hearing aids was extremely satisfactory, since it allowed the maximum development of auditory skills.
Sarro, Emma C; Sanes, Dan H
In humans, auditory perception reaches maturity over a broad age range, extending through adolescence. Despite this slow maturation, children are considered to be outstanding learners, suggesting that immature perceptual skills might actually be advantageous to improvement on an acoustic task as a result of training (perceptual learning). Previous non-human studies have not employed an identical task when comparing perceptual performance of young and mature subjects, making it difficult to assess learning. Here, we used an identical procedure on juvenile and adult gerbils to examine the perception of amplitude modulation (AM), a stimulus feature that is an important component of most natural sounds. On average, Adult animals could detect smaller fluctuations in amplitude (i.e., smaller modulation depths) than Juveniles, indicating immature perceptual skills in Juveniles. However, the population variance was much greater for Juveniles, a few animals displaying adult-like AM detection. To determine whether immature perceptual skills facilitated learning, we compared naïve performance on the AM detection task with the amount of improvement following additional training. The amount of improvement in Adults correlated with naïve performance: those with the poorest naïve performance improved the most. In contrast, the naïve performance of Juveniles did not predict the amount of learning. Those Juveniles with immature AM detection thresholds did not display greater learning than Adults. Furthermore, for several of the Juveniles with adult-like thresholds, AM detection deteriorated with repeated testing. Thus, immature perceptual skills in young animals were not associated with greater learning. (c) 2010 Wiley Periodicals, Inc.
Zhang, Juan; McBride-Chang, Catherine
While the importance of phonological sensitivity for understanding reading acquisition and impairment across orthographies is well documented, what underlies deficits in phonological sensitivity is not well understood. Some researchers have argued that speech perception underlies variability in phonological representations. Others have…
Seyed Basir Hashemi
Background: The number of children with cochlear implants who have other difficulties, such as attention deficiency and cerebral palsy, has increased dramatically. Despite the need for information on the results of cochlear implantation in this group, the available literature is extremely limited. We, therefore, sought to compare the levels of auditory perception in children with cochlear implants with and without additional disabilities. Methods: A spondee test comprising 20 two-syllable words was performed. The data analysis was done using SPSS, version 19. Results: Thirty-one children who had received cochlear implants 2 years previously and were at an average age of 7.5 years were compared via the spondee test. Of the 31 children, 15 had one or more additional disabilities. The data analysis indicated that the mean score of auditory perception in this group was approximately 30 points below that of the children with cochlear implants who had no additional disabilities. Conclusion: Although there was an improvement in the auditory perception of all the children with cochlear implants, there was a noticeable difference in the level of auditory perception between those with and without additional disabilities. Deafness combined with additional disabilities made these children dependent on lip reading alongside auditory modes of communication. In addition, the level of auditory perception in the children with cochlear implants who had more than one additional disability was significantly lower than that of the other children with cochlear implants who had one additional disability.
Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.
Iliadou, Vasiliki (Vivian); Ptok, Martin; Grech, Helen; Pedersen, Ellen Raben; Brechmann, André; Deggouj, Naïma; Kiese-Himmel, Christiane; Śliwińska-Kowalska, Mariola; Nickisch, Andreas; Demanez, Laurent; Veuillet, Evelyne; Thai-Van, Hung; Sirimanna, Tony; Callimachou, Marina; Santarelli, Rosamaria; Kuske, Sandra; Barajas, Jose; Hedjever, Mladen; Konukseven, Ozlem; Veraguth, Dorothy; Stokkereit Mattsson, Tone; Martins, Jorge Humberto; Bamiou, Doris-Eva
Current notions of “hearing impairment,” as reflected in clinical audiological practice, do not acknowledge the needs of individuals who have normal hearing pure tone sensitivity but who experience auditory processing difficulties in everyday life that are indexed by reduced performance in other more sophisticated audiometric tests such as speech audiometry in noise or complex non-speech sound perception. This disorder, defined as “Auditory Processing Disorder” (APD) or “Central Auditory Processing Disorder” is classified in the current tenth version of the International Classification of diseases as H93.25 and in the forthcoming beta eleventh version. APDs may have detrimental effects on the affected individual, with low esteem, anxiety, and depression, and symptoms may remain into adulthood. These disorders may interfere with learning per se and with communication, social, emotional, and academic-work aspects of life. The objective of the present paper is to define a baseline European APD consensus formulated by experienced clinicians and researchers in this specific field of human auditory science. A secondary aim is to identify issues that future research needs to address in order to further clarify the nature of APD and thus assist in optimum diagnosis and evidence-based management. This European consensus presents the main symptoms, conditions, and specific medical history elements that should lead to auditory processing evaluation. Consensus on definition of the disorder, optimum diagnostic pathway, and appropriate management are highlighted alongside a perspective on future research focus.
Fuller, Christina Diechina
Cochlear implants (CIs) are auditory prostheses for severely deaf people who do not benefit from conventional hearing aids. Speech perception is reasonably good with CIs; other signals, such as music, are more challenging to perceive. First, the perception of music and music-related perception in CI users was tested. Second, the possible positive influence of musical training on auditory perception was investigated. The enjoyment of music in CI users was suboptimal. Identifying vocal emotions (angry...
McCourt, Mark E; Leone, Lynnette M
We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
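Interaural time differences of the kind used above to place sound sources in 3D auditory space are often approximated with the classic Woodworth spherical-head model. This sketch is illustrative only; the head radius and speed of sound below are conventional textbook values, and the abstract does not state how the authors actually rendered their stimuli.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth approximation of the interaural time difference (ITD)
    for a distant source at the given azimuth (0 deg = straight ahead,
    positive azimuths toward one ear, negative toward the other)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source sweeping smoothly from one side to the other produces a
# smoothly changing ITD -- one binaural cue for rendering auditory motion.
itd_trajectory = [itd_seconds(az) for az in range(-90, 91, 15)]
```

At 90 degrees the model gives an ITD of roughly 0.65 ms, in line with the commonly cited maximum for an adult head; updating the ITD sample by sample is one way to simulate a moving source over headphones.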
Bayat, Arash; Farhadi, Mohammad; Pourbakht, Akram; Sadjedi, Hamed; Emamdjomeh, Hesam; Kamali, Mohammad; Mirmomeni, Golshan
Background Auditory scene analysis (ASA) is the process by which the auditory system separates individual sounds in natural-world situations. ASA is a key function of the auditory system and contributes to speech discrimination in noisy backgrounds. It is known that sensorineural hearing loss (SNHL) detrimentally affects auditory function in complex environments, but relatively few studies have focused on the influence of SNHL on the higher-level processes likely involved in auditory perception in different situations. Objectives The purpose of the current study was to compare the auditory abilities of normally hearing and SNHL subjects using an ASA examination. Materials and Methods A total of 40 right-handed adults (age range: 18 - 45 years) participated in this study. The listeners were divided equally into control and mild-to-moderate SNHL groups. ASA ability was measured using an ABA-ABA sequence. The frequency of the "A" tone was kept constant at 500, 1000, 2000, or 4000 Hz, while the frequency of the "B" tone was set 3 to 80 percent above the "A" tone. For ASA threshold detection, the frequency of the B stimulus was decreased until listeners reported that they could no longer hear two separate sounds. Results ASA performance was significantly better for controls than for the SNHL group; these differences were more pronounced at higher frequencies. We found no significant differences in ASA ability as a function of tone duration in either group. Conclusions The present study indicated that SNHL may reduce the perceptual separation of incoming acoustic information into accurate representations of our acoustic world. PMID:24719695
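The descending threshold procedure described above can be sketched as follows. The listener model, step size, and parameter names are hypothetical placeholders for illustration, not the study's actual protocol.

```python
# Descending threshold sketch for the ABA streaming task: start with "B" far
# above "A" (80%) and reduce the separation until the listener no longer
# reports hearing two separate streams.
def asa_threshold(base_hz, listener, start_pct=80.0, stop_pct=3.0, step_pct=1.0):
    pct = start_pct
    while pct >= stop_pct:
        b_hz = base_hz * (1 + pct / 100.0)
        if not listener(base_hz, b_hz):  # listener stops hearing two streams
            return pct
        pct -= step_pct
    return stop_pct

# Hypothetical listener who segregates whenever B is more than 20% above A.
toy_listener = lambda a, b: (b - a) / a > 0.20
```

Running `asa_threshold(1000, toy_listener)` walks the B tone down from 80% until segregation fails, returning the toy listener's 20% boundary.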
Radziwon, Kelly E.
Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice use frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development. Results showed large variation in calling rates among the three cages of adult mice but results were highly
Tonelli, Alessia; Brayda, Luca; Gori, Monica
Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information, that is sufficient to influence auditory perception, is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.
Blamey, P J; Cowan, R S; Alcantara, J I; Whitford, L A; Clark, G M
Four normally-hearing subjects were trained and tested with all combinations of a highly-degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor...
Champoux, François; Shiller, Douglas M; Zatorre, Robert J
In the present study, we demonstrate an audiotactile effect in which amplitude modulation of auditory feedback during voiced speech induces a throbbing sensation over the lip and laryngeal regions. Control tasks coupled with the examination of speech acoustic parameters allow us to rule out the possibility that the effect may have been due to cognitive factors or motor compensatory effects. We interpret the effect as reflecting the tight interplay between auditory and tactile modalities during vocal production.
Prather, Jonathan F
Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013
Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan
Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. Copyright © 2013 Elsevier Inc. All rights reserved.
Perception involves the collection, processing and interpretation of information through sensory receptors and represents the reality of an individual. Collecting customer information is imperative for marketing, because consumers are the focus of defining all its objectives, strategies and plans. The result of perception depends on a number of factors, and that is why people do not experience stimuli in the same way. A marketing research study of consumer perceptions has been carried out in order to identify the habits and understand the behavior of consumers when choosing products, with special emphasis on the influence of perception, stimuli from the environment and perceptions of risk on their decisions.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, it was examined whether these problems are due to impairments of the concurrent auditory segregation procedure, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Concurrent auditory segregation using the competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared in 30 male older adults (15 normal and 15 cases with right-hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing the mean scores of the CST and DDT between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right-hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.
Fuller, Christina Diechina
Banai, Karen; Ahissar, Merav
The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.
Meha-Bettison, Kiriana; Sharma, Mridula; Ibrahim, Ronny K; Mandikal Vasuki, Pragati Rao
The current research investigated whether professional musicians outperformed non-musicians on auditory processing and speech-in-noise perception as assessed using behavioural and electrophysiological tasks. Spectro-temporal processing skills were assessed using a psychoacoustic test battery. Speech-in-noise perception was measured using the Listening in Spatialised Noise - Sentences (LiSN-S) test and Cortical Auditory Evoked Potentials (CAEPs) recorded to the speech syllable /da/ presented in quiet and in 8-talker babble noise at 0, 5, and 10 dB signal-to-noise ratios (SNRs). Ten professional musicians and 10 non-musicians participated in this study. Musicians significantly outperformed non-musicians in the frequency discrimination task and the low-cue condition of the LiSN-S test. Musicians' N1 amplitude showed no difference between the 5 dB and 0 dB SNR conditions, while non-musicians showed significantly lower N1 amplitude at 0 dB SNR than at 5 dB SNR. Brain-behaviour correlation for musicians showed a significant association between CAEPs at 5 dB SNR and the low-cue condition of the LiSN-S test at 30-70 ms. Time-frequency analysis indicated that musicians had significantly greater alpha power desynchronisation in the 0 dB SNR condition, indicating involvement of attention. Through the use of behavioural and electrophysiological data, the results provide converging evidence for improved speech recognition in noise in musicians.
Walker-Andrews, Arlene S.; Lennon, Elizabeth M.
Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…
Gick, Bryan; Jóhannsdóttir, Kristín M.; Gibraiel, Diana; Mühlbauer, Jeff
A single pool of untrained subjects was tested for interactions across two bimodal perception conditions: audio-tactile, in which subjects heard and felt speech, and visual-tactile, in which subjects saw and felt speech. Identifications of English obstruent consonants were compared in bimodal and no-tactile baseline conditions. Results indicate that tactile information enhances speech perception by about 10 percent, regardless of which other mode (auditory or visual) is active. However, withi...
the Environment for Auditory Research BRAXTON BOREN MENTOR: MARK ERICSON HUMAN RESEARCH AND ENGINEERING DIRECTORATE ABERDEEN PROVING GROUND, MARYLAND
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
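The dominance-with-dilution pattern described above is qualitatively captured by a reliability-weighted fusion rule, the forced-fusion special case of the full Bayesian Causal Inference model. The sketch below uses hypothetical numbers purely for illustration; it is not the study's fitted model.

```python
# Reliability-weighted (maximum-likelihood) fusion of a visual and an
# auditory location estimate: the combined estimate is pulled toward the
# more reliable cue, so a visual bias dominates but is diluted by audition.
def fuse(mu_v, sigma_v, mu_a, sigma_a):
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    mu = w_v * mu_v + (1 - w_v) * mu_a
    var = 1 / (1 / sigma_v**2 + 1 / sigma_a**2)
    return mu, var

# Hypothetical central visual bias (-2 deg) and peripheral auditory
# bias (+3 deg), with vision three times more precise than audition.
mu, var = fuse(mu_v=-2.0, sigma_v=1.0, mu_a=3.0, sigma_a=3.0)
# visual weight = 9/10, so mu = 0.9*(-2) + 0.1*3 = -1.5
```

The fused estimate stays visually dominated (-1.5 rather than -2.0) yet is shifted toward the auditory cue, mirroring the reduced visual bias under integration.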
In this thesis the influence of the auditory environment on the emotional perception of speech in mediated communication is addressed. The motivation for this study is the development of techniques that enable suppression of environmental sound, with the goal of increasing the signal-to-noise ratio in
Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.
This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2-4 (N = 133) participated and were administered…
Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti
To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception, intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination: sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.
BACKGROUND: Individuals with the rare genetic disorder Williams-Beuren syndrome (WS) are known for their characteristic auditory phenotype including strong affinity to music and sounds. In this work we attempted to pinpoint a neural substrate for the characteristic musicality in WS individuals by studying the structure-function relationship of their auditory cortex. Since WS subjects had only minor musical training due to psychomotor constraints we hypothesized that any changes compared to the control group would reflect the contribution of genetic factors to auditory processing and musicality. METHODOLOGY/PRINCIPAL FINDINGS: Using psychoacoustics, magnetoencephalography and magnetic resonance imaging, we show that WS individuals exhibit extreme and almost exclusive holistic sound perception, which stands in marked contrast to the even distribution of this trait in the general population. Functionally, this was reflected by increased amplitudes of left auditory evoked fields. On the structural level, volume of the left auditory cortex was 2.2-fold increased in WS subjects as compared to control subjects. Equivalent volumes of the auditory cortex have been previously reported for professional musicians. CONCLUSIONS/SIGNIFICANCE: There has been an ongoing debate in the neuroscience community as to whether increased gray matter of the auditory cortex in musicians is attributable to the amount of training or innate disposition. In this study musical education of WS subjects was negligible and control subjects were carefully matched for this parameter. Therefore our results not only unravel the neural substrate for this particular auditory phenotype, but in addition propose WS as a unique genetic model for training-independent auditory system properties.
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
Lagacé, Josée; Jutras, Benoît; Gagné, Jean-Pierre
A hallmark listening problem of individuals presenting with auditory processing disorder (APD) is their poor recognition of speech in noise. The underlying perceptual problem of the listening difficulties in unfavorable listening conditions is unknown. The objective of this article was to demonstrate theoretically how to determine whether the speech recognition problems are related to an auditory dysfunction, a language-based dysfunction, or a combination of both. Tests such as the Speech Perception in Noise (SPIN) test allow the exploration of the auditory and language-based functions involved in speech perception in noise, which is not possible with most other speech-in-noise tests. Psychometric functions illustrating results from hypothetical groups of individuals with APD on the SPIN test are presented. This approach makes it possible to postulate about the origin of the speech perception problems in noise. APD is a complex and heterogeneous disorder for which the underlying deficit is currently unclear. Because of their design, SPIN-like tests can potentially be used to identify the nature of the deficits underlying problems with speech perception in noise for this population. A better understanding of the difficulties with speech perception in noise experienced by many listeners with APD should lead to more efficient intervention programs.
Kirkwood, Brent Christopher
Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length-estimation experiment. A comparison of the length-estimation accuracy for the three presentation methods indicates that the choice of presentation method is important for maintaining realism and for maintaining the acoustic cues utilized by listeners in perceiving length.
Alcántara, J I; Blamey, P J; Clark, G M
The following study compared the effectiveness of unimodal and bimodal training strategies at improving the perception of speech information under a variety of conditions. Normal-hearing subjects were trained in the perception of vowel and consonant stimuli. Speech information was provided via a multiple-channel electrotactile speech-processing aid (the Tickle Talker), a 200-Hz low-pass filtered auditory signal, or both. Two subjects were trained only in the combined tactile-plus-auditory (TA) condition; the remaining two were trained in both the tactile-alone (T) and auditory-alone (A) conditions, although only one condition was used at any single time. All subjects were evaluated in the TA, T, and A conditions, both at the beginning of the study, prior to training, and at the completion of training, on closed-set vowel and consonant confusion tests and on an open-set word test. Results indicated that whilst statistically significant improvements occurred from one evaluation period to the next in both groups of subjects, the improvements per condition were not dependent on the type of training received. The results provide a preliminary indication that the provision of unimodal training does not impair the perception of speech information under bimodal perception conditions.
Yamamoto, Kosuke; Kawabata, Hideaki
We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
Chiou, Rocco; Stelter, Marleen; Rich, Anina N
Our brain constantly integrates signals across different senses. Auditory-visual synaesthesia is an unusual form of cross-modal integration in which sounds evoke involuntary visual experiences. Previous research primarily focuses on synaesthetic colour, but little is known about non-colour synaesthetic visual features. Here we studied a group of synaesthetes for whom sounds elicit consistent visual experiences of coloured 'geometric objects' located at specific spatial locations. Changes in auditory pitch alter the brightness, size, and spatial height of synaesthetic experiences in a systematic manner resembling the cross-modal correspondences of non-synaesthetes, implying synaesthesia may recruit cognitive/neural mechanisms for 'normal' cross-modal processes. To objectively assess the impact of synaesthetic objects on behaviour, we devised a multi-feature cross-modal synaesthetic congruency paradigm and asked participants to perform speeded colour or shape discrimination. We found irrelevant sounds influenced performance, as quantified by congruency effects, demonstrating that synaesthetes were not able to suppress their synaesthetic experiences even when these were irrelevant for the task. Furthermore, we found some evidence for task-specific effects consistent with feature-based attention acting on the constituent features of synaesthetic objects: synaesthetic colours appeared to have a stronger impact on performance than synaesthetic shapes when synaesthetes attended to colour, and vice versa when they attended to shape. We provide the first objective evidence that visual synaesthetic experience can involve multiple features forming object-like percepts and suggest that each feature can be selected by attention despite it being internally generated. These findings suggest theories of the brain mechanisms of synaesthesia need to incorporate a broader neural network underpinning multiple visual features, perceptual knowledge, and feature integration, rather than
Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel
Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.
Lynne E Bernstein
Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how audiovisual training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures in a protocol with a fixed number of trials. In Experiment 1, paired-associates (PA) audiovisual training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early audiovisual speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning. PMID:23515520
Anderson, David J
Microelectrode arrays offer auditory-system physiologists many opportunities through a number of electrode technologies. In particular, silicon substrate electrode arrays offer a large design space including choice of layout plan, range of surface areas for active sites, a choice of site materials and high spatial resolution. Further, most designs can double as recording and stimulation electrodes in the same preparation. Scala tympani auditory prosthesis research has been aided by mapping electrodes in the cortex and the inferior colliculus to assess the CNS responses to peripheral stimulation. More recently, silicon stimulation electrodes placed in the auditory nerve, cochlear nucleus and the inferior colliculus have advanced the exploration of alternative stimulation sites for auditory prostheses. Multiplication of results from experimental effort by simultaneously stimulating several locations, or by acquiring several streams of data synchronized to the same stimulation event, is a commonly sought-after advantage. Examples of inherently multichannel functions that are not possible with single electrode sites include (1) current steering resulting in more focused stimulation, (2) improved signal-to-noise ratio (SNR) for recording when noise and/or neural signals appear on more than one site and (3) current source density (CSD) measurements. Still more powerful are methods that exploit closely-spaced recording and stimulation sites to improve detailed interrogation of the surrounding neural domain. Here, we discuss thin-film recording/stimulation arrays on silicon substrates. These electrode arrays have been shown to be valuable because of their precision coupled with reproducibility in an ever-expanding design space. The shape of the electrode substrate can be customized to accommodate use in cortical, deep and peripheral neural structures while flexible cables, fluid delivery and novel coatings have been added to broaden their application. The use of
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL) since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first one compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker that were found highly accurate. Then, we asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information, obtained from the previous task, did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, showing a weak, but complex, mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to previously reported underestimation for distances over 2 m.
Polzer, U; Gaebel, W
The assessment of nonverbal expression (e.g., facial action, speech, body movements, etc.) is an important aspect of the diagnostic and prognostic process in psychiatric patients. By means of observer rating scales, expression is usually assessed on different observation levels. It appears that visual and auditory perception of expression interfere with one another. In the present study it was demonstrated that ratings of certain attributes of expression were significantly more inconsistent in schizophrenic than in depressed patients, provided information was simultaneously displayed to both visual and auditory channels of perception. A "disintegration" of the components of expression in schizophrenics may explain why raters get differing impressions of the patient's overall expression. Moreover, the description of expressive behaviors seems to be influenced by diagnostic stereotypes. The development of a more objective method of assessment would therefore be promising.
Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children aged 6 years 3 months to 6 years 8 months attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. Both normal reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and had a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicated that speech perception also had a unique direct impact upon reading development and not only through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Biau, Emmanuel; Soto-Faraco, Salvador
Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…
Gygi, Brian; Shafiro, Valeriy
Previously, Gygi and Shafiro (2011) found that when environmental sounds are semantically incongruent with the background scene (e.g., horse galloping in a restaurant), they can be identified more accurately by young normal-hearing listeners (YNH) than sounds congruent with the scene (e.g., horse galloping at a racetrack). This study investigated how age and high-frequency audibility affect this Incongruency Advantage (IA) effect. In Experiments 1a and 1b, elderly listeners (N = 18 for 1a; N = 10 for 1b) with age-appropriate hearing (EAH) were tested on target sounds and auditory scenes in 5 sound-to-scene ratios (So/Sc) between -3 and -18 dB. Experiment 2 tested 11 YNH on the same sound-scene pairings lowpass-filtered at 4 kHz (YNH-4k). The EAH and YNH-4k groups exhibited an almost identical pattern of significant IA effects, but both were at approximately 3.9 dB higher So/Sc than the previously tested YNH listeners. However, the psychometric functions revealed a shallower slope for EAH listeners compared with YNH listeners for the congruent stimuli only, suggesting a greater difficulty for the EAH listeners in attending to sounds expected to occur in a scene. These findings indicate that semantic relationships between environmental sounds in soundscapes are mediated by both audibility and cognitive factors and suggest a method for dissociating these factors.
Sato, Marc; Cavé, Christian; Ménard, Lucie; Brasseur, Annie
The present study investigated whether manual tactile information from a speaker's face modulates the intelligibility of speech when audio-tactile perception is compared with audio-only perception. Since more elaborated auditory and tactile skills have been reported in the blind, two groups of congenitally blind and sighted adults were compared. Participants performed a forced-choice syllable decision task across three conditions: audio-only and congruent/incongruent audio-tactile conditions. For the auditory modality, the syllables were embedded or not in noise while, for the tactile modality, participants felt in synchrony a mouthed syllable by placing a hand on the face of a talker. In the absence of acoustic noise, syllables were almost perfectly recognized in all conditions. On the contrary, with syllables embedded with acoustic noise, more correct responses were reported in case of congruent mouthing compared to no mouthing, and in case of no mouthing compared to incongruent mouthing. Interestingly, no perceptual differences were observed between blind and sighted adults. These findings demonstrate that manual tactile information relevant to recovering speech gestures modulates auditory speech perception in case of degraded acoustic information and that audio-tactile interactions occur similarly in blind and sighted untrained listeners. Copyright © 2010 Elsevier Ltd. All rights reserved.
Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, "missing fundamental" pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.
Zhang, Yang; Kuhl, Patricia; Imada, Toshiaki; Kotani, Makoto
This phonetic study examined neural encoding of within- and cross-category information as a function of language experience. Behavioral and magnetoencephalography (MEG) measures for synthetic /ba-wa/ and /ra-la/ stimuli were obtained from ten American and ten Japanese subjects. The MEG experiments employed the oddball paradigm in two conditions. One condition used single exemplars to represent the phonetic categories, and the other introduced within-category variations for both the standard and deviant stimuli. Behavioral results showed three major findings: (a) a robust phonetic boundary effect was observed only in the native listeners; (b) all listeners were able to detect within-category differences on an acoustic basis; and (c) both within- and cross-category discriminations were strongly influenced by language experience. Consistent with behavioral findings, American listeners had larger mismatch field (MMF) responses for /ra-la/ in both conditions but not for /ba-wa/ in either. Moreover, American listeners showed a significant MMF reduction in encoding within-category variations for /ba-wa/ but not for /ra-la/, and Japanese listeners had MMF reductions for both. These results strongly suggest that the grain size of auditory mismatch response is determined not only by experience-dependent phonetic knowledge, but also by the specific characteristics of speech stimuli. [Work supported by NIH.]
Newman, Dina L.; Fisher, Laurel M.; Ohmen, Jeffrey; Parody, Robert; Fong, Chin-To; Frisina, Susan T.; Mapes, Frances; Eddins, David A.; Frisina, D. Robert; Frisina, Robert D.; Friedman, Rick A.
Age-related hearing impairment (ARHI), or presbycusis, is a common condition of the elderly that results in significant communication difficulties in daily life. Clinically, it has been defined as a progressive loss of sensitivity to sound, starting at the high frequencies, inability to understand speech, lengthening of the minimum discernable temporal gap in sounds, and a decrease in the ability to filter out background noise. The causes of presbycusis are likely a combination of environmental and genetic factors. Previous research into the genetics of presbycusis has focused solely on hearing as measured by pure-tone thresholds. A few loci have been identified, based on a best ear pure-tone average phenotype, as having a likely role in susceptibility to this type of hearing loss; and GRM7 is the only gene that has achieved genome-wide significance. We examined the association of GRM7 variants identified from the previous study, which used a European cohort with Z-scores based on pure-tone thresholds, in a European–American population from Rochester, NY (N = 687), and used novel phenotypes of presbycusis. In the present study mixed modeling analyses were used to explore the relationship of GRM7 haplotype and SNP genotypes with various measures of auditory perception. Here we show that GRM7 alleles are associated primarily with peripheral measures of hearing loss, and particularly with speech detection in older adults. PMID:23102807
Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
Jill B Firszt
Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type
Wightman, Frederic L.; Zahorik, Pavel A.
Attending a live concert is a multisensory experience. In some cases it could be argued that hearing is the primary sense involved, but it is never the only one. Vision, smell, and even touch make important contributions to the overall experience. Moreover, the senses interact such that what one hears, for example, is influenced by what one sees, and vice-versa. This talk will address primarily the auditory aspects of the concert experience, focusing on the results of basic studies of human spatial hearing in reverberant environments, and how these results may help us understand the concert experience. The topics will include sound localization in anechoic and reverberant environments, the precedence effect, the cocktail party effect, the perception of distance, and the impact of room acoustics on loudness perception. Also discussed will be what has been learned from empirical research on auditory-visual interactions. In this area the focus will be on the visual capture effects, the best known of which is the ventriloquism effect. Finally, the limitations of modern psychoacoustics will be addressed in connection with the problem of fully revealing the complexities of the concert experience, especially individual differences in subjective impression.
An essential step in understanding the processes underlying the general mechanism of perceptual categorization is to identify which portions of a physical stimulation modulate the behavior of our perceptual system. More specifically, in the context of speech comprehension, it is still a major open challenge to understand which information is used to categorize a speech stimulus as one phoneme or another, the auditory primitives relevant for the categorical perception of speech being still unknown. Here we propose to adapt a technique relying on a Generalized Linear Model (GLM) with smoothness priors, already used in the visual domain for the estimation of so-called classification images, to auditory experiments. This statistical model offers a rigorous framework for dealing with non-Gaussian noise, as is often the case in the auditory modality, and limits the amount of noise in the estimated template by enforcing smoother solutions. By applying this technique to a specific two-alternative forced choice experiment between stimuli ‘aba’ and ‘ada’ in noise with an adaptive SNR, we confirm that the second formant transition is a key cue for classifying phonemes into /b/ or /d/ in noise, and that its estimation by the auditory system is a relative measurement across spectral bands and in relation to the perceived height of the second formant in the preceding syllable. Through this example, we show how the GLM with smoothness priors approach can be applied to the identification of fine functional acoustic cues in speech perception. Finally we discuss some assumptions of the model in the specific case of speech perception.
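The classification-image idea underlying this approach can be illustrated with a minimal, stdlib-only simulation. This is a sketch of the general logic only, not the authors' GLM-with-smoothness-priors implementation: the feature count, the hidden template, and the noise levels are all invented for illustration, and a simple moving average stands in for the smoothness prior.

```python
import random

random.seed(1)

N_FEAT = 16  # hypothetical number of spectral bands
# Hidden internal template: the simulated listener weights bands 5-8
TEMPLATE = [1.0 if 5 <= i <= 8 else 0.0 for i in range(N_FEAT)]

def simulate_trial():
    """One 2AFC trial: the binary response is driven by template-filtered noise."""
    noise = [random.gauss(0.0, 1.0) for _ in range(N_FEAT)]
    drive = sum(t * n for t, n in zip(TEMPLATE, noise))
    response = 1 if drive + random.gauss(0.0, 1.0) > 0 else 0
    return noise, response

def classification_image(n_trials=20000):
    """Contrast the mean noise fields for the two responses, then smooth."""
    sums = [[0.0] * N_FEAT, [0.0] * N_FEAT]
    counts = [0, 0]
    for _ in range(n_trials):
        noise, response = simulate_trial()
        counts[response] += 1
        for i, x in enumerate(noise):
            sums[response][i] += x
    raw = [sums[1][i] / counts[1] - sums[0][i] / counts[0]
           for i in range(N_FEAT)]
    # Moving average: a crude stand-in for the smoothness prior
    return [(raw[max(i - 1, 0)] + raw[i] + raw[min(i + 1, N_FEAT - 1)]) / 3
            for i in range(N_FEAT)]
```

With enough trials the recovered image peaks on the bands the hidden template actually uses; in the GLM formulation the smoothing is instead built into the fit as a prior, which is what makes the estimate robust to non-Gaussian response noise.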
Santos, Daniel P R; Barbosa, Roberto N; Vieira, Luiz H P; Santiago, Paulo R P; Zagatto, Alessandro M; Gomes, Matheus M
Identifying the trajectory and spin of the ball with speed and accuracy is critical for good performance in table tennis. The aim of this study was to analyze the ability of table tennis players presenting different levels of training/experience to identify the magnitude of the ball spin from the sound produced when the racket hit the ball. Four types of "forehand" contact sounds were collected in the laboratory, defined as: Fast Spin (spinning ball forward at 140 r/s); Medium Spin (105 r/s); Slow Spin (84 r/s); and Flat Hit (less than 60 r/s). Thirty-four table tennis players of both sexes (24 men and 10 women) aged 18-40 years listened to the sounds and tried to identify the magnitude of the ball spin. The results revealed that in 50.9% of the cases the table tennis players were able to identify the ball spin and the observed number of correct answers (10.2) was significantly higher (χ(2) = 270.4, p <0.05) than the number of correct answers that could occur by chance. On the other hand, the results did not show any relationship between the level of training/experience and auditory perception of the ball spin. This indicates that auditory information contributes to identification of the magnitude of the ball spin, however, it also reveals that, in table tennis, the level of training does not interfere with the auditory perception of the ball spin.
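The comparison against chance reported above rests on a Pearson chi-square goodness-of-fit test. A minimal sketch follows; the trial counts are invented for illustration (the abstract does not give per-listener trial numbers), but the statistic itself is computed the standard way.

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic for observed vs. expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical listener: 20 trials, 4 spin categories, chance = 1/4.
n_trials, n_categories = 20, 4
observed = [11, 9]                                    # correct vs. incorrect
expected = [n_trials / n_categories,                  # 5 expected correct
            n_trials * (n_categories - 1) / n_categories]  # 15 incorrect
stat = chi_square(observed, expected)                 # (36/5) + (36/15) = 9.6
```

The resulting statistic is then compared to a chi-square distribution with one degree of freedom to decide whether identification exceeds chance, which is the same logic behind the study's reported χ² = 270.4.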
Vandewalle, Ellen; Boets, Bart; Ghesquiere, Pol; Zink, Inge
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of 6 years 3 months to 6 years 8 months-old children attending grade 1: (1) children with specific language impairment (SLI) and literacy delay…
Zaar, Johannes; Dau, Torsten
Responses obtained in consonant perception experiments typically show a large variability across stimuli of the same phonetic identity. The present study investigated the influence of different potential sources of this response variability. A distinction was made between source-induced variability, referring to perceptual differences caused by acoustical differences in the speech tokens and/or the masking noise tokens, and receiver-related variability, referring to perceptual differences caused by within- and across-listener uncertainty. Two experiments were conducted with normal-hearing listeners using consonant-vowel combinations (CVs) in white noise. The responses were analyzed with respect to the different sources of variability based on a measure of perceptual distance. The speech-induced variability across and within talkers and the across-listener variability were substantial...
Most, Tova; Rothem, Hilla; Luntz, Michal
The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…
Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.
Sameiro-Barbosa, Catia M.; Geiser, Eveline
The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system. PMID:27559306
This study investigated a potential auditory illusion in duration perception induced by rhythmic temporal contexts. Listeners with or without musical training performed a duration discrimination task for a silent period in a rhythmic auditory sequence. The critical temporal interval was presented either within a perceptual group or between two perceptual groups. We report the just-noticeable difference (difference limen, DL) for temporal intervals and the point of subjective equality (PSE) derived from individual psychometric functions based on performance of a two-alternative forced choice task. In musically untrained individuals, equal temporal intervals were perceived as significantly longer when presented between perceptual groups than within a perceptual group (109.25% versus 102.5% of the standard duration). Only the perceived duration of the between-group interval was significantly longer than its objective duration. Musically trained individuals did not show this effect. However, in both musically trained and untrained individuals, the relative difference limens for discriminating the comparison interval from the standard interval were larger in the between-groups condition than in the within-group condition (7.3% vs. 5.6% of the standard duration). Thus, rhythmic grouping affected sensitivity to duration changes in all listeners, with duration differences being harder to detect at boundaries of rhythm groups than within rhythm groups. Our results show for the first time that temporal Gestalt induces auditory duration illusions in typical listeners, but that musical experts are not susceptible to this effect of rhythmic grouping.
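The DL and PSE measures used above are read off a listener's psychometric function. A minimal stdlib sketch, with hypothetical response proportions (the actual study fitted full psychometric functions rather than interpolating linearly), recovers both from 2AFC data:

```python
def interp_x(points, target):
    """Stimulus value where the psychometric function crosses `target`
    (points: (stimulus, proportion) pairs sorted by stimulus)."""
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= target <= p1:
            return x0 + (target - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("target outside measured range")

# Hypothetical 2AFC data: comparison duration (% of standard) vs.
# proportion of 'comparison longer' responses.
data = [(90, 0.05), (95, 0.20), (100, 0.45), (105, 0.75), (110, 0.95)]
pse = interp_x(data, 0.50)                  # point of subjective equality
dl = (interp_x(data, 0.75) - interp_x(data, 0.25)) / 2  # difference limen
```

A PSE above 100% of the standard means the comparison had to be physically longer to sound equal, which is exactly how the between-group lengthening effect (109.25%) is expressed; the DL quantifies discrimination sensitivity as half the 25-75% spread.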
Laurent, Raphaël; Barnaud, Marie-Lou; Schwartz, Jean-Luc; Bessière, Pierre; Diard, Julien
There is a consensus concerning the view that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems remains seldom addressed and poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain of efficiency provided by their Bayesian fusion. Here, we show 3 main results: (a) In a set of precisely defined "perfect conditions," auditory and motor theories of speech perception are indistinguishable; (b) When a learning process that mimics speech development is introduced into COSMO, it departs from these perfect conditions. Then auditory recognition becomes more efficient than motor recognition in dealing with learned stimuli, while motor recognition is more efficient in adverse conditions. We interpret this result as a general "auditory-narrowband versus motor-wideband" property; and (c) Simulations of plosive-vowel syllable recognition reveal possible cues from motor recognition for the invariant specification of the place of plosive articulation in context that are lacking in the auditory pathway. This provides COSMO with a second property, where auditory cues would be more efficient for vowel decoding and motor cues for plosive articulation decoding. These simulations provide several predictions, which are in good agreement with experimental data and suggest that there is natural complementarity between auditory and motor processing within a perceptuo-motor theory of speech perception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
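The Bayesian fusion at the heart of COSMO can be sketched in a few lines. This is a toy naive-Bayes combination over two syllable categories with invented likelihood values, not the model's actual distributions or learning process:

```python
def normalize(dist):
    """Rescale a dict of non-negative scores so they sum to 1."""
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def fuse(aud_like, mot_like, prior):
    """Naive Bayes fusion of auditory and motor likelihoods over categories."""
    return normalize({c: aud_like[c] * mot_like[c] * prior[c] for c in prior})

# Invented likelihoods for one ambiguous stimulus over two categories:
prior = {"/ba/": 0.5, "/da/": 0.5}
aud = {"/ba/": 0.6, "/da/": 0.4}   # auditory decoder mildly favors /ba/
mot = {"/ba/": 0.7, "/da/": 0.3}   # motor decoder also favors /ba/
post = fuse(aud, mot, prior)       # /ba/ posterior = 0.21/0.27 = 7/9
```

The fused posterior is sharper than either decoder alone, which is the "gain of efficiency" the model attributes to combining an auditory-narrowband and a motor-wideband pathway.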
Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether/how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ or ‘group motion’. In element motion, the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; while in group motion, both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intercede at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide imposed a remarkable effect on separating visual events, and led to a dominant perception of group motion as well. The auditory configuration with gap transfer illusion triggered the same auditory capture effect. Further investigations showed that the visual interval which coincided with the gap interval (50-230 ms) in the long glide was perceived to be shorter than that within both the short glide and the ‘gap-transfer’ auditory configurations in the same physical intervals (gaps). The results indicated that auditory temporal perceptual grouping takes priority over the cross-modal interaction in determining the final readout of the visual perception, and that the mechanism of selective attention on auditory events also plays a role.
The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the ‘ABA-’ auditory streaming paradigm we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which, by competing for dominance, give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in
We have recently shown that vision is important to improve spatial auditory cognition. In this study we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated the space bisection task twice consecutively without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues added no further benefit once spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing spatial auditory task as well as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound
Meyer, Martin; Elmer, Stefan; Baumann, Simon; Jancke, Lutz
In this EEG study we sought to examine the neuronal underpinnings of short-term plasticity as a top-down guided auditory learning process. We hypothesized that (i) auditory imagery should elicit proper auditory evoked effects (N1/P2 complex) and a late positive component (LPC). Generally, based on recent human brain mapping studies, we expected (ii) to observe the involvement of different temporal and parietal lobe areas in imagery and in perception of acoustic stimuli. Furthermore we predicted (iii) that temporal regions show an asymmetric trend due to the different specialization of the temporal lobes in processing speech and non-speech sounds. Finally we sought evidence supporting the notion that short-term training is sufficient to drive top-down activity in brain regions that are not normally recruited by sensory-induced bottom-up processing. Eighteen non-musicians took part in a 30-channel EEG session that investigated spatio-temporal dynamics of auditory imagery of "consonant-vowel" (CV) syllables and piano triads. To control for conditioning effects, we split the volunteers into two matched groups comprising the same conditions (visual, auditory or bimodal stimulation) presented in a slightly different serial order. Furthermore the study presents electromagnetic source localization (LORETA) of perception and imagery of CV and piano stimuli. Our results imply that auditory imagery elicited similar electrophysiological effects at an early stage (N1/P2) as auditory stimulation. However, we found an additional LPC following the N1/P2 for auditory imagery only. Source estimation evinced bilateral engagement of anterior temporal cortex, which was generally stronger for imagery of music relative to imagery of speech. While we did not observe lateralized activity for the imagery of syllables we noted significantly increased rightward activation over the anterior supratemporal plane for musical imagery. Thus, we conclude that short-term top-down training based
Kohlrausch, Armin; van de Par, Steven
In our natural environment, we simultaneously receive information through various sensory modalities. The properties of these stimuli are coupled by physical laws, so that, e.g., auditory and visual stimuli caused by the same event have a fixed temporal relation when reaching the observer. In speech, for example, visible lip movements and audible utterances occur in close synchrony, which contributes to the improvement of speech intelligibility under adverse acoustic conditions. Research into multisensory perception is currently being performed in a great variety of experimental contexts. This paper attempts to give an overview of the typical research areas dealing with audio-visual interaction and integration, bridging the range from cognitive psychology to applied research for multimedia applications. Issues of interest are the sensitivity to asynchrony between audio and video signals, the interaction between audio-visual stimuli with discrepant spatial and temporal rate information, crossmodal effects in attention, audio-visual interactions in speech perception, and the combined perceived quality of audio-visual stimuli.
Repp, Bruno H
Auditory stream segregation can occur when tones of different pitch (A, B) are repeated cyclically: The larger the pitch separation and the faster the tempo, the more likely perception of two separate streams is to occur. The present study assessed stream segregation in perceptual and sensorimotor tasks, using identical ABBABB ... sequences. The perceptual task required detection of single phase-shifted A tones; this was expected to be facilitated by the presence of B tones unless segregation occurred. The sensorimotor task required tapping in synchrony with the A tones; here the phase correction response (PCR) to shifted A tones was expected to be inhibited by B tones unless segregation occurred. Two sequence tempi and three pitch separations (2, 10, and 48 semitones) were used with musically trained participants. Facilitation of perception occurred only at the smallest pitch separation, whereas the PCR was reduced equally at all separations. These results indicate that auditory action control is immune to perceptual stream segregation, at least in musicians. This may help musicians coordinate with diverse instruments in ensemble playing.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Thomas, N.; Hayward, M.; Peters, E; van der Gaag, M.; Bentall, R.P.; Jenner, J.; Strauss, C.; Sommer, I.E.; Johns, L.C.; Varese, F.; Gracia-Montes, J.M.; Waters, F.; Dodgson, G.; McCarthy-Jones, S.
This report from the International Consortium on Hallucinations Research considers the current status and future directions in research on psychological therapies targeting auditory hallucinations (hearing voices). Therapy approaches have evolved from behavioral and coping-focused interventions,
Favrot, Sylvain Emmanuel
To systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory environments (VAEs) with an array of loudspeakers. The LoRA system combines state-of-the-art acoustic room models with sound-field reproduction techniques. Limitations of these two techniques were taken into consideration together with the limitations of the human auditory system to localize sounds...
Fu, Ying; Chen, Yuan; Xi, Xin; Hong, Mengdi; Chen, Aiting; Wang, Qian; Wong, Lena
To investigate the development of early auditory capability and speech perception in prelingual deaf children after cochlear implantation, and to study the feasibility of currently available Chinese assessment instruments for the evaluation of early auditory skill and speech perception in hearing-impaired children. A total of 83 children with severe-to-profound prelingual hearing impairment participated in this study. Participants were divided into four groups according to the age at surgery: A (1-2 years), B (2-3 years), C (3-4 years) and D (4-5 years). The auditory skill and speech perception ability of CI children were evaluated by trained audiologists using the infant-toddler/meaningful auditory integration scale (IT-MAIS/MAIS) questionnaire, the Mandarin Early Speech Perception (MESP) test and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The questionnaires were administered in face-to-face interviews with the parents or guardians. Each child was assessed before the operation and 3, 6, and 12 months after switch-on. After cochlear implantation, early postoperative auditory development and speech perception gradually improved. All MAIS/IT-MAIS scores showed a similar increasing trend with rehabilitation duration (F=5.743, P=0.007). Preoperative and postoperative MAIS/IT-MAIS scores of children in age group C (3-4 years) were higher than those of the other groups. Children who had longer hearing aid experience before the operation demonstrated higher MAIS/IT-MAIS scores than those with little or no hearing aid experience (F=4.947, P=0.000). The MESP test showed that children were not able to perceive speech as well as they could detect speech signals. However, as the duration of CI use increased, speech perception ability also improved substantially. Still, only about 40% of the subjects could be evaluated using the most difficult subtest of the MPSI in quiet at 12 months after switch-on. As MCR decreased, the proportion of children who could be tested
Fujioka, Takako; Ross, Bernhard; Trainor, Laurel J
Dancing to music involves synchronized movements, which can be at the basic beat level or higher hierarchical metrical levels, as in a march (groups of two basic beats, one-two-one-two …) or waltz (groups of three basic beats, one-two-three-one-two-three …). Our previous human magnetoencephalography studies revealed that the subjective sense of meter influences auditory evoked responses phase locked to the stimulus. Moreover, the timing of metronome clicks was represented in periodic modulation of induced (non-phase locked) β-band (13-30 Hz) oscillation in bilateral auditory and sensorimotor cortices. Here, we further examine whether acoustically accented and subjectively imagined metric processing in march and waltz contexts during listening to isochronous beats were reflected in neuromagnetic β-band activity recorded from young adult musicians. First, we replicated previous findings of beat-related β-power decrease at 200 ms after the beat followed by a predictive increase toward the onset of the next beat. Second, we showed that the β decrease was significantly influenced by the metrical structure, as reflected by differences across beat type for both perception and imagery conditions. Specifically, the β-power decrease associated with imagined downbeats (the count "one") was larger than that for both the upbeat (preceding the count "one") in the march, and for the middle beat in the waltz. Moreover, beamformer source analysis for the whole brain volume revealed that the metric contrasts involved auditory and sensorimotor cortices; frontal, parietal, and inferior temporal lobes; and cerebellum. We suggest that the observed β-band activities reflect a translation of timing information to auditory-motor coordination. With magnetoencephalography, we examined β-band oscillatory activities around 20 Hz while participants listened to metronome beats and imagined musical meters such as a march and waltz. We demonstrated that β-band event
Lassen, N A; Friberg, L
Specific types of brain activity, such as sensory perception (auditory, somatosensory, or visual) or the performance of movements, are accompanied by increases of blood flow and oxygen consumption in the cortical areas involved in performing the respective tasks. The activation patterns observed by mea...
Millman, Rebecca E.; Mattys, Sven L.
Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the…
Zhang, Juan; McBride-Chang, Catherine
A 4-stage developmental model, in which auditory sensitivity is fully mediated by speech perception at both the segmental and suprasegmental levels, which are further related to word reading through their associations with phonological awareness, rapid automatized naming, verbal short-term memory and morphological awareness, was tested with…
Morrison, James A.; Michael, William B.
A Spanish auditory perception test, La Prueba de Analisis Auditivo, was developed and administered to 158 Spanish-speaking Latino children, kindergarten through grade 3. Psychometric data for the test are presented, including its relationship to SOBER, a criterion-referenced Spanish reading measure. (Author/BW)
Brooker, Jake S
Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide effective means of calming gorillas, or for facilitating active behavior. Zoo Biol. 35:398-408, 2016. © 2016 Wiley Periodicals, Inc.
Skipper, Jeremy I
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
van Dorp Schuitman, Jasper; de Vries, Diemer; Lindau, Alexander
Acousticians generally assess the acoustic qualities of a concert hall or any other room using impulse response-based measures such as the reverberation time, clarity index, and others. These parameters are used to predict perceptual attributes related to the acoustic qualities of the room. Various studies show that these physical measures are not able to predict the related perceptual attributes sufficiently well under all circumstances. In particular, it has been shown that physical measures are dependent on the state of occupation, are prone to exaggerated spatial fluctuation, and suffer from lacking discrimination regarding the kind of acoustic stimulus being presented. Accordingly, this paper proposes a method for the derivation of signal-based measures aiming at predicting aspects of room acoustic perception from content specific signal representations produced by a binaural, nonlinear model of the human auditory system. Listening tests were performed to test the proposed auditory parameters for both speech and music. The results look promising; the parameters correlate with their corresponding perceptual attributes in most cases.
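The clarity index referred to above is an early-to-late energy ratio of the room impulse response (C80 uses an 80-ms split, C50 a 50-ms split). A simplified sketch of the computation; real measurements band-filter the response and time-align to the direct-sound arrival, which is omitted here:

```python
import math

def clarity_db(ir, fs, t_early=0.080):
    # Clarity index: 10*log10(early energy / late energy), splitting the
    # impulse response at t_early seconds (0.080 s -> C80, 0.050 s -> C50)
    split = int(round(t_early * fs))
    early = sum(x * x for x in ir[:split])
    late = sum(x * x for x in ir[split:])
    return 10 * math.log10(early / late)
```

Such impulse-response-based measures are exactly the kind of physical parameters that the paper argues cannot, on their own, fully predict the corresponding perceptual attributes.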
Putze, Felix; Hesslinger, Sebastian; Tse, Chun-Yu; Huang, YunYing; Herff, Christian; Guan, Cuntai; Schultz, Tanja
For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup we used for collection of our data corpus with 12 subjects. On this data, we performed cross-validation evaluation, of which we report accuracy for different classification conditions. The results show that the subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion with accuracy of up to 94.6 and 86.7%, respectively. We also look at the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy. PMID:25477777
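The final claim, that fusing classifiers built on different features raises accuracy, can be illustrated with a generic late-fusion scheme. The abstract does not specify the fusion rule used, so the weighted probability average below is an assumption for illustration only:

```python
def fuse(p_a, p_b, w=0.5):
    # Late fusion: weighted average of two classifiers' class probabilities
    # (here, hypothetically, one EEG-based and one fNIRS-based classifier)
    return [w * a + (1.0 - w) * b for a, b in zip(p_a, p_b)]

def predict(probs, labels=("visual", "auditory")):
    # Pick the class with the highest fused probability
    return labels[max(range(len(probs)), key=probs.__getitem__)]

fused = fuse([0.6, 0.4], [0.2, 0.8])   # approximately [0.4, 0.6]
```

Averaging probabilities tends to help when the two classifiers make partially uncorrelated errors, which is plausible for signals as different as EEG and fNIRS.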
Experiencing nature, landscape and heritage, first chapter. Karmanov provides a general overview of methods of studying landscape perceptions, illustrated by a wide variety of mainly experimental psychological research.
Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on
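The overall-intensity cue that listeners appear to rely on follows, in the free field, the inverse-square law: level falls by about 6 dB for each doubling of distance. A hedged sketch of that baseline relationship (in a reverberant room the actual decay is shallower, which is part of what makes ADP there non-trivial):

```python
import math

def free_field_level_change_db(d_ref, d):
    # Inverse-square law: change in SPL (dB) when moving from a reference
    # distance d_ref to distance d in the free field
    return -20.0 * math.log10(d / d_ref)

drop = free_field_level_change_db(1.0, 2.0)   # about -6.02 dB per doubling
```

The paper's modal-resonance finding amounts to the observation that, at low frequencies in a room, received intensity no longer decreases monotonically with distance as this idealization predicts.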
He, Shuman; Grose, John H; Teagle, Holly F B; Woodard, Jennifer; Park, Lisa R; Hatch, Debora R; Buchman, Craig A
This study aimed (1) to investigate the feasibility of recording the electrically evoked auditory event-related potential (eERP), including the onset P1-N1-P2 complex and the electrically evoked auditory change complex (EACC) in response to temporal gaps, in children with auditory neuropathy spectrum disorder (ANSD); and (2) to evaluate the relationship between these measures and speech-perception abilities in these subjects. Fifteen ANSD children who are Cochlear Nucleus device users participated in this study. For each subject, the speech-processor microphone was bypassed and the eERPs were elicited by direct stimulation of one mid-array electrode (electrode 12). The stimulus was a train of biphasic current pulses 800 msec in duration. Two basic stimulation conditions were used to elicit the eERP. In the no-gap condition, the entire pulse train was delivered uninterrupted to electrode 12, and the onset P1-N1-P2 complex was measured relative to the stimulus onset. In the gapped condition, the stimulus consisted of two pulse train bursts, each being 400 msec in duration, presented sequentially on the same electrode and separated by one of five gaps (i.e., 5, 10, 20, 50, and 100 msec). Open-set speech-perception ability of these subjects with ANSD was assessed using the phonetically balanced kindergarten (PBK) word lists presented at 60 dB SPL, using monitored live voice in a sound booth. The eERPs were recorded from all subjects with ANSD who participated in this study. There were no significant differences in test-retest reliability, root mean square amplitude or P1 latency for the onset P1-N1-P2 complex between subjects with good (>70% correct on PBK words) and poorer speech-perception performance. In general, the EACC showed less mature morphological characteristics than the onset P1-N1-P2 response recorded from the same subject. There was a robust correlation between the PBK word scores and the EACC thresholds for gap detection. Subjects with poorer speech-perception
Evans, Samuel; Davis, Matthew H.
How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form, at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. PMID:26157026
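Representational similarity analysis works by comparing dissimilarity structures. One common neural RDM is 1 minus the Pearson correlation between condition response patterns; a minimal sketch of that step follows (the searchlight, the comparison against model RDMs, and the statistics are omitted, and the toy patterns in the test are invented):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient of two equal-length vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - r for every pair of
    # condition patterns (e.g., voxel responses to each spoken syllable)
    n = len(patterns)
    return [[1.0 - pearson_r(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]
```

In the searchlight variant used here, an RDM like this is computed in a small sphere around every voxel and correlated with a model RDM (e.g., "same syllable identity" versus "same surface acoustics") to map where each representation lives.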
Grant, Ken W.; van Wassenhove, Virginie
Auditory-visual speech perception has been shown repeatedly to be both more accurate and more robust than auditory speech perception. Attempts to explain these phenomena usually treat acoustic and visual speech information (i.e., accessed via speechreading) as though they were derived from independent processes. Recent electrophysiological (EEG) studies, however, suggest that visual speech processes may play a fundamental role in modulating the way we hear. For example, both the timing and amplitude of auditory-specific event-related potentials as recorded by EEG are systematically altered when speech stimuli are presented audiovisually as opposed to auditorily. In addition, the detection of a speech signal in noise is more readily accomplished when accompanied by video images of the speaker's production, suggesting that the influence of vision on audition occurs quite early in the perception process. But the impact of visual cues on what we ultimately hear is not limited to speech. Our perceptions of loudness, timbre, and sound source location can also be influenced by visual cues. Thus, for speech and nonspeech stimuli alike, predicting a listener's response to sound based on acoustic engineering principles alone may be misleading. Examples of acoustic-visual interactions will be presented which highlight the multisensory nature of our hearing experience.
Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella
Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture-a fundamental type of hand gesture that marks speech prosody-might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.
Hoffman, Ralph E
Auditory/verbal hallucinations (AVHs) are comprised of spoken conversational speech seeming to arise from specific, nonself speakers. One hertz repetitive transcranial magnetic stimulation (rTMS) reduces excitability in the brain region stimulated. Studies utilizing 1-Hz rTMS delivered to the left temporoparietal cortex, a brain area critical to speech perception, have demonstrated statistically significant improvements in AVHs relative to sham stimulation. A novel mechanism of AVHs is proposed whereby dramatic pre-psychotic social withdrawal prompts neuroplastic reorganization by the "social brain" to produce spurious social meaning via hallucinations of conversational speech. Preliminary evidence supporting this hypothesis includes a very high rate of social withdrawal emerging prior to the onset of frank psychosis in patients who develop schizophrenia and AVHs. Moreover, reduced AVHs elicited by temporoparietal 1-Hz rTMS are likely to reflect enhanced long-term depression. Some evidence suggests a loss of long-term depression following experimentally-induced deafferentation. Finally, abnormal cortico-cortical coupling is associated with AVHs and also is a common outcome of deafferentation. Auditory/verbal hallucinations (AVHs) of spoken speech or "voices" are reported by 60-80% of persons with schizophrenia at various times during the course of illness. AVHs are associated with high levels of distress, functional disability, and can lead to violent acts. Among patients with AVHs, these symptoms remain poorly or incompletely responsive to currently available treatments in approximately 25% of cases. For patients with AVHs who do respond to antipsychotic drugs, there is a very high likelihood that these experiences will recur in subsequent episodes. A more precise characterization of underlying pathophysiology may lead to more efficacious treatments.
Bidelman, Gavin M; Weiss, Michael W; Moreno, Sylvain; Alain, Claude
Musicianship is associated with neuroplastic changes in brainstem and cortical structures, as well as improved acuity for behaviorally relevant sounds including speech. However, further advance in the field depends on characterizing how neuroplastic changes in brainstem and cortical speech processing relate to one another and to speech-listening behaviors. Here, we show that subcortical and cortical neural plasticity interact to yield the linguistic advantages observed with musicianship. We compared brainstem and cortical neuroelectric responses elicited by a series of vowels that differed along a categorical speech continuum in amateur musicians and non-musicians. Musicians obtained steeper identification functions and classified speech sounds more rapidly than non-musicians. Behavioral advantages coincided with more robust and temporally coherent brainstem phase-locking to salient speech cues (voice pitch and formant information) coupled with increased amplitude in cortical-evoked responses, implying an overall enhancement in the nervous system's responsiveness to speech. Musicians' subcortical and cortical neural enhancements (but not behavioral measures) were correlated with their years of formal music training. Associations between multi-level neural responses were also stronger in musically trained listeners, and were better predictors of speech perception than in non-musicians. Results suggest that musicianship modulates speech representations at multiple tiers of the auditory pathway, and strengthens the correspondence of processing between subcortical and cortical areas to allow neural activity to carry more behaviorally relevant information. We infer that musicians have a refined hierarchy of internalized representations for auditory objects at both pre-attentive and attentive levels that supplies more faithful phonemic templates to decision mechanisms governing linguistic operations.
Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve
Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC®, www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC®). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks. Conclusions
Most, Tova; Aviner, Chen
This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust.…
Background and Aim: Dyslexia is the most common learning disability. One of the main factors contributing to this disability is impaired auditory perception, which causes many problems in education. We aimed to study the effect of auditory perception training on the reading performance of female students with dyslexia in the third grade of elementary school. Methods: Thirty-eight female students in the third grade of elementary schools in Khomeinishahr City, Iran, were selected by multistage cluster random sampling; of them, 20 students diagnosed as dyslexic by the Reading test and the Wechsler test were divided randomly into two equal groups, experimental and control. The experimental group received auditory perception training during ten 45-minute sessions, while no intervention was given to the control group. All participants were re-assessed with the Reading test after the intervention (pre- and post-test method). Data were analyzed by analysis of covariance. Results: The effect of auditory perception training on reading performance (81%) was significant (p<0.0001) for all subtests except the separate compound word test. Conclusion: Our findings confirm the hypothesis that auditory perception training affects students' functional reading, so auditory perception training seems to be necessary for students with dyslexia.
Bee, Mark A
The perceptual analysis of acoustic scenes involves binding together sounds from the same source and separating them from other sounds in the environment. In large social groups, listeners experience increased difficulty performing these tasks due to high noise levels and interference from the concurrent signals of multiple individuals. While a substantial body of literature on these issues pertains to human hearing and speech communication, few studies have investigated how nonhuman animals may be evolutionarily adapted to solve biologically analogous communication problems. Here, I review recent and ongoing work aimed at testing hypotheses about perceptual mechanisms that enable treefrogs in the genus Hyla to communicate vocally in noisy, multi-source social environments. After briefly introducing the genus and the methods used to study hearing in frogs, I outline several functional constraints on communication posed by the acoustic environment of breeding "choruses". Then, I review studies of sound source perception aimed at uncovering how treefrog listeners may be adapted to cope with these constraints. Specifically, this review covers research on the acoustic cues used in sequential and simultaneous auditory grouping, spatial release from masking, and dip listening. Throughout the paper, I attempt to illustrate how broad-scale, comparative studies of carefully considered animal models may ultimately reveal an evolutionary diversity of underlying mechanisms for solving cocktail-party-like problems in communication. Copyright © 2014 Elsevier B.V. All rights reserved.
Dickhaus, Britta; Mayer, Emeran A; Firooz, Nazanin; Stains, Jean; Conde, Francisco; Olivas, Teresa I; Fass, Ronnie; Chang, Lin; Mayer, Minou; Naliboff, Bruce D
Symptoms in irritable bowel syndrome (IBS) patients are sensitive to psychological stressors. These effects may operate through an enhanced responsiveness of the emotional motor system, a network of brain circuits that modulate arousal, viscerosomatic perception, and autonomic responses associated with emotional responses, including anxiety and anger. The aim of this study was to test the primary hypothesis that IBS patients show altered perceptual responses to rectal balloon distention during experimentally induced psychological stress compared with healthy control subjects. A total of 15 IBS patients (nine women and six men) and 14 healthy controls (seven women and seven men) were studied during two laboratory sessions: 1) a mild stress condition (dichotomous listening to two conflicting types of music), and 2) a control condition (relaxing nature sounds). The stress and relaxation auditory stimuli were delivered over a 10-min listening period preceding rectal distentions and during the rectal distentions but not during the distention rating process. Ratings of intensity and unpleasantness of the visceral sensations, subjective emotional responses, heart rate, and neuroendocrine measures (norepinephrine, cortisol, adrenocorticotropic hormone [ACTH], and prolactin) were obtained during the study. IBS patients, but not healthy controls, rated the 45-mm Hg visceral stimulus significantly higher in terms of intensity and unpleasantness during the stress condition compared with the relaxation condition. IBS patients also reported higher ratings of stress, anger, and anxiety during the stress compared with the relaxing condition, whereas controls had smaller and nonsignificant subjective responses. Heart rate measurements, but not other neuroendocrine stress measures, were increased under the stress condition in both groups. These findings confirm the hypothesis of altered stress-induced modulation of visceral perception in IBS patients.
Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H
Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.
Lim, Hubert H.; Lenarz, Thomas
The cochlear implant is considered one of the most successful neural prostheses to date, which was made possible by visionaries who continued to develop the cochlear implant through multiple technological and clinical challenges. However, patients without a functional auditory nerve or implantable cochlea cannot benefit from a cochlear implant. The focus of the paper is to review the development and translation of a new type of central auditory prosthesis for this group of patients, which is known as the auditory midbrain implant (AMI) and is designed for electrical stimulation within the inferior colliculus. The rationale and results for the first AMI clinical study using a multi-site single-shank array will be presented initially. Although the AMI has achieved encouraging results in terms of safety and improvements in lip-reading capabilities and environmental awareness, it has not yet provided sufficient speech perception. Animal and human data will then be presented to show that a two-shank AMI array can potentially improve hearing performance by targeting specific neurons of the inferior colliculus. Modifications to the AMI array design, stimulation strategy, and surgical approach have been made that are expected to improve hearing performance in the patients implanted with a two-shank array in an upcoming clinical trial funded by the National Institutes of Health. Positive outcomes from this clinical trial will motivate new efforts and developments toward improving central auditory prostheses for those who cannot sufficiently benefit from cochlear implants. PMID:25613994
Firszt, Jill B.; Reeder, Ruth M.; Holden, Timothy A.; Burton, Harold; Chole, Richard A.
Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, ...
Recent psychological research aimed at determining whether dynamic event perception is direct or mediated by cue-based inference convincingly demonstrates evidence of both modes of perception or apprehension. This work also shows that noise is involved in attaining any perceptual variable, whether it perfectly (invariantly) specifies or imperfectly (fallibly) indicates the value of a target or criterion variable. As such, event-perception researchers encounter both internal (sensory or inferential) and external ecological sources of noise or uncertainty, owing to the organism's possible use of imperfect or 'nonspecifying' variables (or cues) and cue-based inference. Because both sources play central roles in Egon Brunswik's theory of probabilistic functionalism and methodology of representative design, event-perception research will benefit by explicitly leveraging original Brunswikian and, more recent, neo-Brunswikian scientific resources. Doing so will result in a more coherent and powerful approach to perceptual and cognitive psychology than is currently displayed in the scientific literature.
Zheng, Yun; Soli, Sigfrid D; Tao, Yong; Xu, Ke; Meng, Zhaoli; Li, Gang; Wang, Kai; Zheng, Hong
The primary purpose of the current study was to evaluate early prelingual auditory development (EPLAD) and early speech perception longitudinally over the first year after cochlear implantation in Mandarin-speaking pediatric cochlear implant (CI) recipients. Outcome measures were designed to allow comparisons of outcomes with those of English-speaking pediatric CI recipients reported in previous research. A hierarchical outcome assessment battery designed to measure EPLAD and early speech perception was used to evaluate 39 pediatric CI recipients implanted between the ages of 1 and 6 years at baseline and 3, 6, and 12 months after implantation. The battery consists of the Mandarin Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS), the Mandarin Early Speech Perception (MESP) test, and the Mandarin Pediatric Speech Intelligibility (MPSI) test. The effects of age at implantation, duration of pre-implant hearing aid use, and Mandarin dialect exposure on performance were evaluated. EPLAD results were compared with the normal developmental trajectory and with results for English-speaking pediatric CI recipients. MESP and MPSI measures of early speech perception were compared with results for English-speaking recipients obtained with comparable measures. EPLAD, as measured with the ITMAIS/MAIS, was comparable in Mandarin- and English-speaking pediatric CI recipients. Both groups exceeded the normal developmental trajectory when hearing age in CI recipients and chronological age in normal-hearing children were equated. Evidence of significant EPLAD during pre-implant hearing aid use was observed, although at a more gradual rate than after implantation. Early development of speech perception, as measured with the MESP and MPSI tests, was also comparable for Mandarin- and English-speaking CI recipients throughout the first 12 months after implantation. Both Mandarin dialect exposure and the duration of pre-implant hearing aid use significantly affected measures of early speech
Fostick, Leah; Babkoff, Harvey; Zukerman, Gil
Purpose: To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Method: Fifty-five sleep-deprived young adults were compared with 29…
Most, Tova; Michaelis, Hilit
Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…
Zuk, J.; Bishop-Liebler, P.; Ozernov-Palchik, O.; Moore, E.; Overy, K.; Welch, G.; Gaab, N.
Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing ...
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Control participants (72), distributed across 10 age groups, served as a control group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also benefited from a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
Busby, P A; Tong, Y C; Clark, G M
The identification of consonants in (a)-C-(a) nonsense syllables, using a fourteen-alternative forced-choice procedure, was examined in 4 profoundly hearing-impaired children under five conditions: audition alone using hearing aids in free-field (A), vision alone (V), auditory-visual using hearing aids in free-field (AV1), auditory-visual with linear amplification (AV2), and auditory-visual with syllabic compression (AV3). In the AV2 and AV3 conditions, acoustic signals were binaurally presented by magnetic or acoustic coupling to the subjects' hearing aids. The syllabic compressor had a compression ratio of 10:1, and attack and release times were 1.2 ms and 60 ms. The confusion matrices were subjected to two analysis methods: hierarchical clustering and information transmission analysis using articulatory features. The same general conclusions were drawn on the basis of results obtained from either analysis method. The results indicated better performance in the V condition than in the A condition. In the three AV conditions, the subjects predominantly combined the acoustic parameter of voicing with the visual signal. No consistent differences were recorded across the three AV conditions. Syllabic compression did not, therefore, appear to have a significant influence on AV perception for these children. A high degree of subject variability was recorded for the A and three AV conditions, but not for the V condition.
Salamon, Elliott; Bernstein, Steven R; Kim, Seung-A; Kim, Minsun; Stefano, George B
The use of music as a method of relieving anxiety has been studied extensively by researchers from varying disciplines. The abundance of these reports focused on which genre of music best aided in the relief of stress. Little work has been performed in the area of auditory preference in an attempt to ascertain whether an individual's preferred music type aids in their anxiety reduction at levels greater than music that they have little or no propensity for. In the present report we seek to determine whether naive human subjects exposed to music of their preference show a decrease in anxiety, as measured by systolic and diastolic blood pressure values. We furthermore contrast these values to those obtained during non-preferred music listening. We found statistically significant reduction of anxiety levels only when subjects were exposed to their preferred musical selections. Students participating in the study already had knowledge of what genre of music would best relax them. It is our belief, that within the general population, many people do not have this self understanding. We conclude that music therapy may provide a mechanism for this self-understanding and subsequently help alleviate anxiety and stress.
Satoh, Masayuki; Takeda, Katsuhiko; Kuzuhara, Shigeki
There is fairly general agreement that the melody and the rhythm are the independent components of the perception of music. In the theory of music, the melody and harmony determine to which tonality the music belongs. It remains an unsettled question whether the tonality is also an independent component of the perception of music, or a by-product of the melody and harmony. We describe a patient with auditory agnosia and expressive amusia that developed after a bilateral infarction of the temporal lobes. We carried out a detailed examination of musical ability in the patient and in control subjects. Comparing with a control population, we identified the following impairments in music perception: (a) discrimination of familiar melodies; (b) discrimination of unfamiliar phrases, and (c) discrimination of isolated chords. His performance in pitch discrimination and tonality were within normal limits. Although intrasubject statistical analysis revealed significant difference only between tonality task and unfamiliar phrase performance, comparison with control subjects suggested a dissociation between a preserved tonality analysis and impairment of perception of melody and chords. By comparing the results of our patient with those in the literature, we may say that there is a double dissociation between the tonality and the other components. Thus, it seems reasonable to suppose that tonality is an independent component of music perception. Based on our present and previous studies, we proposed the revised version of the cognitive model of musical processing in the brain. Copyright 2007 S. Karger AG, Basel.
OBJECTIVE: An experienced sonographer can, by listening to the Doppler audio signals, perceive various timbres that distinguish different types of umbilical artery flow despite an unchanged pulsatility index (PI). Our aim was to develop an objective measure of the Doppler audio signals recorded from the fetoplacental circulation in a sheep model. METHODS: Various degrees of pathological flow velocity waveforms in the umbilical artery, similar to those in complicated human pregnancies, were induced by microsphere embolization of the placental bed (embolization model; 7 lamb fetuses, 370 Doppler recordings) or by fetal hemodilution (anemia model; 4 lamb fetuses, 184 recordings). A subjective 11-step operator auditory scale (OAS) was related to conventional Doppler parameters, PI and time-averaged mean velocity (TAM), and to sound frequency analysis of the Doppler signals (the sound frequency with the maximum energy content [MAXpeak] and the frequency band at maximum level minus 15 dB [MAXpeak-15 dB]) over several heart cycles. RESULTS: We found a negative correlation between the OAS and PI: median Rho −0.73 (range −0.35 to −0.94) and −0.68 (range −0.57 to −0.78) in the two lamb models, respectively. There was a positive correlation between OAS and TAM in both models: median Rho 0.80 (range 0.58-0.95) and 0.90 (range 0.78-0.95), respectively. A strong correlation was found between TAM and the results of the sound spectrum analysis; in the embolization model the median r was 0.91 (range 0.88-0.97) for MAXpeak and 0.91 (range 0.82-0.98) for MAXpeak-15 dB. In the anemia model, the corresponding values were 0.92 (range 0.78-0.96) and 0.96 (range 0.89-0.98), respectively. CONCLUSION: Audio-spectrum analysis reflects the subjective perception of Doppler sound signals in the umbilical artery and has a strong correlation with TAM velocity. This information might be of importance for the clinical management of complicated pregnancies as an addition to conventional Doppler parameters.
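The conventional Doppler indices mentioned above (PI and TAM) follow directly from the maximum-velocity envelope: the (Gosling) pulsatility index is the peak-systolic minus end-diastolic velocity divided by the time-averaged mean velocity. A minimal sketch, with a synthetic envelope standing in for real Doppler data (variable names are illustrative, not from the study):

```python
import numpy as np

def pulsatility_index(velocity):
    """Gosling pulsatility index over one or more cardiac cycles.
    velocity: sampled maximum-velocity envelope (cm/s)."""
    tam = velocity.mean()  # time-averaged mean velocity (TAM)
    return (velocity.max() - velocity.min()) / tam

# Hypothetical umbilical-artery envelope: systolic peaks over a diastolic baseline
t = np.linspace(0, 2, 1000)                                       # two seconds of signal
envelope = 30 + 25 * np.clip(np.sin(2 * np.pi * 2 * t), 0, None)  # cm/s
print(round(pulsatility_index(envelope), 2))
```

The study's point is that two envelopes can share the same PI (same max, min, and mean) while sounding different, which is why a spectral measure of the audio signal adds information beyond PI and TAM.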
Moradi, Shahram; Lidestam, Björn; Saremi, Amin; Rönnberg, Jerker
This study aimed to measure the initial portion of signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Attention Test were also administered to measure speech-in-noise ability, working memory and attentional capacities of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.
Van Rheenen, Tamsyn E; Rossell, Susan L
Accurate emotion processing is critical to understanding the social world. Despite growing evidence of facial emotion processing impairments in patients with bipolar disorder (BD), comprehensive investigations of emotional prosodic processing are limited. The existing (albeit sparse) literature is inconsistent at best, and confounded by failures to control for the effects of gender or low-level sensory-perceptual impairments. The present study sought to address this paucity of research by utilizing a novel behavioural battery to comprehensively investigate the auditory-prosodic profile of BD. Fifty BD patients and 52 healthy controls completed tasks assessing emotional and linguistic prosody, and sensitivity for discriminating tones that deviate in amplitude, duration, and pitch. BD patients were less sensitive than their control counterparts in discriminating amplitude and durational cues but not pitch cues or linguistic prosody. They also demonstrated impaired ability to recognize happy intonations, although this was specific to males with the disorder. The recognition of happy in the patient group was correlated with pitch and amplitude sensitivity in female patients only. The small sample size of patients after stratification by current mood state prevented us from conducting subgroup comparisons between symptomatic, euthymic, and control participants to explicitly examine the effects of mood. Our findings indicate the existence of a female advantage for the processing of emotional prosody in BD, specifically for the processing of happy. Although male BD patients were impaired in their ability to recognize happy prosody, this was unrelated to reduced tone discrimination sensitivity. This study indicates the importance of examining both gender and low-order sensory-perceptual capacity when examining emotional prosody. © 2013 Elsevier B.V. All rights reserved.
Hemanth Narayan Shetty
Context: Deep band modulation (DBM) improves speech perception in individuals with learning disability and in older adults, both of whom show temporal impairment. However, it is unclear how DBM phrases are perceived in quiet and in noise by individuals with auditory neuropathy spectrum disorder (ANSD) and sensorineural hearing loss (SNHL), as these individuals also suffer from temporal impairment. Aim: To study the effect of DBM and noise on phrase perception in individuals with normal hearing, SNHL, and ANSD. Settings and Design: A factorial design was used to study deep-band-modulated phrase perception in quiet and in noise. Materials and Methods: Twenty participants in each group (normal, SNHL, and ANSD) were included to assess phrase perception on four lists each of unprocessed (UP) and DBM phrases at different signal-to-noise ratios (SNRs) (−1, −3, and −5 dB SNR), presented at the most comfortable level. In addition, temporal processing was determined by a gap detection threshold test. Statistical Analysis: A mixed analysis of variance was used to investigate main and interaction effects of condition, noise, and group. Further, a Pearson product-moment correlation was used to document the relationship between phrase perception and temporal processing among study participants in each experimental condition. Results: In each group, a significant improvement was observed in DBM phrase perception over UP phrase recognition in quiet and in noise. Although a significant improvement was observed, the benefit of DBM over UP was negligible at −5 dB SNR in both the SNHL and ANSD groups. In addition, as expected, phrase perception in each condition was significantly better in normal-hearing listeners than in SNHL, followed by ANSD. Further, in both atypical groups, a strong negative correlation was found between phrase perception and gap detection threshold in each experimental condition. Conclusion: This
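Mixing stimuli at fixed SNRs such as −1, −3, and −5 dB can be sketched as below; this is an illustrative numpy routine under the usual power-ratio definition of SNR (SNR_dB = 10·log10(P_speech/P_noise)), not the study's stimulus-preparation code, and the 220 Hz tone merely stands in for a recorded phrase.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db,
    then return the mixture (illustrative sketch)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# Example: a 1 s synthetic "phrase" mixed at the study's three SNRs
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)      # stand-in for a phrase
noise = rng.standard_normal(fs)
mixtures = {snr: mix_at_snr(speech, noise, snr) for snr in (-1, -3, -5)}
```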
García-Pérez, Miguel A; Alcalá-Quintana, Rocío
Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.
Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a wrong stimulus or, better put, perception in the absence of a stimulus. Here we discuss four definitions of hallucination: (1) perceiving a stimulus without the presence of any object; (2) hallucination proper: wrong perceptions that are not falsifications of real perception, although they manifest as a new object and occur alongside, and synchronously with, a real perception; (3) hallucination as an out-of-body perception that has no correspondence with a real object; and (4) in a stricter sense, perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.
Karen L. Hanson
Informationists at NYU Health Sciences Libraries (NYUHSL) successfully applied for an NLM supplement to a translational research grant obtained by PIs in the NYU School of Medicine Department of Otolaryngology titled "Clinical Management of Cochlear Implant Patients with Contralateral Hearing Aids". The grant involves development of evidence-based guidelines for post-implant management of patients with bimodal cochlear implants. The PIs are also seeking to acquire new data sets to merge with grant-generated data. In light of the shifting data requirements, and the potential introduction of additional datasets, informationists will evaluate and restructure the data model and data entry tool. Report queries will be refined for the new data model, and options for a query tool appropriate for users unfamiliar with query languages will be assessed and implemented. The services offered through this supplement represent the deepest and most detailed data management support offered by NYUHSL to date. The components of the supplement are being analyzed as a pilot of a broader offering of these data management services.
Skipper, Jeremy I
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded...
McLachlan, Neil; Wilson, Sarah
The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…
Vercammen, A.; de Haan, E. H. F.; Aleman, A.
Background. It has recently been suggested that auditory hallucinations are the result of a criterion shift when deciding whether or not a meaningful signal has emerged. The approach proposes that a liberal criterion may result in increased false-positive identifications, without additional
Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory
Stevens, Catherine J
Experimental investigations of cross-cultural music perception and cognition reported during the past decade are described. As globalization and Western music homogenize the world musical environment, it is imperative that diverse music and musical contexts are documented. Processes of music perception include grouping and segmentation, statistical learning and sensitivity to tonal and temporal hierarchies, and the development of tonal and temporal expectations. The interplay of auditory, visual, and motor modalities is discussed in light of synchronization and the way music moves via emotional response. Further research is needed to test deep-rooted psychological assumptions about music cognition with diverse materials and groups in dynamic contexts. Although empirical musicology provides keystones to unlock musical structures and organization, the psychological reality of those theorized structures for listeners and performers, and the broader implications for theories of music perception and cognition, awaits investigation. Copyright © 2012 Cognitive Science Society, Inc.
DEC Screen Management routines (SMG$ Run Time Library). Finally, we developed a program to assist the user in editing a file which contains a list of... The advantage of the demisyllable unit over the whole syllable is a large reduction in the size of the reference inventory. One study (46) shows that a reduction by about a... A perceptual aspect is implied. It is within the broad framework described above that the auditory-perceptual theory will be considered. But before beginning
Bao, Yan; Szymaszek, Aneta; Wang, Xiaoying; Oron, Anna; Pöppel, Ernst; Szelag, Elzbieta
The close relationship between temporal perception and speech processing is well established. The present study focused on the specific question whether the speech environment could influence temporal order perception in subjects whose language backgrounds are distinctively different, i.e., Chinese (tonal language) vs. Polish (non-tonal language). Temporal order thresholds were measured for both monaurally presented clicks and binaurally presented tone pairs. Whereas the click experiment showed similar order thresholds for the two language groups, the experiment with tone pairs resulted in different observations: while Chinese demonstrated better performance in discriminating the temporal order of two "close frequency" tone pairs (600 Hz and 1200 Hz), Polish subjects showed a reversed pattern, i.e., better performance for "distant frequency" tone pairs (400 Hz and 3000 Hz). These results indicate on the one hand a common temporal mechanism for perceiving the order of two monaurally presented stimuli, and on the other hand neuronal plasticity for perceiving the order of frequency-related auditory stimuli. We conclude that the auditory brain is modified with respect to temporal processing by long-term exposure to a tonal or a non-tonal language. As a consequence of such an exposure different cognitive modes of operation (analytic vs. holistic) are selected: the analytic mode is adopted for "distant frequency" tone pairs in Chinese and for "close frequency" tone pairs in Polish subjects, whereas the holistic mode is selected for "close frequency" tone pairs in Chinese and for "distant frequency" tone pairs in Polish subjects, reflecting a double dissociation of function. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Kluizenaar, Y. de; Matsui, T.
With the aim to identify recent research achievements, current trends in research, remaining gaps of knowledge and priority areas of future research in the field of non-auditory health effects of noise, recent research progress was reviewed. A search was performed in PubMed (search terms “noise AND
Richards, Susan; Goswami, Usha
Purpose: We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in…
Albouy, Philippe; Mattout, Jeremie; Bouet, Romain; Maby, Emmanuel; Sanchez, Gaetan; Aguera, Pierre-Emmanuel; Daligault, Sebastien; Delpuech, Claude; Bertrand, Olivier; Caclin, Anne; Tillmann, Barbara
Congenital amusia is a lifelong disorder of music perception and production. The present study investigated the cerebral bases of impaired pitch perception and memory in congenital amusia using behavioural measures, magnetoencephalography and voxel-based morphometry. Congenital amusics and matched control subjects performed two melodic tasks (a…
Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina
Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the processing of nouns during an auditory lexical decision task. Participants came from three populations differing in their visual-perceptual experience: congenitally blind persons, word-color synesthetes, and sighted non-synesthetes. Specifically, three features with potential relevance to concreteness were manipulated: sensory perception, emotionality, and Husserlian lifeworld, a concept related to the inner versus the outer world of the self. In addition to a classical concreteness effect, our results revealed a significant effect of lifeworld: words that are closely linked to the internal states of humans were processed faster than words referring to the outside world. When lifeworld was introduced as predictor, there was no effect of emotionality. Concerning participants' perceptual experience, an interaction between participant group and item characteristics was found: the effects of both concreteness and lifeworld were more pronounced for blind compared to sighted participants. We will discuss the results in the context of embodied semantics, and we will propose an approach to concreteness based on the individual's bodily experience and the relatedness of a given concept to the self.
Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos
Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising potential for application to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole-brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but no such effects were observed during NBB. Altogether, these results reveal the nature of functional associations at the whole-brain level during binaural beat perception and demonstrate the usefulness of MIC in characterizing interregional neural dependencies.
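The binaural-beat stimulus itself (two sinusoids of slightly different frequency, one per ear) is straightforward to construct. The sketch below is illustrative only: the 440 Hz carrier is an assumption (the abstract does not state the carriers used), and the 8 Hz offset is chosen to fall in the alpha range the study highlights.

```python
import numpy as np

def binaural_beat(f_base, f_delta, dur=2.0, fs=44100):
    """Stereo signal: f_base Hz to the left ear, f_base + f_delta Hz to
    the right. The listener perceives an illusory beat at f_delta Hz."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_base * t)
    right = np.sin(2 * np.pi * (f_base + f_delta) * t)
    return np.stack([left, right], axis=1)   # shape (samples, 2)

# An 8 Hz offset lies in the classical alpha range (8-12 Hz)
sig = binaural_beat(440.0, 8.0)
```

A matched NBB control is obtained by passing `f_delta=0.0`, which presents the same frequency to both ears.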
Kellerman, Gabriella R; Fan, Jin; Gorman, Jack M
Recently, findings on a wide range of auditory abnormalities among individuals with autism have been reported. To date, functional distinctions among these varied findings are poorly established. Such distinctions should be of interest to clinicians and researchers alike given their potential therapeutic and experimental applications. This review suggests three general trends among these findings as a starting point for future analyses. First, studies of auditory perception of linguistic and social auditory stimuli among individuals with autism generally have found impaired perception versus normal controls. Such findings may correlate with impaired language and communication skills and social isolation observed among individuals with autism. Second, studies of auditory perception of pitch and music among individuals with autism generally have found enhanced perception versus normal controls. These findings may correlate with the restrictive and highly focused behaviors observed among individuals with autism. Third, findings on the auditory perception of non-linguistic, non-musical stimuli among autism patients resist any generalized conclusions. Ultimately, as some researchers have already suggested, the distinction between impaired global processing and enhanced local processing may prove useful in making sense of apparently discordant findings on auditory abnormalities among individuals with autism.
Hubbard, Amy L; Wilson, Stephen M.; Callan, Daniel E; Dapretto, Mirella
Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neu...
Herrmann, Björn; Obleser, Jonas; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D
In language processing, the relative contribution of early sensory and higher cognitive brain areas is still an open issue. A recent controversial hypothesis proposes that sensory cortices show sensitivity to syntactic processes, whereas other studies suggest a wider neural network outside sensory regions. The goal of the current event-related fMRI study is to clarify the contribution of sensory cortices in auditory syntactic processing in a 2 × 2 design. Two-word utterances were presented auditorily and varied both in perceptual markedness (presence or absence of an overt word category marking "-t"), and in grammaticality (syntactically correct or incorrect). A multivariate pattern classification approach was applied to the data, flanked by conventional cognitive subtraction analyses. The combination of methods and the 2 × 2 design revealed a clear picture: The cognitive subtraction analysis found initial syntactic processing signatures in a neural network including the left IFG, the left aSTG, the left superior temporal sulcus (STS), as well as the right STS/STG. Classification of local multivariate patterns indicated the left-hemispheric regions in IFG, aSTG, and STS to be more syntax-specific than the right-hemispheric regions. Importantly, auditory sensory cortices were only sensitive to the overt perceptual marking, but not to the grammaticality, speaking against syntax-inflicted sensory cortex modulations. Instead, our data provide clear evidence for a distinction between regions involved in pure perceptual processes and regions involved in initial syntactic processes. Copyright © 2011 Wiley Periodicals, Inc.
There is a wide range of acoustic and visual variability across different talkers and different speaking contexts. Listeners with normal hearing accommodate that variability in ways that facilitate efficient perception, but it is not known whether listeners with cochlear implants can do the same. In this study, listeners with normal hearing (NH) and listeners with cochlear implants (CIs) were tested for accommodation to auditory and visual phonetic contexts created by gender-driven speech differences as well as vowel coarticulation and lip rounding in both consonants and vowels. Accommodation was measured as the shifting of perceptual boundaries between /s/ and /ʃ/ sounds in various contexts, as modeled by mixed-effects logistic regression. Owing to the spectral contrasts thought to underlie these context effects, CI listeners were predicted to perform poorly, but showed considerable success. Listeners with cochlear implants not only showed sensitivity to auditory cues to gender, they were also able to use visual cues to gender (i.e., faces) as a supplement or proxy for information in the acoustic domain, in a pattern that was not observed for listeners with normal hearing. Spectrally degraded stimuli heard by listeners with normal hearing generally did not elicit strong context effects, underscoring the limitations of noise vocoders and/or the importance of experience with electric hearing. Visual cues for consonant lip rounding and vowel lip rounding were perceived in a manner consistent with coarticulation and were generally used more heavily by listeners with CIs. Results suggest that listeners with cochlear implants are able to accommodate various sources of acoustic variability either by attending to appropriate acoustic cues or by inferring them via the visual signal.
Kost, Rhonda G; Lee, Laura M; Yessis, Jennifer; Coller, Barry S; Henderson, David K
Participants' perceptions of their research experiences provide valuable measures of ethical treatment, yet no validated instruments exist to measure these experiences. We conducted focus groups of research participants and professionals as the initial step in developing a validated instrument. Research participants enrolled in 12 focus groups, consisting of: (1) individuals with disorders undergoing interventions; (2) in natural history studies; or (3) healthy volunteers. Research professionals participated in six separate groups of: (1) institutional review board members, ethicists, and Research Subject Advocates; (2) research nurses/coordinators; or (3) investigators. Focus groups used standard methodologies. Eighty-five participants and 29 professionals enrolled at eight academic centers. Altruism and personal relevance of the research were commonly identified motivators; financial compensation was less commonly mentioned. Participants were satisfied with informed consent processes but disappointed if not provided test results, or study outcomes. Positive relationships with research teams were valued highly. Research professionals were concerned about risks, undue influence, and informed consent. Participants join studies for varied, complex reasons, notably altruism and personal relevance. They value staff relationships, health gains, new knowledge, and compensation, and expect professionalism and good organization. On the basis of these insights, we propose specific actions to enhance participant recruitment, retention, and satisfaction. © 2011 Wiley Periodicals, Inc.
Kost, Rhonda G.; Lee, Laura M.; Yessis, Jennifer; Coller, Barry S.; Henderson, David K.
Abstract Introduction: Participants’ perceptions of their research experiences provide valuable measures of ethical treatment, yet no validated instruments exist to measure these experiences. We conducted focus groups of research participants and professionals as the initial step in developing a validated instrument. Methods: Research participants enrolled in 12 focus groups, consisting of: (1) individuals with disorders undergoing interventions; (2) in natural history studies; or (3) healthy volunteers. Research professionals participated in six separate groups of: (1) institutional review board members, ethicists, and Research Subject Advocates; (2) research nurses/coordinators; or (3) investigators. Focus groups used standard methodologies. Results: Eighty‐five participants and 29 professionals enrolled at eight academic centers. Altruism and personal relevance of the research were commonly identified motivators; financial compensation was less commonly mentioned. Participants were satisfied with informed consent processes but disappointed if not provided test results, or study outcomes. Positive relationships with research teams were valued highly. Research professionals were concerned about risks, undue influence, and informed consent. Conclusions: Participants join studies for varied, complex reasons, notably altruism and personal relevance. They value staff relationships, health gains, new knowledge, and compensation, and expect professionalism and good organization. On the basis of these insights, we propose specific actions to enhance participant recruitment, retention, and satisfaction. Clin Trans Sci 2011; Volume 4: 403–413 PMID:22212221
Viswanathan, Navin; Magnuson, James S.; Fowler, Carol A.
According to one approach to speech perception, listeners perceive speech by applying general pattern matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different…
Neuhoff, John G.; Planisek, Rianna; Seifritz, Erich
In 4 experiments, the authors examined sex differences in audiospatial perception of sounds that moved toward and away from the listener. Experiment 1 showed that both men and women underestimated the time-to-arrival of full-cue looming sounds. However, this perceptual bias was significantly stronger among women than among men. In Experiment 2,…
Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph
Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…
Luque, David; Luque, Juan L.; Lopez-Zamora, Miguel
The study examined whether individual differences in the quality of phonological representations, measured by a categorical perception task (CP), are related with the use of phonological information in a lexical decision pseudohomophone task. In addition, the lexical frequency of the stimuli was manipulated. The sample consisted of…
Jennings, M B; Shaw, L; Hodgins, H; Kuchar, D A; Bataghva, L Poost-Foroosh
For older workers with acquired hearing loss, this loss as well as the changing nature of work and the workforce, may lead to difficulties and disadvantages in obtaining and maintaining employment. Currently there are very few instruments that can assist workplaces, employers and workers to prepare for older workers with hearing loss or with the evaluation of auditory perception demands of work, especially those relevant to communication, and safety sensitive workplaces that require high levels of communication. This paper introduces key theoretical considerations that informed the development of a new framework, The Audiologic Ergonomic (AE) Framework to guide audiologists, work rehabilitation professionals and workers in developing tools to support the identification and evaluation of auditory perception demands in the workplace, the challenges to communication and the subsequent productivity and safety in the performance of work duties by older workers with hearing loss. The theoretical concepts underpinning this framework are discussed along with next steps in developing tools such as the Canadian Hearing Demands Tool (C-HearD Tool) in advancing approaches to evaluate auditory perception and communication demands in the workplace.
Dunne, B J; Jahn, R G
This article has four purposes: 1) to present for the first time in archival form all results of some 25 years of remote perception research at this laboratory; 2) to describe all of the analytical scoring methods developed over the course of this program to quantify the amount of anomalous information acquired in the experiments; 3) to display a remarkable anti-correlation between the objective specificity of those methods and the anomalous yield of the experiments; and 4) to discuss the phenomenological and pragmatic implications of this complementarity. The formal database comprises 653 experimental trials performed over several phases of investigation. The scoring methods involve various arrays of descriptor queries that can be addressed to both the physical targets and the percipients' descriptions thereof, the responses to which provide the basis for numerical evaluation and statistical assessment of the degree of anomalous information acquired. Twenty-four such recipes have been employed, with queries posed in binary, ternary, quaternary, and ten-level distributive formats. Thus treated, the database yields a composite z-score against chance of 5.418 (p = 3 × 10^-8, one-tailed). Numerous subsidiary analyses agree that these overall results are not significantly affected by any of the secondary protocol parameters tested, or by variations in descriptor effectiveness, possible participant response biases, target distance from the percipient, or time interval between perception effort and agent target visitation. However, over the course of the program there has been a striking diminution of the anomalous yield that appears to be associated with the participants' growing attention to, and dependence upon, the progressively more detailed descriptor formats and with the corresponding reduction in the content of the accompanying free-response transcripts. The possibility that increased emphasis on objective quantification of the phenomenon somehow may have
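The reported composite result (z = 5.418, one-tailed p ≈ 3 × 10^-8) can be reproduced from the standard normal survival function; this is a generic consistency check on the stated statistic, not the laboratory's scoring code.

```python
import math

def one_tailed_p(z):
    """One-tailed p-value for a z-score: P(Z > z) for standard normal Z,
    written with erfc so no SciPy is needed."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = one_tailed_p(5.418)   # approximately 3e-8, matching the abstract
```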
A growing consensus in social cognitive neuroscience holds that large portions of the primate visual brain are dedicated to the processing of social information, i.e., to those aspects of stimuli that are usually encountered in social interactions such as others’ facial expressions, actions and symbols. Yet, studies of social perception have mostly employed simple pictorial representations of conspecifics. These stimuli are social only in the restricted sense that they physically resemble objects with which the observer would typically interact. In an equally important sense, however, these stimuli might be regarded as ‘non-social’: the observer knows that they are viewing pictures and might therefore not attribute current mental states to the stimuli, or might do so in a qualitatively different way than in a real social interaction. Recent studies have demonstrated the importance of such higher-order conceptualisation of the stimulus for social perceptual processing. Here, we assess the similarity between the various types of stimuli used in the laboratory and object classes encountered in real social interactions. We distinguish two different levels at which experimental stimuli can match social stimuli as encountered in everyday social settings: (i) the extent to which a stimulus’ physical properties resemble those typically encountered in social interactions, and (ii) the higher-level conceptualisation of the stimulus as indicating another person’s mental states. We illustrate the significance of this distinction for social perception research and report new empirical evidence further highlighting the importance of mental state attribution for perceptual processing. Finally, we discuss the potential of this approach to inform studies of clinical conditions such as autism.
Munhall, K G; Jones, Jeffery A; Callan, Daniel E; Kuratate, Takaaki; Vatikiotis-Bateson, Eric
People naturally move their heads when they speak, and our study shows that this rhythmic head motion conveys linguistic information. Three-dimensional head and face motion and the acoustics of a talker producing Japanese sentences were recorded and analyzed. The head movement correlated strongly with the pitch (fundamental frequency) and amplitude of the talker's voice. In a perception study, Japanese subjects viewed realistic talking-head animations based on these movement recordings in a speech-in-noise task. The animations allowed the head motion to be manipulated without changing other characteristics of the visual or acoustic speech. Subjects correctly identified more syllables when natural head motion was present in the animation than when it was eliminated or distorted. These results suggest that nonverbal gestures such as head movements play a more direct role in the perception of speech than previously known.
Levitin, Daniel J; Menon, Vinod; Schmitt, J Eric; Eliez, Stephan; White, Christopher D; Glover, Gary H; Kadis, Jay; Korenberg, Julie R; Bellugi, Ursula; Reiss, Allan L
Williams syndrome (WS), a neurogenetic developmental disorder, is characterized by a rare fractionation of higher cortical functioning: selective preservation of certain complex faculties (language, music, face processing, and sociability) in contrast to marked and severe deficits in nearly every other cognitive domain (reasoning, spatial ability, motor coordination, arithmetic, problem solving). WS people are also known to suffer from hyperacusis and to experience heightened emotional reactions to music and certain classes of noise. We used functional magnetic resonance imaging to examine the neural basis of auditory processing of music and noise in WS patients and age-matched controls and found strikingly different patterns of neural organization between the groups. Those regions supporting music and noise processing in normal subjects were found not to be consistently activated in the WS participants (e.g., superior temporal and middle temporal gyri). Instead, the WS participants showed significantly reduced activation in the temporal lobes coupled with significantly greater activation in the right amygdala. In addition, WS participants (but not controls) showed a widely distributed network of activation in cortical and subcortical structures, including the brain stem, during music processing. Taken together with previous ERP and cytoarchitectonic studies, this first published report of WS using fMRI provides additional evidence of a different neurofunctional organization in WS people than in normal people, which may help to explain their atypical reactions to sound. These results constitute an important first step in drawing out the links between genes, brain, cognition, and behavior in Williams syndrome.
Sinha, Anisha; Rout, Nachiketa
Rhyming ability is among the earliest metaphonological skills to be acquired during the process of speech and language acquisition. Metalinguistic skills, particularly metaphonological skills, greatly influence language learning during the early school grades, and children with learning disorders are reportedly poor at these skills. To develop and validate a Bengali rhyming checklist and study the auditory perception of non-sense and familiar Bengali rhyming words in children with and without specific learning disability (SLD). 60 children, age range 8-11 years, participated in two groups; group-A included children with SLD and group-B, typically developing children (TDC). All participants were native Bengali speakers attending regular school, with hearing sensitivity less than 25 dBHL, no history of ear discharge, and middle socioeconomic background. A rhyming checklist was developed in Bengali, consisting of familiar (section-A) and non-sense (section-B) words. Test-retest reliability and validity measures were obtained. The items on the checklist were audio recorded and presented to the participants in a rhyming judgment task in a one-to-one setup. Scores were obtained and statistically analyzed using SPSS software (version 11.0). Children with SLD scored significantly lower on the rhyming judgment task than TDC (p < .05). Semantic content influences rhyming perception in children with SLD but has no significant effect on TDC. The developed rhyming checklist may be used as a screening tool for children at risk of SLD at primary school grades. Rhyming activities may be utilized by teachers and parents to promote language learning in young learners. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Gutschalk, Alexander; Dykstra, Andrew R
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Mondelli, Maria Fernanda Capoani Garcia
Introduction: Cognitive and neurophysiological mechanisms are necessary to process and decode acoustic stimuli. Auditory stimulation is influenced by higher-level cognitive factors such as memory, attention, and learning. The sensory deprivation caused by conductive hearing loss, which is frequent in the population with cleft lip and palate, can affect many cognitive functions, among them attention, besides harming school, linguistic, and interpersonal performance. Objective: To verify the perception of parents of children with cleft lip and palate about their children's auditory attention. Method: Retrospective study of infants with any type of cleft lip and palate, without any associated genetic syndrome, whose parents answered a questionnaire about auditory attention skills. Results: 44 children were male and 26 female; 35.71% of the answers were affirmative for hearing loss and 71.43% for otologic infections. Conclusion: Most of the interviewed parents pointed to at least one of the attention-related behaviors contained in the questionnaire, indicating that the presence of cleft lip and palate can be related to difficulties in auditory attention.
Weed, Ethan; Kratschmer, Alexandra Regina; Pedersen, Michael Nygaard
One of the challenges in collecting ERP data is the time-consuming process of fitting caps and prepping electrodes with gel. This can be particularly true when working with clinical populations, where efficiency in data collection is important. Recently gel-free wireless headsets designed for brain… and smaller cognitive components. To test the feasibility of these headsets for cognitive research, we compared performance of the Emotiv Epoc wireless headset (EM) with Brain Products ActiCAP (BP) active electrodes on two well-studied components: the auditory mismatch negativity (MMN) and the visual face… are available in both Emotiv and Brain Products layouts, so we used these in our analysis. Both MMN and N170 were measured as difference waves (MMN: deviants - standards, N170: faces - houses). Significance was measured with one-sample t-tests at each of the chosen electrodes. The MMN was visible in data…
… This fundamental process of auditory perception is called auditory scene analysis. Of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation…
Thomas, Neil; Hayward, Mark; Peters, Emmanuelle; van der Gaag, Mark; Bentall, Richard P; Jenner, Jack; Strauss, Clara; Sommer, Iris E; Johns, Louise C; Varese, Filippo; García-Montes, José Manuel; Waters, Flavie; Dodgson, Guy; McCarthy-Jones, Simon
This report from the International Consortium on Hallucinations Research considers the current status and future directions in research on psychological therapies targeting auditory hallucinations (hearing voices). Therapy approaches have evolved from behavioral and coping-focused interventions, through formulation-driven interventions using methods from cognitive therapy, to a number of contemporary developments. Recent developments include the application of acceptance- and mindfulness-based approaches, and consolidation of methods for working with connections between voices and views of self, others, relationships and personal history. In this article, we discuss the development of therapies for voices and review the empirical findings. This review shows that psychological therapies are broadly effective for people with positive symptoms, but that more research is required to understand the specific application of therapies to voices. Six key research directions are identified: (1) moving beyond the focus on overall efficacy to understand specific therapeutic processes targeting voices, (2) better targeting psychological processes associated with voices such as trauma, cognitive mechanisms, and personal recovery, (3) more focused measurement of the intended outcomes of therapy, (4) understanding individual differences among voice hearers, (5) extending beyond a focus on voices and schizophrenia into other populations and sensory modalities, and (6) shaping interventions for service implementation. © The Author 2014. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.
Li, Shu-Chen; Passow, Susanne; Nietfeld, Wilfried; Schröder, Julia; Bertram, Lars; Heekeren, Hauke R; Lindenberger, Ulman
Using a specific variant of the dichotic listening paradigm, we studied the influence of dopamine on attentional modulation of auditory perception by assessing effects of allelic variation of a single-nucleotide polymorphism (SNP) rs907094 in the DARPP-32 gene (dopamine and adenosine 3', 5'-monophosphate-regulated phosphoprotein, 32 kilodaltons; also known as PPP1R1B) on behavior and cortical evoked potentials. A frequent DARPP-32 haplotype that includes the A allele of this SNP is associated with higher mRNA expression of DARPP-32 protein isoforms, striatal dopamine receptor function, and frontal-striatal connectivity. As we hypothesized, behaviorally the A homozygotes were more flexible in selectively attending to auditory inputs than G carriers. Moreover, this genotype also affected auditory evoked cortical potentials that reflect early sensory and late attentional processes. Specifically, analyses of event-related potentials (ERPs) revealed that amplitudes of an early component of sensory selection (N1) and a late component (N450) reflecting attentional deployment for conflict resolution were larger in A homozygotes than in G carriers. Taken together, our data lend support for dopamine's role in modulating auditory attention both during the early sensory selection and late conflict resolution stages. Copyright © 2013 Elsevier Ltd. All rights reserved.
Burgmeier, Robert; Desai, Rajen U; Farner, Katherine C; Tiano, Benjamin; Lacey, Ryan; Volpe, Nicholas J; Mets, Marilyn B
Children with a history of amblyopia, even if resolved, exhibit impaired visual-auditory integration and perceive speech differently. To determine whether a history of amblyopia is associated with abnormal visual-auditory speech integration. Retrospective observational study at an academic pediatric ophthalmologic clinic with an average of 4 years of follow-up. Participants were at least 3 years of age and without any history of neurologic or hearing disorders. Of 39 children originally in our study, 6 refused to participate. The remaining 33 participants completed the study. Twenty-four participants (mean [SD] age, 7.0 [1.5] years) had a history of amblyopia in 1 eye, with a visual acuity of at least 20/20 in the nonamblyopic eye. Nine controls (mean [SD] age, 8.0 [3.4] years) were recruited from referrals for visually insignificant etiologies or through preschool-screening eye examinations; all had 20/20 visual acuity in both eyes. Participants were presented with a video demonstrating the McGurk effect (ie, a stimulus presenting an audio track playing the sound /pa/ and a separate video track of a person articulating /ka/). Normal visual-auditory integration produces the perception of hearing a fusion sound /ta/. Participants were asked to report which sound was perceived: /ka/, /pa/, or /ta/. Prevalence of perception of the fusion /ta/ sound. Prior to the study, children with amblyopia were hypothesized to perceive /ta/ less frequently. The McGurk effect was perceived by 11 of the 24 participants with amblyopia (45.8%) and all 9 controls (100%) (adjusted odds ratio, 22.3 [95% CI, 1.2-426.0]; P = .005). The McGurk effect was perceived by 100% of participants with amblyopia that was resolved by 5 years of age and by 100% of participants whose amblyopia developed at or after 5 years of age. However, only 18.8% of participants with amblyopia that was unresolved by 5 years of age (n = 16) perceived the McGurk effect (adjusted odds ratio, 27.0 [95% CI, 1.1-654.0]; P = .02).
Jepsen, Morten Løve
… It was shown that an accurate simulation of cochlear input-output functions, in addition to the audiogram, played a major role in accounting both for sensitivity and supra-threshold processing. Finally, the model was used as a front-end in a framework developed to predict consonant discrimination in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners. … and reduced speech perception performance in the listeners with cochlear hearing loss. Overall, this work suggests a possible explanation of the variability in consequences of cochlear hearing loss. The proposed model might be an interesting tool for, e.g., evaluation of hearing-aid signal processing.
Schmitz, Judith; Bartoli, Eleonora; Maffongelli, Laura; Fadiga, Luciano; Sebastian-Galles, Nuria; D'Ausilio, Alessandro
Listening to speech has been shown to activate motor regions, as measured by corticobulbar excitability. In this experiment, we explored if motor regions are also recruited during listening to non-native speech, for which we lack both sensory and motor experience. By administering Transcranial Magnetic Stimulation (TMS) over the left motor cortex we recorded corticobulbar excitability of the lip muscles when Italian participants listened to native-like and non-native German vowels. Results showed that lip corticobulbar excitability increased for a combination of lip use during articulation and non-nativeness of the vowels. Lip corticobulbar excitability was further related to measures obtained in perception and production tasks showing a negative relationship with nativeness ratings and a positive relationship with the uncertainty of lip movement during production of the vowels. These results suggest an active and compensatory role of the motor system during listening to perceptually/articulatory unfamiliar phonemes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Schall, Sonja; von Kriegstein, Katharina
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
… volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; applications of auditory display.
AlGhamdi, Khalid M; Moussa, Noura A.; AlEssa, Dana S.; AlOthimeen, Nermeen; Al-Saud, Adwa S.
We aimed to explore perceptions, attitudes and practices toward research among medical students. A self-administered questionnaire was distributed among senior medical students at the King Saud University, Riyadh, Saudi Arabia.
M. Stereoscopic vision induced by parallax images on HMD and its influence on visual functions. In: Shumaker R, ed. Virtual and Mixed Reality – New… vestibular, interpretation of the visual environment is considered the strongest. Depth perception is of particular importance, as many military aviation… initiated a flare maneuver when he judged his plane to be about 1 foot above the ground, when in reality he was about 15 feet; the plane stalled and…
Steele, Janeé M.; Rawls, Glinda J.
This study explored master's-level counseling students' (N = 804) perceptions of training in the Council for Accreditation of Counseling and Related Educational Programs (2009) Research and Program Evaluation standard, and their attitudes toward quantitative research. Training perceptions and quantitative research attitudes were low to moderate,…
Hosseini, Seyed Mahmood; Rezaei, Rohollah
This descriptive survey research was undertaken to design appropriate programs for the creation of a positive perception of nanotechnology among their intended beneficiaries. In order to do that, the factors affecting positive perceptions were defined. A stratified random sample of 278 science board members was selected out of 984 researchers who were working in 22 National Agricultural Research Institutions (NARIs). Data were collected by using a mailed questionnaire. The descriptive results revealed that more than half of the respondents had "low" or "very low" familiarity with nanotechnology. Regression analysis indicated that the perceptions of Iranian NARI Science Board Members towards nanotechnology were explained by three variables: the level of their familiarity with emerging applications of nanotechnology in agriculture, the level of their familiarity with nanotechnology and their work experiences. The findings of this study can contribute to a better understanding of the present situation of the development of nanotechnology and the planning of appropriate programs for creating a positive perception of nanotechnology.
This investigation elicited the perceptions of thirteen of the most successful research supervisors from one university, with a view to identifying their approaches to selecting research candidates. The supervisors were identified by the university's research office using the single criterion of having the largest number of ...
Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar
Given the relevance of possible hearing losses due to sound overloads and the scarcity of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Kottler, Sylvia B.
Procedures and sample activities are provided for both identifying and training children with auditory perception problems related to sound localization, sound discrimination, and sound sequencing. (KW)
Hamada, Megumi; Goya, Hideki
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…
Background: Tissue banking refers to a structured and organized resource collection of tissue. Recent advances in research technology and knowledge in the fields of human genetics/genomics highlight the need to maintain a steady supply of tissue for researchers. Objective: To assess the perception and willingness of ...
Miles, Shannon R.; Cromer, Lisa DeMarni; Narayan, Anupama
Human subject pools have been a valuable resource to universities conducting research with student participants. However, the costs and benefits to student participants must be carefully weighed by students, researchers, and institutional review board administrators in order to avoid coercion. Participant perceptions are pivotal in deciding…
Gundala, Raghava Rao; Singh, Mandeep; Baldwin, Andrew
This paper is an investigation into undergraduate students' perceptions on use of live projects as a teaching pedagogy in marketing research courses. Students in undergraduate marketing research courses from fall 2009 to spring 2013 completed an online questionnaire consisting of 17 items. The results suggested that student understanding of…
Agricultural researchers in the Province of Isfahan were surveyed in order to explore their perception about role of nanotechnology in food security. The methodology used in this study involved a combination of descriptive and quantitative research and included the use of correlation, regression and descriptive analysis as ...
Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas
The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception along with several feasible solutions for improving AMI implementation are presented. PMID:19762428
Triarhou, Lazaros C; Verina, Tatyana
In 1899 a landmark paper entitled "On the musical centers of the brain" was published in Pflügers Archiv, based on work carried out in the Anatomo-Physiological Laboratory of the Neuropsychiatric Clinic of Vladimir M. Bekhterev (1857-1927) in St. Petersburg, Imperial Russia. The author of that paper was Vladimir E. Larionov (1857-1929), a military doctor and devoted brain scientist, who pursued the problem of the localization of function in the canine and human auditory cortex. His data detailed the existence of tonotopy in the temporal lobe and further demonstrated centrifugal auditory pathways emanating from the auditory cortex and directed to the opposite hemisphere and lower brain centers. Larionov's discoveries have largely been considered findings of the Bekhterev school. Perhaps this is why resources on Larionov are limited, especially considering his military medical career and the fact that after 1917 he seems to have practiced only otorhinolaryngology, in Odessa. Larionov died in 1929, two years after Bekhterev's mysterious death in 1927. The present study highlights the pioneering contributions of Larionov to auditory neuroscience, trusting that the life and work of Vladimir Efimovich will finally, and deservedly, emerge from the shadow of his celebrated master, Vladimir Mikhailovich. Copyright © 2016 Elsevier B.V. All rights reserved.
Okura, M. [Dynax Co., Tokyo (Japan); Maeda, T.; Tachi, S. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering
For binocular visual space, the horizontal line seen as a straight line on the subjective frontoparallel plane does not always agree with the physically straight line, and its shape depends on distance from the observer. This phenomenon is known as Helmholtz's horopter. The same phenomenon may occur in binaural space, depending on the distance to an acoustic source. This paper formulates a scalar addition model that explains the auditory horopter by using two items of information: sound pressure and interaural time difference. Furthermore, this model was used to perform simulations on different learning domains, and the following results were obtained. It was verified that the distance dependence of the auditory horopter can be explained by the scalar addition model, and that differences in horopter shape among subjects may be explained by individual differences in the learning domains of spatial position recognition. In addition, the auditory horopter model was shown not to include distances as short as those in the learning domain. 21 refs., 6 figs.
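The core idea of a scalar addition model, forming one spatial estimate as a weighted sum of a sound-pressure cue and an interaural-time-difference cue, can be sketched as follows. This is a hypothetical illustration, not the paper's actual model: the weights, the inverse-square level mapping, and the Woodworth ITD approximation are all assumptions introduced here.

```python
import math

# Illustrative scalar addition model for auditory distance perception.
# Two cues (sound pressure level and ITD) are each mapped to a scalar
# and combined linearly; all constants below are assumed, not from the paper.

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, roughly an adult head

def level_cue(distance_m, source_db_at_1m=70.0):
    """Sound pressure level at the ear, assuming inverse-square decay (6 dB per doubling)."""
    return source_db_at_1m - 20.0 * math.log10(max(distance_m, 1e-3))

def itd_cue(azimuth_rad):
    """Woodworth spherical-head approximation of the interaural time difference (s)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def perceived_distance(distance_m, azimuth_rad, w_level=0.9, w_itd=0.1):
    """Scalar addition: weighted sum of the two cues, each mapped to a distance-like scalar."""
    # Invert the level cue back to a distance estimate (its 'native' scale).
    d_from_level = 10.0 ** ((70.0 - level_cue(distance_m)) / 20.0)
    # The ITD cue (in ms) only nudges the estimate here, standing in for
    # the learned spatial mapping discussed in the paper.
    itd_ms = itd_cue(azimuth_rad) * 1000.0
    return w_level * d_from_level + w_itd * itd_ms
```

With this toy weighting, a frontal source (zero ITD) at 2 m yields an estimate of 1.8 m, i.e. a systematic compression of distance, which is the kind of deviation between physical and subjective space that a horopter measurement captures.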
Ferguson, Melanie A.; Hall, Rebecca L.; Riley, Alison; Moore, David R.
Purpose: Parental reports of communication, listening, and behavior in children receiving a clinical diagnosis of specific language impairment (SLI) or auditory processing disorder (APD) were compared with direct tests of intelligence, memory, language, phonology, literacy, and speech intelligibility. The primary aim was to identify whether there…
Abdulraheem Olarongbe Mahmoud
The current research aimed at collating the views of medical specialists on disease priorities, class and outcomes of health research in Nigeria, and drawing appropriate policy implications. Structured questionnaires were distributed to 90 randomly selected consenting medical specialists practising in six Nigerian tertiary health institutions. Participants' background information, relative disease priority, research types and class, type and class of publication media, frequency of publications, challenges faced in publishing research, impact of their research on health practice or policy, and inventions made were probed. Fifty-one of the 90 questionnaires distributed were returned, giving a response rate of 63.3%. Sixty-four point six percent indicated that the highest priority should be given to non-communicable diseases, while still recognizing that consideration should be given to the others. Respondents were largely "always" involved in simple low-budget retrospective studies or cross-sectional and medical education studies (67.8%), and over a third (37.5%) had never been involved in clinical trials. They largely preferred to "always" publish in PubMed-indexed journals that are foreign-based (65.0%). They also indicated that their research work very rarely resulted in inventions (4%) or changes (4%) in clinical practice or health policy. Our respondents indicated that they were largely involved in simple low-budget research that rarely had significant impacts and outcomes. We recommend that adequate resources and research infrastructure, particularly funding, be made available to medical specialists in Nigeria. Both undergraduate and postgraduate medical education in Nigeria should emphasize research training in their curricula.
Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues… of training, which seemed to be specific to the stimuli containing resolved harmonics. Finally, a functional magnetic resonance imaging paradigm was used to examine the response of the auditory cortex to resolved and unresolved harmonics in musicians and non-musicians. The neural responses in musicians were enhanced relative to the non-musicians for both resolved and unresolved harmonics in the right auditory cortex, right frontal regions and inferior colliculus. However, the increase in neural activation in the right auditory cortex of musicians was predictive of the increased pitch…
Schwender, D; Kaiser, A; Klasing, S; Faber-Züllig, E; Golling, W; Pöppel, E; Peter, K
There is a high incidence of intraoperative awareness during cardiac surgery. Mid-latency auditory evoked potentials (MLAEP) reflect the primary cortical processing of auditory stimuli. In the present study, we investigated MLAEP and explicit and implicit memory for information presented during cardiac anaesthesia. PATIENTS AND METHODS. Institutional approval and informed consent were obtained in 30 patients scheduled for elective cardiac surgery. Anaesthesia was induced in group I (n = 10) with flunitrazepam/fentanyl (0.01 mg/kg) and maintained with flunitrazepam/fentanyl (1.2 mg/h). The patients in group II (n = 10) received etomidate (0.25 mg/kg) and fentanyl (0.005 mg/kg) for induction and isoflurane (0.6-1.2 vol%)/fentanyl (1.2 mg/h) for maintenance of general anaesthesia. Group III (n = 10) served as a control and patients were anaesthetized as in I or II. After sternotomy, an audiotape that included an implicit memory task was presented to the patients in groups I and II. The story of Robinson Crusoe was told, and it was suggested to the patients that they remember Robinson Crusoe when asked 3-5 days postoperatively what they associated with the word Friday. Auditory evoked potentials were recorded awake and during general anaesthesia, before and after the audiotape presentation, from the vertex (positive) and mastoids on both sides (negative). Auditory clicks were presented binaurally at 70 dBnHL at a rate of 9.3 Hz. Using the electrodiagnostic system Pathfinder I (Nicolet), 1000 successive stimulus responses were averaged over a 100 ms poststimulus interval and analyzed off-line. Latencies of the peaks V, Na, and Pa were measured. Peak V belongs to the brainstem-generated potentials and demonstrates that auditory stimuli were correctly transduced. Na and Pa are generated in the primary auditory cortex of the temporal lobe and are the electrophysiological correlate of the primary cortical processing of auditory stimuli. RESULTS. None of the patients had an explicit memory…
The way the elite perceive poverty and the poor in any society constitutes a very important dimension of poverty research. This is because normally there are several areas of interrelationship and interdependence between the poor and the elite, and these form part of the basis for social life in all societies. Perceptions of the ...
Penn, Felicity; Stephens, Danielle; Morgan, Jessica; Upton, Penney; Upton, Dominic
There is a push for universities to equip graduates with desirable employability skills and "hands-on" experience. This article explores the perceptions of students and staff experiences of a research assistantship scheme. Nine students from the University of Worcester were given the opportunity to work as a student vacation researcher…
Hosseini, Seyed Jamal F. (Sep 13, 2010). Role of Nanotechnology in Achieving Food Security. … the perception of researchers regarding the role of nanotechnology in achieving food security. Based … derived foods are new to consumers and it remains … attitudes toward organic agriculture and biotechnology. Center for …
The perceptions of research values and priorities in water resource management from the 3rd Orange River Basin Symposium. No 2 (2012).
The aim of this study is to compare the risk perception levels of Basketball athletes in Turkish League teams according to some variables. In this research the "general screening model," which is one of the descriptive screening methods, was used. While the population of the study consists of athletes actively engaged in the Turkish…
Most of the literature on the assignment traditionally called the "research paper" focusses on first-year students, and often centers on what they don't know or can't do. This article seeks to expand the conversation to one about the skills and knowledge displayed by senior students, and about their perceptions of the universe of…
Kandiah, Jay; Saiki, Diana
The purpose of this study is to investigate family and consumer sciences (FCS) professionals' perceptions of multidisciplinary collaboration in teaching, research, and service. Through a focus group and survey, participants identified projects, strengths, weaknesses, and suggestions related to collaboration. Topics and projects that incorporated…
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…
Fullenkamp, Amy N.; Haynes, Erin N.; Meloncon, Lisa; Succop, Paul; Nebert, Daniel W.
Appalachian Americans are an underserved population with increased risk for diseases having strong genetic and environmental precursors. The purpose of this study is to understand the thoughts and perceptions of genetic research of Appalachian Americans residing in eastern Ohio prior to conducting a genetic research study with this population. A genetic survey was developed and completed by 180 participants from Marietta, Cambridge and East Liverpool, Ohio. The majority of respondents were Ca...
Yathiraj, Asha; Maggu, Akshay Raj
The presence of auditory processing disorder in school-age children has been documented (Katz and Wilde, 1985; Chermak and Musiek, 1997; Jerger and Musiek, 2000; Muthuselvi and Yathiraj, 2009). In order to identify these children early, there is a need for a screening test that is not very time-consuming. The present study aimed to evaluate the independence of four subsections of the Screening Test for Auditory Processing (STAP) developed by Yathiraj and Maggu (2012). The test was designed to address auditory separation/closure, binaural integration, temporal resolution, and auditory memory in school-age children. The study also aimed to examine the number of children who are at risk for different auditory processes. A factor analysis research design was used in the current study. Four hundred school-age children consisting of 218 males and 182 females were randomly selected from 2400 children attending three schools. The children, aged 8 to 13 yr, were in grade three to eight class placements. DATA COLLECTION AND ANALYSES: The children were evaluated on the four subsections of the STAP (speech perception in noise, dichotic consonant-vowel [CV], gap detection, and auditory memory) in a quiet room within their school. The responses were analyzed using principal component analysis (PCA) and confirmatory factor analysis (CFA). In addition, the data were also analyzed to determine the number of children who were at risk for an auditory processing disorder (APD). Based on the PCA, three components with eigenvalues greater than 1 were extracted. The orthogonal rotation of the variables using the Varimax technique revealed that component 1 consisted of binaural integration, component 2 consisted of temporal resolution, and component 3 was shared by auditory separation/closure and auditory memory. These findings were confirmed using CFA, where the predicted model displayed a good fit with or without the inclusion of the auditory memory subsection. It was determined that 16
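The analysis pipeline described above (principal components extracted from subtest scores, components kept when eigenvalues exceed 1, then rotated with the Varimax technique) can be sketched in a few lines of numpy. The subtest scores below are synthetic stand-ins, not the STAP data, and the varimax routine is the textbook algorithm rather than the authors' software:

```python
import numpy as np

def varimax(Phi, gamma=1.0, q=20, tol=1e-6):
    """Classic varimax rotation of a factor-loading matrix (numpy only)."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(q):
        Lambda = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (Lambda**3 - (gamma / p) * Lambda
                     @ np.diag(np.diag(Lambda.T @ Lambda))))
        R = u @ vh
        d_old, d = d, np.sum(s)
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return Phi @ R

# Hypothetical subtest scores for 400 children: speech-in-noise and
# auditory memory share a latent factor; the other two are independent.
rng = np.random.default_rng(0)
n = 400
shared = rng.normal(size=n)
scores = np.column_stack([
    shared + 0.5 * rng.normal(size=n),   # speech perception in noise
    rng.normal(size=n),                  # dichotic CV
    rng.normal(size=n),                  # gap detection
    shared + 0.5 * rng.normal(size=n),   # auditory memory
])

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

keep = eigvals > 1.0                     # Kaiser criterion, as in the study
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)
```

Because varimax is an orthogonal rotation, it redistributes loadings across components without changing the reproduced correlations (the product of the loading matrix with its transpose is invariant).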
Kim, Duck O; Zahorik, Pavel; Carney, Laurel H; Bishop, Brian B; Kuwada, Shigeyuki
Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural, 1-octave, 4-kHz noise source sounds presented at distances of 35-200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. Copyright © 2015 the authors.
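The proposed distance cue (reverberation progressively flattening a sound's amplitude modulation) is easy to demonstrate numerically. The sketch below uses a toy exponentially decaying impulse response in place of a measured room response, so the specific numbers are illustrative only:

```python
import numpy as np
from scipy.signal import hilbert, fftconvolve

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)

# Sinusoidally amplitude-modulated noise (32-Hz AM, full depth).
carrier = rng.normal(size=t.size)
fm, m = 32.0, 1.0
dry = (1 + m * np.sin(2 * np.pi * fm * t)) * carrier

# Toy reverberant impulse response: direct path plus an exponentially
# decaying noise tail; a denser tail stands in for a more distant source.
n_tail = int(0.3 * fs)
tail = rng.normal(size=n_tail) * np.exp(-np.arange(n_tail) / (0.05 * fs))
ir = np.concatenate(([1.0], 0.5 * tail))
wet = fftconvolve(dry, ir)[:dry.size]

def mod_depth(x, fs, fm):
    """Modulation index: envelope's Fourier component at fm over its DC."""
    env = np.abs(hilbert(x))
    spec = np.fft.rfft(env)
    k = int(round(fm * x.size / fs))
    return 2 * np.abs(spec[k]) / np.abs(spec[0])

d_dry = mod_depth(dry, fs, fm)
d_wet = mod_depth(wet, fs, fm)
```

The reverberant tail acts as a low-pass filter on the envelope, so the measured modulation depth of `wet` falls well below that of `dry`, which is the monaural information the midbrain neurons are hypothesized to exploit.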
Woods, Andy T; Velasco, Carlos; Levitan, Carmel A; Wan, Xiaoang; Spence, Charles
This article provides an overview of the recent literature on the use of internet-based testing to address important questions in perception research. Our goal is to provide a starting point for the perception researcher who is keen on assessing this tool for their own research goals. Internet-based testing has several advantages over in-lab research, including the ability to reach a relatively broad set of participants and to quickly and inexpensively collect large amounts of empirical data, via services such as Amazon's Mechanical Turk or Prolific Academic. In many cases, the quality of online data appears to match that collected in lab research. Generally-speaking, online participants tend to be more representative of the population at large than those recruited for lab based research. There are, though, some important caveats, when it comes to collecting data online. It is obviously much more difficult to control the exact parameters of stimulus presentation (such as display characteristics) with online research. There are also some thorny ethical elements that need to be considered by experimenters. Strengths and weaknesses of the online approach, relative to others, are highlighted, and recommendations made for those researchers who might be thinking about conducting their own studies using this increasingly-popular approach to research in the psychological sciences.
D'Angelo, Anne-Lise D; Ray, Rebecca D; Jenewein, Caitlin G; Jones, Grace F; Pugh, Carla M
Surgery residents may take years away from clinical responsibilities for dedicated research time. As part of a longitudinal project, the study aim was to investigate residents' perceptions of clinical skill reduction during dedicated research time. Our hypothesis was that residents would perceive a greater potential reduction in skill during research time for procedures they were less confident in performing. Surgical residents engaged in dedicated research training at multiple training programs participated in four simulated procedures: urinary catheterization, subclavian central line, bowel anastomosis, and laparoscopic ventral hernia (LVH) repair. Using preprocedure and postprocedure surveys, participants rated procedures for confidence and difficulty. Residents also indicated the perceived level of skills reduction for the four procedures as a result of time in the laboratory. Thirty-eight residents (55% female) completed the four clinical simulators. Participants had between 0-36 mo in a laboratory (M = 9.29 mo, standard deviation = 9.38). Preprocedure surveys noted lower confidence and higher perceived difficulty for performing the LVH repair, followed by bowel anastomosis, central line insertion, and urinary catheterization (P …); the lowest perceived skill reduction was reported for urinary catheterization (P …). … perception, and may provide a mechanism for maintaining skills and keeping confidence grounded in experience. Copyright © 2015 Elsevier Inc. All rights reserved.
Badcock, Nicholas A; Preece, Kathryn A; de Wit, Bianca; Glenn, Katharine; Fieder, Nora; Thie, Johnson; McArthur, Genevieve
Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under "passive" and "active" listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of 'high' (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children's late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.
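Two operations underlie this comparison: averaging stimulus-locked EEG epochs to obtain an ERP, and quantifying agreement between the two systems' waveforms with an intraclass correlation. Below is a minimal numpy sketch using simulated recordings and the Shrout-Fleiss ICC(2,1) formula (the abstract does not state which ICC variant was used, so that choice is an assumption):

```python
import numpy as np

def epoch_average(eeg, events, fs, win=0.1):
    """Average fixed-length post-stimulus windows (the basic ERP operation)."""
    n = int(win * fs)
    return np.mean([eeg[e:e + n] for e in events], axis=0)

def icc_2_1(y):
    """Two-way random, absolute-agreement, single-measure ICC(2,1)."""
    n, k = y.shape
    grand = y.mean()
    ms_r = k * np.sum((y.mean(1) - grand) ** 2) / (n - 1)   # rows (time points)
    ms_c = n * np.sum((y.mean(0) - grand) ** 2) / (k - 1)   # columns (systems)
    ss_e = np.sum((y - grand) ** 2) - ms_r * (n - 1) - ms_c * (k - 1)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical demo: two systems record the same 100-ms ERP plus
# independent noise, time-locked to the same events.
fs = 1000
rng = np.random.default_rng(2)
erp = np.sin(2 * np.pi * 10 * np.arange(int(0.1 * fs)) / fs)  # toy deflection
events = np.arange(100, 60000, 600)                            # ~100 trials

def record(noise_sd):
    eeg = np.zeros(61000)
    for e in events:
        eeg[e:e + erp.size] += erp
    return eeg + noise_sd * rng.normal(size=eeg.size)

avg_a = epoch_average(record(1.0), events, fs)
avg_b = epoch_average(record(1.0), events, fs)
agreement = icc_2_1(np.column_stack([avg_a, avg_b]))
```

With ~100 trials averaged, the trial-to-trial noise shrinks by a factor of ten, so the two systems' mean waveforms agree closely, mirroring the high ICCs reported above.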
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD or how the disorder should be assessed and managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and the localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity was significantly negatively correlated with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia
In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. Only when two or more stimuli are presented simultaneously to the left and right, contralesional extinction has been observed after unilateral lesions of the auditory cortex. Because auditory extinction is also considered a sign of neglect, clinical separation of auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory cortex lesions and neglect. Similar results were obtained for words lateralized by inter-aural time differences. Consistent extinction after auditory cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory cortex lesions. Lateralization of auditory-evoked magnetic fields in auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.
To differentiate between stop-consonants, the auditory system has to detect subtle place of articulation (PoA) and voice onset time (VOT) differences between stop-consonants. How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7T MRI. Subjects attentively listened to consonant-vowel syllables with an alveolar or bilabial stop-consonant and either a short or long voice onset time. The results showed an overall bilateral activation pattern in the posterior temporal lobe during processing of the consonant-vowel syllables. This was, however, modulated most strongly by place of articulation, such that syllables with an alveolar stop-consonant showed stronger left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale onto the right auditory cortex during the processing of alveolar consonant-vowel syllables. Furthermore, the connectivity results indicated a directed information flow from the right to the left auditory cortex, and further to the left planum temporale, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right auditory cortex, with the left planum temporale as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the consonant-vowel syllables.
Kaya, Emine Merve; Elhilali, Mounya
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information-a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Paul Wallace Anderson
Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made at distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.
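Virtual sound sources of the kind used in this study are produced by convolving a dry source signal with a two-channel BRIR measured at each distance. The sketch below substitutes a crude synthetic impulse response (direct path plus sparse decaying reflections, with a small inter-channel delay) for the concert-hall measurements:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(3)
dry = rng.normal(size=fs // 2)   # 0.5 s of noise as the dry source signal

def toy_brir(delay_samples, n=4096):
    """Stand-in for one ear's measured BRIR: a unit direct path followed
    by 30 sparse, exponentially decaying reflections."""
    h = np.zeros(n)
    h[delay_samples] = 1.0
    taps = rng.integers(delay_samples + 50, n, size=30)
    h[taps] += 0.3 * np.exp(-taps / 2000) * rng.normal(size=30)
    return h

# Left/right responses differ by a 3-sample inter-aural time difference.
brir = np.stack([toy_brir(40), toy_brir(43)])

# Convolve the dry signal with each ear's response to place the source.
binaural = np.stack([fftconvolve(dry, h)[:dry.size] for h in brir])
```

In the actual experiment each distance would have its own measured BRIR pair, and the resulting two-channel signal would be delivered over equalized headphones.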
While the discussion on the integrity of data obtained from Web-delivered experiments is mainly about issues of method and control (Mehler, 1999; McGraw et al., 2000; Auditory, 2007), this comment stresses the potential that Web-based experiments might have for studies in music perception. It is argued that, due to some important advances in technology, Web-based experiments have become a reliable source for empirical research. Next to becoming a serious alternative to a certain class of lab-based experiments, Web-based experiments can potentially reach a much larger, more varied and intrinsically motivated participant pool. Nevertheless, an important challenge for Web-based experiments is to control for attention and to make sure that participants act as instructed; interestingly, this is not essentially different from experiments that are performed in the laboratory. Some practical solutions to this challenge are proposed.
Auditory scene analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an object-related negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults and 16 age- and verbal-IQ-matched typically developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
Papadopoulos, Judith; Domahs, Frank; Kauschke, Christina
Although it has been established that human beings process concrete and abstract words differently, it is still a matter of debate what factors contribute to this difference. Since concrete concepts are closely tied to sensory perception, perceptual experience seems to play an important role in their processing. The present study investigated the…
Oldoni, Damiano; De Coensel, Bert; Boes, Michiel; Rademaker, Michaël; De Baets, Bernard; Van Renterghem, Timothy; Botteldooren, Dick
Urban soundscape design involves creating outdoor spaces that are pleasing to the ear. One way to achieve this goal is to add or accentuate sounds that are considered to be desired by most users of the space, such that the desired sounds mask undesired sounds, or at least distract attention away from undesired sounds. In view of removing the need for a listening panel to assess the effectiveness of such soundscape measures, the interest for new models and techniques is growing. In this paper, a model of auditory attention to environmental sound is presented, which balances computational complexity and biological plausibility. Once the model is trained for a particular location, it classifies the sounds that are present in the soundscape and simulates how a typical listener would switch attention over time between different sounds. The model provides an acoustic summary, giving the soundscape designer a quick overview of the typical sounds at a particular location, and allows assessment of the perceptual effect of introducing additional sounds.
Hu, Min; Liu, GuoZhong
People with neuromuscular disorders have difficulty communicating with the outside world. For a patient with a disorder of consciousness (DOC), distinguishing the vegetative state (VS) from the minimally conscious state (MCS) is very important to the clinician and the patient's family. If a patient is diagnosed with VS, the hope of recovery is greatly reduced, which may lead the family to abandon treatment. Brain-computer interfaces (BCIs) aim to help such people by analyzing patients' electroencephalogram (EEG). This paper focuses on identifying the brain regions activated when a subject responds "yes" or "no" to an auditory question. When the brain concentrates, the phase in the related area becomes ordered rather than disorderly, so we analyzed the EEG in terms of phase. Seven healthy subjects volunteered to participate in the experiment, and a total of 84 repeated stimulation trials were run. First, the signal was decomposed into frequency bands using a wavelet method. Second, the phase of the EEG was extracted via the Hilbert transform. Finally, we obtained the approximate entropy and information entropy of each EEG frequency band. The results show that central areas of the brain are activated when people say "yes", whereas central and temporal areas are activated when people say "no". This conclusion corresponds to magnetic resonance imaging findings. This study provides a theoretical basis and an algorithm-design basis for designing BCI equipment for people with neuromuscular disorders.
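The band-limiting, phase-extraction, and entropy steps can be sketched as follows. Two substitutions are flagged loudly: a Butterworth band-pass stands in for the paper's wavelet decomposition, and synthetic signals stand in for patient EEG; the approximate-entropy routine is the standard Pincus formulation, not the authors' code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    """Band-limit a signal (stand-in for wavelet subband extraction)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def approx_entropy(x, m=2, r=None):
    """Approximate entropy: low for regular series, high for irregular ones."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def phi(m):
        n = x.size - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of length-m templates
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return np.mean(np.log(np.mean(d <= r, axis=1)))
    return phi(m) - phi(m + 1)

fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(4)
noise = rng.normal(size=t.size)       # irregular "resting" signal
tone = np.sin(2 * np.pi * 10 * t)     # highly regular signal

# Instantaneous alpha-band (8-12 Hz) phase, as one might extract per channel.
phase = np.angle(hilbert(bandpass(noise, fs, 8, 12)))

apen_noise = approx_entropy(noise[:400])
apen_tone = approx_entropy(tone[:400])
```

The regular signal yields a much lower approximate entropy than the noise, which is the ordered-versus-disorderly contrast the study uses to index concentration.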
Fasanya, Bankole K
This study investigated the effects of multiple cognitive tasks on human performance. Twenty-four students at North Carolina A&T State University participated in the study. The primary task was auditory signal change perception and the secondary task was a computational task. Results showed that participants' performance on a single task was statistically significantly different from their performance on combined tasks: (a) algebra problems (algebra problems primary and auditory perception secondary); (b) auditory perception tasks (auditory perception primary and algebra problems secondary); and (c) mean false-alarm score in auditory perception (auditory detection primary and algebra problems secondary). Using signal detection theory (SDT), participants' sensitivity was calculated as -0.54 for combined tasks with algebra problems as the primary task and -0.53 with auditory perception as the primary task. During auditory perception tasks alone, sensitivity was found to be 2.51. Performance was 83% on the single task compared to 17% on combined tasks.
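Sensitivity in signal detection theory is computed from hit and false-alarm rates as d' = z(HR) − z(FAR). A minimal sketch with hypothetical counts follows; the log-linear correction that keeps rates away from 0 and 1 is a common convention, not necessarily the one used in this study:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a yes/no detection table."""
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts: detection alone vs. detection under a secondary task.
single = d_prime(hits=90, misses=10, false_alarms=5, correct_rejections=95)
dual = d_prime(hits=40, misses=60, false_alarms=35, correct_rejections=65)
```

A d' near zero (as in the combined-task conditions above) means hits barely exceed false alarms, i.e., the listener can no longer separate signal changes from noise.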
Alvarez, Maria do Carmo Avamilano; França, Ivan; Cuenca, Angela Maria Belloni; Bastos, Francisco I; Ueno, Helene Mariko; Barros, Cláudia Renata; Guimarães, Maria Cristina Soares
Information literacy has evolved with changes in lifelong learning. Can Brazilian health researchers search for and use updated scientific information? To describe researchers' information literacy based on their perceptions of their abilities to search for and use scientific information and on their interactions with libraries. Semi-structured interviews and focus group conducted with six Brazilian HIV/AIDS researchers. Analyses comprised the assessment of researchers as disseminators, their interactions with librarians, their use of information and communication technology and language. Interviewees believed they were partially qualified to use databases. They used words and phrases that indicated their knowledge of technology and terminology. They acted as disseminators for students during information searches. Researchers' abilities to interact with librarians are key skills, especially in a renewed context where libraries have, to a large extent, changed from physical spaces to digital environments. Great amounts of information have been made available, and researchers' participation in courses does not automatically translate into adequate information literacy. Librarians must help research groups, and as such, librarians' information literacy-related responsibilities in Brazil should be redefined and expanded. Students must develop the ability to learn quickly, and librarians should help them in their efforts. Librarians and researchers can act as gatekeepers for research groups and as information coaches to improve others' search abilities. © 2013 Health Libraries Group of CILIP and John Wiley & Sons Ltd.
This study investigated eye-movement patterns during emotion perception in children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 were asked to watch videos with a facial expression followed by an oral statement; these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially in the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye-contact pattern with others and difficulty matching visual and voice cues in emotion perception. The negative causes and effects of these gaze patterns should be addressed through earlier rehabilitation for hearing-impaired children with assistive devices.
Rutkowska, Joanna; Łobaczuk-Sitnik, Anna; Kosztyła-Hojna, Bożena
Auditory processing disorders are an increasingly common hearing pathology. Auditory Processing Disorders (APD) are defined as difficulty in using auditory information to communicate and learn in the presence of normal peripheral hearing. APD may manifest as problems with understanding speech in noise and impaired perception of distorted speech. APD may accompany articulation disorders, language problems, and difficulties in reading and writing. The diagnosis of auditory processing disorders presents many difficulties, primarily due to the lack of common testing procedures and of precise criteria for classifying listeners as normal or pathological. The Brain-Boy Universal Professional (BUP) is one of the available diagnostic tools; it enables assessment of the higher auditory functions. The aim of the study was a preliminary assessment of hearing difficulties that may suggest the occurrence of auditory processing disorders in children. A questionnaire of hearing difficulties and the BUP were used. The study included 20 participants, second-grade students of an elementary school. The examination of the basic central auditory functions was carried out with the BUP. The parents and teacher completed the questionnaire to evaluate the children's hearing problems. The studies carried out indicate that 40% of the schoolchildren have hearing difficulties. The high percentage of deficits in auditory functions was confirmed by both the results from the medical device and the questionnaire for the teacher. On the basis of the studies conducted, it may be established that the Warnke Method can serve as a preliminary assessment of hearing difficulties that may suggest the occurrence of auditory processing disorders in children.
Moradi, Shahram; Lidestam, Björn; Hällgren, Mathias; Rönnberg, Jerker
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final word in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context. © The Author(s) 2014.
Sullivan, Sarah; Aalborg, Annette; Basagoitia, Armando; Cortes, Jacqueline; Lanza, Oscar; Schwind, Jessica S
In Bolivia, there is increasing interest in incorporating research ethics into study procedures, but there has been inconsistent application of research ethics practices. Minimal data exist regarding the experiences of researchers concerning the ethical conduct of research. A cross-sectional survey was administered to Bolivian health leaders with research experience (n = 82) to document their knowledge, perceptions, and experiences of research ethics committees and infrastructure support for research ethics. Results showed that 16% of respondents reported not using ethical guidelines to conduct their research and 66% indicated their institutions did not consistently require ethics approval for research. Barriers and facilitators to incorporating research ethics into practice were outlined. These findings will help inform a comprehensive rights-based research ethics education program in Bolivia. © The Author(s) 2015.
Research on auditory verbal hallucinations (AVHs) indicates that AVH schizophrenia patients show greater abnormalities on tasks requiring recognition of affective prosody (AP) than non-AVH patients. Detecting AP requires accurate perception of manipulations in pitch, amplitude and duration. Schizophrenia patients with AVHs also experience difficulty detecting these acoustic manipulations, with a number of theorists speculating that difficulties in pitch, amplitude and duration discrimination underlie AP abnormalities. This study examined whether both AP and these aspects of auditory processing are also impaired in first-degree relatives of persons with AVHs. It also examined whether pitch, amplitude and duration discrimination were related to AP, and to hallucination proneness. Unaffected relatives of AVH schizophrenia patients (N=19) and matched healthy controls (N=33) were compared using tone discrimination tasks, an AP task, and clinical measures. Relatives were slower at identifying emotions on the AP task (p = .002), with secondary analysis showing this was especially so for happy (p = .014) and neutral (p = .001) sentences. There was a significant interaction effect for pitch between tone deviation level and group (p = .019), and relatives performed worse than controls on amplitude discrimination and duration discrimination. AP performance for happy and neutral sentences was significantly correlated with amplitude perception. Lastly, AVH proneness in the entire sample was significantly correlated with pitch discrimination (r = .44) and pitch perception was shown to predict AVH proneness in the sample (p = .005). These results suggest basic impairments in auditory processing are present in relatives of AVH patients; they potentially underlie processing speed in AP tasks, and predict AVH proneness. This indicates auditory processing deficits may be a core feature of AVHs in schizophrenia, and are worthy of further study as a potential endophenotype for
Getzmann, Stephan; Falkenstein, Michael; Wascher, Edmund
The ability to understand speech under adverse listening conditions deteriorates with age. In addition to genuine hearing deficits, age-related declines in attentional and inhibitory control are assumed to contribute to these difficulties. Here, the impact of task-irrelevant distractors on speech perception was studied in 28 younger and 24 older participants in a simulated "cocktail party" scenario. In a two-alternative forced-choice word discrimination task, the participants responded to a rapid succession of short speech stimuli ("on" and "off") that was presented at a frequent standard location or at a rare deviant location in silence or with a concurrent distractor speaker. Behavioral responses and event-related potentials (mismatch negativity MMN, P3a, and reorienting negativity RON) were analyzed to study the interplay of distraction, orientation, and refocusing in the presence of changes in target location. While shifts in target location decreased performance of both age groups, this effect was more pronounced in the older group. Especially in the distractor condition, the electrophysiological measures indicated a delayed attention capture and a delayed refocusing of attention toward the task-relevant stimulus feature in the older group, relative to the younger group. In sum, the results suggest that a delay in the attention switching mechanism contributes to the age-related difficulties in speech perception in dynamic listening situations with multiple speakers. Copyright © 2014 Elsevier B.V. All rights reserved.
Johanna C. Badcock
The National Institute of Mental Health initiative called the Research Domain Criteria (RDoC) project aims to provide a new approach to understanding mental illness grounded in the fundamental domains of human behaviour and psychological functioning. To this end the RDoC framework encourages researchers and clinicians to think outside the [diagnostic] box, by studying symptoms, behaviours or biomarkers that cut across traditional mental illness categories. In this article we examine and discuss how the RDoC framework can improve our understanding of psychopathology by zeroing in on hallucinations, now widely recognized as a symptom that occurs in a range of clinical and non-clinical groups. We focus on a single domain of functioning, namely cognitive [inhibitory] control, and assimilate key findings structured around the basic RDoC units of analysis, which span the range from observable behaviour to molecular genetics. Our synthesis and critique of the literature provides a deeper understanding of the mechanisms involved in the emergence of auditory hallucinations, linked to the individual dynamics of inhibitory development before and after puberty; favours separate developmental trajectories for clinical and non-clinical hallucinations; yields new insights into co-occurring emotional and behavioural problems; and suggests some novel avenues for treatment.
Fullenkamp, Amy N; Haynes, Erin N; Meloncon, Lisa; Succop, Paul; Nebert, Daniel W
Appalachian Americans are an underserved population with increased risk for diseases having strong genetic and environmental precursors. The purpose of this study is to understand the thoughts and perceptions of genetic research of Appalachian Americans residing in eastern Ohio prior to conducting a genetic research study with this population. A genetic survey was developed and completed by 180 participants from Marietta, Cambridge and East Liverpool, Ohio. The majority of respondents were Caucasian women with a median age of 37.5 years. We found that participants had a high interest in participating (80 %), in allowing their children to participate (78 %), and in learning more about genetic research studies (90 %); moreover, they thought that genetic research studies are useful to society (93 %). When asked what information would be useful when deciding to participate in a genetic research study, the following were most important: how environmental pollutants affect their genes and their child's genes (84 %), types of biological specimens needed for genetic research studies (75 %) and who will have access to their samples (75 %). Of the 20 % who responded that they were "unsure" about participating in a genetic research study, the leading reason was "I don't have enough information about genetic research to make a decision" (56 %). We also asked respondents to choose their preferred method for receiving genetic information, and the principal response was to read a brochure (40 %). Findings from this study will improve community education materials and dissemination methods that are tailored for underserved populations engaged in genetic research.
Imafuku, Rintaro; Saiki, Takuya; Kawakami, Chihiro; Suzuki, Yasuyuki
This study aimed to examine how students' perceptions of research and learning change through participation in undergraduate research and to identify the factors that affect the process of their engagement in research projects. This qualitative study has drawn on phenomenography as research methodology to explore third-year medical students' experiences of undergraduate research from participants' perspectives (n=14). Data included semi-structured individual interviews conducted as pre and post reflections. Thematic analysis of pre-course interviews combined with researcher-participant observations informed design of end-of-course interview questions. Phenomenographic data analysis demonstrated qualitative changes in students' perceptions of research. At the beginning of the course, the majority of students expressed a relatively narrow definition of research, focusing on the content and outcomes of scientific research. End-of-course reflections indicated increased attention to research processes including researcher autonomy, collaboration and knowledge construction processes. Furthermore, acknowledgement of the linkage between research and learning processes indicated an epistemological change leading them to take a deep approach to learning in undergraduate research. Themes included: an inquiring mind, synthesis of knowledge, active participation, collaborative and reflective learning. However, they also encountered some difficulties in undertaking group research projects. These were attributed to their prior learning experiences, differences in the value placed on interpersonal communication, understanding of the research process, and social relationships with others. This study provided insights into the potential for undergraduate research in medical education. Medical students' awareness of the linkage between research and learning may be one of the most important outcomes in the undergraduate research process.
Bremner, J. Gavin
This paper reviews progress over the past 20 years in four areas of research on infant perception and cognition. Work on perception of dynamic events has identified perceptual constraints on perception of object unity and object trajectory continuity that have led to a perceptual account of early development that supplements Nativist accounts.…
Mishra, Srikanta K; Panda, Manasa R
Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.
Although the study by Bailes & Dean (2007) addresses an underresearched area of auditory and musical perception, it raises questions concerning stimuli, methodology, and the study's relation to previous research, which are outlined in this commentary.
ten Holt, Gineke A.; Arendsen, Jeroen; de Ridder, Huib; Koenderink-van Doorn, Andrea J.; Reinders, Marcel J. T.; Hendriks, Emile A.
Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of data-driven ASLR methods has shortcomings which may not be solvable without the use of knowledge on human sign language processing. Handling variation in the precise execution of signs is an example of such shortcomings: data-driven methods (which include almost all current methods) have difficulty recognizing signs that deviate too much from the examples that were used to train the method. Insight into human sign processing is needed to solve these problems. Perceptual research on sign language can provide such insights. This paper discusses knowledge derived from a set of sign perception experiments, and the application of such knowledge in ASLR. Among the findings are the facts that not all phases and elements of a sign are equally informative, that defining the 'correct' form for a sign is not trivial, and that statistical ASLR methods do not necessarily arrive at sign representations that resemble those of human beings. Apparently, current ASLR methods are quite different from human observers: their method of learning gives them different sign definitions, they regard each moment and element of a sign as equally important and they employ a single definition of 'correct' for all circumstances. If the objective is for an ASLR method to handle natural sign language, then the insights from sign perception research must be integrated into ASLR.
It is well established that hand gestures affect comprehension and learning of semantic aspects of a foreign language (FL). However, much less is known about the role of hand gestures in lower-level language processes, such as perception of phonemes. To address this gap, we explored the role that metaphoric gestures play in perceiving FL speech sounds that varied on two dimensions: length and intonation. English speaking adults listened to Japanese length contrasts and sentence-final intonational distinctions in the context of congruent, incongruent and no gestures. For intonational contrasts, identification was more accurate for congruent gestures and less accurate for incongruent gestures relative to the baseline no gesture condition. However, for the length contrasts, there was no such clear and consistent pattern, and in fact, congruent gestures made speech processing more effortful. We conclude that metaphoric gestures help with some, but not all, novel speech sounds in a FL, suggesting that gesture and speech are phonemically integrated to differing extents depending on the nature of the gesture and/or speech sound.
Recruitment into clinical research studies is a major challenge. This study was carried out to explore the perceptions and attitudes towards clinical research participation among the general public in Qatar. A population based questionnaire study was carried out at public events held in Qatar. Residents of Qatar, 18 years or above in age, were surveyed anonymously following verbal consent. Descriptive and multivariate analyses were conducted. We administered 2517 questionnaires to examine clinical research participation, of which 2379 complete forms were analyzed. Those who had previously been approached to participate in research completed a more detailed assessment. Data showed that only 5.7% of participants (n = 134) had previously been approached to participate in a clinical research study. Of these, 63.4% (n = 85) had agreed to participate while 36.6% (n = 49) had declined. The main reasons for declining participation included: time constraint (47.8%, n = 11), ‘fear’ (13.0%, n = 3), lack of awareness about clinical research (8.7%, n = 2) and lack of interest (8.7%, n = 2). ‘To help others’ (31.8%, n = 27) and ‘thought it might improve my access to health care’ (24.7%, n = 21) were the prime motivators for participation. There was a general agreement among participants that their previous research experience was associated with positive outcomes for self and others, that the research conduct was ethical, and that opportunities for participation will be welcomed in future. More than ten years of stay within Qatar was a statistically significant determinant of willingness to participate, adjusted odds ratio 5.82 (95% CI 1.93–17.55), p = 0.002. Clinical research participation in Qatar needs improvement. Time constraints, lack of trust in and poor awareness about clinical research are main barriers to participation. Altruism and improved health access are reported as prime motivators. Deeper insight into the factors
van Vugt, Floris Tijmen; Jabusch, Hans-Christian; Altenmüller, Eckart
We investigated how musical phrasing and motor sequencing interact to yield timing patterns in conservatory students' playing of piano scales. We propose a novel analysis method that compared the measured note onsets to an objectively regular scale fitted to the data. Subsequently, we segment the timing variability into (i) systematic deviations from objective evenness that are perhaps residuals of expressive timing or of perceptual biases and (ii) non-systematic deviations that can be interpreted as motor execution errors, perhaps due to noise in the nervous system. The former, systematic deviations reveal that the two-octave scales are played as a single musical phrase. The latter, trial-to-trial variabilities reveal that pianists' timing was less consistent at the boundaries between the octaves, providing evidence that the octave is represented as a single motor sequence. These effects cannot be explained by low-level properties of the motor task such as the thumb passage and also did not show up in simulated scales with temporal jitter. Intriguingly, this instability in motor production around the octave boundary is mirrored by an impairment in the detection of timing deviations at those positions, suggesting that chunks overlap between perception and action. We conclude that the octave boundary instability in the scale playing motor program provides behavioral evidence that our brain chunks musical sequences into octave units that do not coincide with musical phrases. Our results indicate that trial-to-trial variability is a novel and meaningful indicator of this chunking. The procedure can readily be extended to a variety of tasks to help understand how movements are divided into units and what processing occurs at their boundaries.
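The timing decomposition described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' published analysis code: it assumes the "objectively regular scale" is a straight-line (isochronous) fit of onset time against note index, and all function and variable names are ours.

```python
import numpy as np

def decompose_scale_timing(onsets_by_trial):
    """Split note-onset timing into (i) systematic deviations from
    evenness and (ii) trial-to-trial variability.

    onsets_by_trial: 2-D array, shape (n_trials, n_notes), of measured
    note-onset times (in seconds) for repeated playings of one scale.
    """
    onsets = np.asarray(onsets_by_trial, dtype=float)
    n_trials, n_notes = onsets.shape
    idx = np.arange(n_notes)

    # Fit an objectively regular (isochronous) scale to each trial:
    # a straight line of onset time against note index.
    residuals = np.empty_like(onsets)
    for t in range(n_trials):
        slope, intercept = np.polyfit(idx, onsets[t], 1)
        residuals[t] = onsets[t] - (slope * idx + intercept)

    # (i) systematic deviation from evenness: mean residual per position
    systematic = residuals.mean(axis=0)
    # (ii) trial-to-trial variability: SD of residuals per position
    variability = residuals.std(axis=0, ddof=1)
    return systematic, variability
```

Under this reading, elevated `variability` at the positions spanning the octave boundary would correspond to the boundary instability the abstract reports, while `systematic` captures position-wise deviations that recur across trials.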
Norrix, Linda W.; Plante, Elena; Vance, Rebecca
Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…
Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal
Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
Whitehouse, Martha M.
The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
Berger, Christopher C; Ehrsson, H Henrik
Multisensory interactions are the norm in perception, and an abundance of research on the interaction and integration of the senses has demonstrated the importance of combining sensory information from different modalities on our perception of the external world. However, although research on mental imagery has revealed a great deal of functional and neuroanatomical overlap between imagery and perception, this line of research has primarily focused on similarities within a particular modality and has yet to address whether imagery is capable of leading to multisensory integration. Here, we devised novel versions of classic multisensory paradigms to systematically examine whether imagery is capable of integrating with perceptual stimuli to induce multisensory illusions. We found that imagining an auditory stimulus at the moment two moving objects met promoted an illusory bounce percept, as in the classic cross-bounce illusion; an imagined visual stimulus led to the translocation of sound toward the imagined stimulus, as in the classic ventriloquist illusion; and auditory imagery of speech stimuli led to a promotion of an illusory speech percept in a modified version of the McGurk illusion. Our findings provide support for perceptually based theories of imagery and suggest that neuronal signals produced by imagined stimuli can integrate with signals generated by real stimuli of a different sensory modality to create robust multisensory percepts. These findings advance our understanding of the relationship between imagery and perception and provide new opportunities for investigating how the brain distinguishes between endogenous and exogenous sensory events. Copyright © 2013 Elsevier Ltd. All rights reserved.
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, representing a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
Munung, Nchangwi Syntia; Mayosi, Bongani M; de Vries, Jantina
Africa is currently host to a number of international genomics research and biobanking consortia, each with a mandate to advance genomics research and biobanking in Africa. Whilst most of these consortia promise to transform the way international health research is done in Africa, few have articulated exactly how they propose to go about this. In this paper, we report on a qualitative interviewing study involving 17 genomics researchers in Africa. We describe their perceptions and expectations of international genomics research and biobanking initiatives in Africa. All interviewees were of the view that externally funded genomics research and biobanking initiatives in Africa have played a critical role in building capacity for genomics research and biobanking in Africa and in providing an opportunity for researchers in Africa to collaborate and network with other researchers. Whilst the opportunity to collaborate was seen as a benefit, some interviewees stressed the importance of recognizing that these collaborations carry mutual benefits for all partners, including their collaborators in HICs. They also voiced two major concerns about being part of these collaborative initiatives: the possibility of exploitation of African researchers and the non-sustainability of research capacity building efforts. As a way of minimising exploitation, researchers in Africa recommended that genuine efforts be made to create transparent and equitable international health research partnerships. They suggested that this could be achieved through: having rules of engagement, enabling African researchers to contribute to the design and conduct of international health projects in Africa, and mutual and respectful exchange of experience and capacity between research collaborators. These were identified as hallmarks of equitable international health research collaborations in Africa. Genomics research and biobanking initiatives in Africa such as H3Africa have gone some way in
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Zabonick, Lisa A.
This qualitative autoethnographic-action research study examined how lack of voice as a special education student in the mid-1970s influenced my self-perception. This study also examined, through the use of action research, what influence storytelling had on teacher perceptions of students with disabilities. Autoethnographic data results were used…
Al Kuwaiti, Ahmed; Subbarayalu, Arun Vijay
Purpose: The purpose of this paper was to examine the perceptions of students of health sciences on research training programs offered at Saudi universities. Design/methodology/approach: A cross-sectional survey design was adopted to capture the perceptions of health science students about research training programs offered at selected Saudi…
This chapter outlines a critical, reflexive research agenda for environmental perception, interpretation and evaluation research (PIE). Here, PIE refers to all those studies that explore the ways in which people perceive, interpret and value the natural and the cultural environment
.... For this review, "perceptions of healthy eating" are defined as the public's and health professionals' meanings, understandings, views, attitudes and beliefs about healthy eating, eating for health, and healthy foods...
Moore, David R.; Halliday, Lorna F.; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...
Byars-Winston, Angela M.; Branchaw, Janet; Pfund, Christine; Leverett, Patrice; Newton, Joseph
Few studies have empirically investigated the specific factors in mentoring relationships between undergraduate researchers (mentees) and their mentors in the biological and life sciences that account for mentees' positive academic and career outcomes. Using archival evaluation data from more than 400 mentees gathered over a multi-year period (2005-2011) from several undergraduate biology research programs at a large, Midwestern research university, we validated existing evaluation measures of the mentored research experience and the mentor-mentee relationship. We used a subset of data from mentees (77% underrepresented racial/ethnic minorities) to test a hypothesized social cognitive career theory model of associations between mentees' academic outcomes and perceptions of their research mentoring relationships. Results from path analysis indicate that perceived mentor effectiveness indirectly predicted post-baccalaureate outcomes via research self-efficacy beliefs. Findings are discussed with implications for developing new and refining existing tools to measure this impact, programmatic interventions to increase the success of culturally diverse research mentees and future directions for research.
Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam
Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh, in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school.
Strickland, Justin C; Stoops, William W
Despite the prominence of human laboratory and clinical trial research in the development of interventions for substance use disorders, this research presents numerous ethical challenges. Ethical principles outlined in the Belmont Report, including respect for persons, beneficence, and justice, have traditionally guided research conduct. Few empirical studies exist examining substance abuse research ethics. The present study examined perceptions of beneficence and respect for persons in substance use research, including relative risk and desired monetary compensation, using an online sample of cocaine users. The study was conducted on Amazon.com's Mechanical Turk (mTurk), a crowdsourcing website used for survey-based research. Of 1764 individuals screened, 138 reported past year cocaine use. These respondents completed a battery of standardized and experimenter-designed questionnaires used to characterize each respondent's self-reported attitudes, beliefs, and behaviors about drug use and the relative risks and desired monetary compensation associated with research participation. Ratings of relative risk revealed that most respondents rated common research practices as presenting risk less than or equal to that of everyday life. Receiving experimental medication outside the hospital was rated as the most risky research activity, but on average was not rated as presenting more risk than everyday life. Desired compensation for research participation was associated with the perceived risk of research activities. Increases in desired compensation for participation were only observed for research perceived as much more risky than everyday activities. These findings indicate that cocaine users assess risk in a way that is consistent with standard research practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Dorning, Monica; Van Berkel, Derek B.; Semmens, Darius J.
Purpose of Review: Human perceptions of the landscape can influence land-use and land-management decisions. Recognizing the diversity of landscape perceptions across space and time is essential to understanding land change processes and emergent landscape patterns. We summarize the role of landscape perceptions in the land change process, demonstrate advances in quantifying and mapping landscape perceptions, and describe how these spatially explicit techniques have and may benefit land change research. Recent Findings: Mapping landscape perceptions is becoming increasingly common, particularly in research focused on quantifying ecosystem services provision. Spatial representations of landscape perceptions, often measured in terms of landscape values and functions, provide an avenue for matching social and environmental data in land change studies. Integrating these data can provide new insights into land change processes, contribute to landscape planning strategies, and guide the design and implementation of land change models. Summary: Challenges remain in creating spatial representations of human perceptions. Maps must be accompanied by descriptions of whose perceptions are being represented and the validity and uncertainty of those representations across space. With these considerations, rapid advancements in mapping landscape perceptions hold great promise for improving representation of human dimensions in landscape ecology and land change research.
Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.
Kane, Emily W.
Based on quantitative survey data and qualitative data from journal entries by students in a seminar focused on community-based research, undergraduate student perceptions of community partners are explored in the context of debates about the politics of knowledge. Student perceptions that frame community partners as the recipients of academic…
In research on European foreign policy two important axes of debate have been running relatively independently of each other for more than a decade: the study of the European Union as a normative power (NPE) and the study of external perceptions of the EU. However, the studies of external perception offer some findings that are central for the NPE debate. This article's argument is that the external perceptions literature points to a limited (if still identifiable) perception of the EU as a normative power depending on the geographical area. By comparison, the image of a powerful economic...
Maier, Joost X; Ghazanfar, Asif A
Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was not attributable to the absolute intensity of the sounds nor can it be attributed to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.
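A minimal sketch of how matched looming and receding noise stimuli of the kind described above might be constructed; the function names, ramp endpoints, and sampling parameters are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def make_noise_stimulus(duration_s=1.0, fs=16000, looming=True, seed=0):
    """White-noise burst with a linear amplitude ramp: rising for a
    'looming' (approaching) source, falling for a 'receding' one.
    The same envelope is used in both directions, only time-reversed."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    ramp = np.linspace(0.05, 1.0, n)       # intensity grows toward the end
    env = ramp if looming else ramp[::-1]  # receding = time-reversed envelope
    return noise * env

def rms(x):
    """Root-mean-square amplitude of a signal segment."""
    return float(np.sqrt(np.mean(x ** 2)))

loom = make_noise_stimulus(looming=True)
recede = make_noise_stimulus(looming=False)

# Looming energy is concentrated late in the stimulus; receding energy early.
half = len(loom) // 2
print(rms(loom[half:]) > rms(loom[:half]))
```

Because the two stimuli share one amplitude envelope (merely reversed), any asymmetry in neural responses cannot be explained by overall intensity alone, which is the control logic the abstract describes.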
Albert, Mathieu; Laberge, Suzanne; Hodges, Brian D.
Funding agencies in Canada are attempting to break down the organizational boundaries between disciplines to promote interdisciplinary research and foster the integration of the social sciences into the health research field. This paper explores the extent to which biomedical and clinician scientists' perceptions of social science research operate…
Peter Q. Pfordresher
This review summarizes recent research on the way in which music performance may rely on the perception of sounds that accompany actions (termed auditory feedback). Alterations of auditory feedback can profoundly disrupt performance, though not all alterations cause disruption and different alterations generate different types of disruption. Recent results have revealed a basic distinction between the role of feedback contents (musical pitch) and the degree to which feedback onsets are synchronized with actions. These results further suggest a theoretical framework for the coordination of actions with feedback in which perception and action share a common representation of sequence structure.
To relate scientific evidence with subjective interpretations relevant to the construction and appreciation of visual images, this paper reviews the literature pertinent to the processes involving the perception of visual images, the distinct functions of the left and right hemispheres of the human brain in recording and interpreting visual data,…
Keib, Carrie N; Cailor, Stephanie M; Kiersma, Mary E; Chen, Aleda M H
Nurses need a sound education in research and evidence-based practice (EBP) to provide patients with optimal care, but current teaching methods could be more effective. To evaluate the changes in nursing students' 1) perceptions of research and EBP, 2) confidence in research and EBP, and 3) interest in research participation after completing a course in research and EBP. A pre-post assessment design was utilized to compare changes in students. This project was conducted at a small, private liberal arts institution with Bachelor of Science in Nursing (BSN) students. Two cohorts of third-year BSN students (Year 1 N=55, Year 2 N=54) who were taking a required, semester-long Nursing Research and EBP course. Students' perceptions of and confidence in research and EBP were assessed pre- and post-semester using the Confidence in Research and EBP survey, which contained 7 demographic items, 9 Research Perceptions items, and 19 Confidence in Research items (5-point Likert scale; 1=Not at all confident, 5=Extremely confident). Two years of data were collected and analyzed in SPSS v.24.0. Wilcoxon signed-ranks tests and Mann-Whitney U tests were utilized to examine the data. Students had significant improvements in perceptions of and confidence in research and EBP; changes in students' plans to perform research or to participate in research in the future were also examined. A Research and EBP course is an effective way to improve student perceptions of and confidence in research and EBP, increasing the likelihood of applying these skills to future nursing practice. Copyright © 2017. Published by Elsevier Ltd.
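The two nonparametric tests named in this abstract are standard for Likert-scale pre/post designs. A minimal sketch of that analysis, with fabricated example ratings (the cohort sizes echo the abstract, but the data and variable names are invented for illustration):

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical 5-point Likert confidence ratings for one cohort (n = 55),
# pre- and post-semester; post scores are shifted upward to mimic improvement.
pre = rng.integers(1, 5, size=55)                       # ratings 1..4
post = np.clip(pre + rng.integers(0, 2, size=55), 1, 5)  # some students improve

# Paired pre/post comparison within a cohort: Wilcoxon signed-rank test.
stat_w, p_w = wilcoxon(pre, post)

# Independent comparison across two cohorts: Mann-Whitney U test.
cohort2 = rng.integers(1, 6, size=54)
stat_u, p_u = mannwhitneyu(post, cohort2)

print(p_w, p_u)
```

The signed-rank test respects the pairing of each student's pre and post scores, while Mann-Whitney compares unrelated groups; both avoid assuming the ordinal Likert data are normally distributed.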
BACKGROUND: Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was suggested recently that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady-state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. METHODS AND FINDINGS: Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe, and phase couplings between the anterior cingulum and the right parietal lobe, showed significant condition x group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. CONCLUSIONS: To the best of our knowledge this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and a therapeutic intervention able to change this network should result in relief of tinnitus.
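Phase synchronization between two recording sites is commonly quantified with the phase-locking value (PLV): extract each signal's instantaneous phase via the Hilbert transform and take the magnitude of the mean phase-difference vector. A minimal sketch with synthetic signals (the abstract does not specify this exact estimator; the function and parameters here are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: |mean(exp(i*(phi_x - phi_y)))|.
    1.0 means a perfectly constant phase difference; values near 0
    indicate no consistent phase coupling."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

t = np.linspace(0, 1, 4000, endpoint=False)
a = np.sin(2 * np.pi * 40 * t)               # 40 Hz oscillation
b = np.sin(2 * np.pi * 40 * t + np.pi / 3)   # same frequency, fixed phase lag

print(plv(a, b))  # close to 1: the phase difference is constant
```

Two unrelated noise signals, by contrast, yield a PLV near zero, which is why the measure indexes coupling rather than mere co-activation.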
Stephen Michael Town
Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.
Sandesh, Nagarajappa; Wahrekar, Shilpa
With the increasing demand to publish due to 'publish or perish' culture among research and academic institutions, the choice of a journal for publishing scientific articles becomes very important. A publication with many citations and high impact factor can propel researchers in their academic careers. The aim of this study is to explore the perceptions of medical and dental researchers in India about the important criteria to consider while selecting scientific journals for publishing their research. 206 faculty staff members from three medical and five dental institutions were selected through convenience sampling. The study participants completed a questionnaire with 24 closed ended questions on various factors related to journal selection for publication. Factors such as publication frequency, journal citation, indexing, peer-review, impact factor, publication fees, acceptance or rejection rate, publishing house, previous submission and online submission process were considered. The responses were recorded using a Likert scale. Cronbach's alpha as a measure of internal consistency or homogeneity was 0.909. Descriptive statistics and Mann-Whitney U test were employed for comparison of responses among study participants. The mean weight of 24 criteria on a scale of 0 to 4 varied between 2.13 and 3.45. The results showed that indexing of journal (3.45±0.74), online submission (3.24±0.83), impact factor (3.11±0.91), peer-review process (3.0±1.02) and publication fees (2.99±1.11) were among the most important criteria to consider in journal selection. Of the 24 factors considered by health researchers for journal selection, the most important were Journal indexing, online submission, impact factor, peer-review and publication fees. Compared to dental researchers, medical researchers perceived open access and peer-review process as significantly more important criteria.
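The internal-consistency statistic reported in this abstract, Cronbach's alpha, can be computed directly from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with fabricated Likert responses (the data and names are invented for illustration, not drawn from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's total
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: rows = respondents, columns = questionnaire items.
scores = np.array([
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [3, 3, 3, 4],
    [1, 2, 1, 1],
    [4, 3, 4, 4],
])
print(cronbach_alpha(scores))
```

When items covary strongly (respondents who rate one item high rate the others high too), the total-score variance dominates the summed item variances and alpha approaches 1, which is why the study's alpha of 0.909 indicates high internal consistency.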
Mathias, Brian; Gehring, William J; Palmer, Caroline
The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.
Gulmez, Deniz; Kozan, Hatice Irem Ozteke
The current study used a qualitative research method to examine research assistants' perceptions of the concepts of "Academic adviser" and "Academic life" via metaphors. Participants consisted of 82 research assistants (45 of them women) working in Educational Faculties in Turkey. In data collection, for…
Castelló, Montserrat; McAlpine, Lynn; Pyhältö, Kirsi
Post-PhD researchers working at universities are contributors to a country's productivity and competitiveness mostly through writing, which becomes a means to establish their scholarly identity as they contribute to knowledge. However, little is known about researchers' writing perceptions, and their interrelations with engagement in research,…
Brew, Angela; Mantai, Lilia
How can universities ensure that strategic aims to integrate research and teaching through engaging students in research-based experiences are effectively realised within institutions? This paper reports on the findings of a qualitative study exploring academics' perceptions of the challenges and barriers to implementing undergraduate research…
Categorization enables listeners to efficiently encode and respond to auditory stimuli. Behavioral evidence for auditory categorization has been well documented across a broad range of human and non-human animal species. Moreover, neural correlates of auditory categorization have been documented in a variety of different brain regions in the ventral auditory pathway, which is thought to underlie auditory-object processing and auditory perception. Here, we review and discuss how neural representations of auditory categories are transformed across different scales of neural organization in the ventral auditory pathway: from across different brain areas to within local microcircuits. We propose different neural transformations across different scales of neural organization in auditory categorization. Along the ascending auditory system in the ventral pathway, there is a progression in the encoding of categories from simple acoustic categories to categories for abstract information. On the other hand, in local microcircuits, different classes of neurons differentially compute categorical information.
Begault, Durand R.; Godfroy, Martine; Sandor, Aniko; Holden, Kritina
The design of caution-warning signals for NASA's Crew Exploration Vehicle (CEV) and other future spacecraft will be based both on best practices from current research and on evaluation of current alarms. A design approach is presented based upon cross-disciplinary examination of psychoacoustic research, human factors experience, aerospace practices, and acoustical engineering requirements. A listening test with thirteen participants was performed involving ranking and grading of current and newly developed caution-warning stimuli under three conditions: (1) alarm levels adjusted for compliance with ISO 7731, "Danger signals for work places - Auditory Danger Signals", (2) alarm levels adjusted to an overall 15 dBA s/n ratio and (3) simulated codec low-pass filtering. Questionnaire data yielded useful insights regarding cognitive associations with the sounds.
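A rough sketch of the level arithmetic behind the 15 dBA signal-to-noise condition described above. The function names and the simple additive SNR rule are illustrative assumptions, not taken from the paper or from ISO 7731; the energetic-sum formula for combining incoherent sources is standard acoustics.

```python
import math

def alarm_level_for_snr(background_dba, snr_db=15.0):
    """Alarm presentation level needed to sit a target signal-to-noise
    ratio (in dB) above an A-weighted background noise level."""
    return background_dba + snr_db

def combine_levels(*levels_db):
    """Energetic sum of incoherent sound sources:
    L_total = 10 * log10(sum(10 ** (L_i / 10)))."""
    return 10.0 * math.log10(sum(10 ** (l / 10.0) for l in levels_db))

print(alarm_level_for_snr(60.0))            # 75.0 dBA for a 60 dBA background
print(round(combine_levels(60.0, 60.0), 2))  # two equal sources add ~3 dB: 63.01
```

The +3 dB result for two equal sources illustrates why alarm levels must be set against the combined background of all cabin noise sources, not any single one.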
Guediche, Sara; Blumstein, Sheila E.; Fiez, Julie A.; Holt, Lori L.
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech. PMID:24427119
The theme of the Bachelor work is: "Research on the Tele2 campaign 'Meteorite': the real and the desirable perception by the target audience." Several subjects are described in this work, such as the communication process from a marketing perspective, integrated marketing communication, campaign planning, and guerrilla marketing and its tools. The problematics — the perception of the target audience — lead to the objective: finding out if the desirable perception which was planned...
Schröger, Erich; Kotz, Sonja A; SanMiguel, Iria
Prediction and attention are fundamental brain functions in the service of perception and action. Theories on prediction relate to neural (mental) models inferring about (present or future) sensory or action-related information, whereas theories of attention are about the control of information flow underlying perception and action. Both concepts are related and not always clearly distinguishable. The special issue includes current research on prediction and attention in various subfields of perception and action. It especially considers interactions between predictive and attentive processes, which constitute a newly emerging and highly interesting field of research. As outlined in this editorial, the contributions in this special issue allow specifying as well as bridging concepts on prediction and attention. The joint consideration of prediction and attention also reveals common functional principles of perception and action. Copyright © 2015 Elsevier B.V. All rights reserved.
Pediatric hearing evaluation based on pure tone audiometry does not always reflect how a child hears in everyday life. This practice is inappropriate when evaluating the difficulties children experiencing auditory processing disorder (APD) face in school or on the playground. Despite the marked increase in research on pediatric APD, there remains limited access to proper evaluation worldwide. This perspective article presents five common misconceptions about APD that contribute to inappropriate or limited management in children experiencing these deficits. The misconceptions discussed are: (1) the disorder cannot be diagnosed due to the lack of a gold-standard diagnostic test; (2) making generalizations based on profiles of children suspected of APD but not diagnosed with the disorder; (3) it is best to discard an APD diagnosis when another disorder is present; (4) arguing that the known link between auditory perception and higher cognitive function precludes the validity of APD as a clinical entity; and (5) APD is not a clinical entity. These five misconceptions are described and rebutted using published data as well as critical thinking on currently available knowledge on APD.
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…
Emine Merve Kaya
Bottom-up attention is a sensory-driven selection mechanism that directs perception towards a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models, whereby local or global contrast is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention: providing the first examination of the space of auditory saliency spanning pitch, intensity, and timbre, and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes.
Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa
A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.
Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre
Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This does not only include regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of
Woodward, Paul J.; And Others
A factor analysis of the Carrow Auditory-Visual Abilities Test identified common factors in a population of 1,032 nondisabled 4- through 10-year-olds and a clinical population of language-disordered or learning-disabled peers with auditory and/or visual perception problems. Most subtests fell into factors attributed to auditory or visual…
Sequences of higher-frequency A and lower-frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation, and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M[subscript age] = 53.19 years)…
... fields of social sciences, clinical and basic sciences. Research experience ranged from one to thirty four years. 27% had had formal training in research ethics; the remaining 73% had a vague idea about research ethics. All respondents appreciated the importance of confidentiality although data management procedures ...
Research has played an important role in water resource management and a consensus on research objectives would increase the efficiency of these practices. In this paper we aimed to elicit the views of attendees of the 3rd Orange River Basin Symposium regarding water-related research, by using both quantitative and ...
Lev, Elise L; Kolassa, John; Bakken, Lori L
Mentoring in nursing is an important process for socializing nurse researchers, developing a body of professional knowledge, and influencing career choices of students. Self-efficacy (Bandura, 1997) is concerned with one's perceived ability to perform tasks within a specific domain. The purpose of this study was to compare undergraduate and graduate students' perceptions of their abilities to pursue research (research self-efficacy) with their mentors' perceptions. A cross-sectional design was used to study mentors in any academic discipline who received external funding and worked with an undergraduate or graduate student on the research study. Recruitment and data collection were completed using the Internet and included 21 faculty mentor and student dyads. The Clinical Research Appraisal Inventory was used to measure research self-efficacy. Differences between the faculty mentors' perceptions of the students' confidence in research and the students' own perceptions were significant. Inaccurate self-efficacy appraisals can result in opportunities forsaken and careers not pursued. Assisting mentors to guide students' skill perfection may increase students' choice of research careers, promote the effectiveness of mentorship, aid in the development of a body of professional knowledge, and benefit the careers of both mentors and students. Copyright 2009 Elsevier Ltd. All rights reserved.
In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians have been investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian-classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds compared to nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, drawn from studies of Western-classical musicians, that auditory temporal coding is enhanced in musicians.
Paul W. Irving
[This paper is part of the Focused Collection on Upper Division Physics Courses.] As part of a longitudinal study into identity development in upper-level physics students, we used a phenomenographic research method to examine students’ perceptions of what it means to be a physicist. Analysis revealed six different categories of perception of what it means to be a physicist. We found the following themes: research and its association with being a physicist, differences in mindset, and exclusivity of accomplishments. The paper highlights how these perceptions relate to two communities of practice that the students are members of, and also highlights the importance of undergraduate research for students to transition from the physics undergraduate community of practice to the community of practicing physicists.
Evans, Gemma; Duggan, Ravani; Boldy, Duncan
To explore perceptions about nursing research of registered nurses (RNs) who were engaged in research activities at a metropolitan hospital in Western Australia. In order to improve RNs' research engagement and promote evidence-based practice, Nurse Research Consultants (NRCs) were appointed jointly by the study hospital and a local university. This joint appointment commenced in 2004 in the hospital's emergency department. Early findings indicated that the NRC role was effective in assisting registered nurses with research activities, and hence the NRC role was expanded to all areas of the hospital. However, no formal investigation had been carried out to explore the effect of the NRC role on RNs' engagement with nursing research across the hospital. A qualitative interview process. Ten RN participants from the adult and paediatric wards were interviewed. Audio-recorded data were transcribed verbatim and thematic analysis was undertaken. Four main themes were identified, namely: perceptions of nursing research, perceived enablers, perceived barriers, and improving research engagement. There was some overlap, with some sub-themes being linked with more than one theme. This appeared to be due to differing levels of research education and research engagement. Some of the RNs that participated in this study were experienced in the conduct of research, finding adequate support from NRCs in the workplace, whilst others experienced barriers limiting their involvement in nursing research activities. These barriers could be reduced with additional education, support, improved communication, time, and opportunities to undertake research activities.
Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.
Jun 29, 2011 ... Research has played an important role in water resource management and a consensus on research objectives would increase the efficiency of these ... The availability of water underpins the very social and economic fabric of the southern ... Council USA, 2004). Scientists and practitioners have often.
Roosa, Mark W.; White, Rebecca M. B.; Zeiders, Katharine H.; Tein, Jenn-Yun
Accumulating research demonstrates that both archival indicators and residents' self-reports of neighborhood conditions are useful predictors of a variety of physical health, mental health, substance use, criminal, and educational outcomes. Although studies have shown these two types of measures are often related, no research has systematically…
Behar-Horenstein, Linda S; Beck, Diane E; Su, Yu
Pharmacy educators have identified that pharmacy faculty need a better understanding of educational research to facilitate improvement of teaching, curricula, and related outcomes. However, the specific faculty development needs have not been assessed. The purpose of this study was to investigate self-reported confidence among clinical doctor of pharmacy faculty in skills essential for conducting educational research. Faculty members with primary responsibilities in teaching at the University of Florida College of Pharmacy were invited to take the Adapted Self-Efficacy in Research Measure (ASERM). Descriptive analysis and independent samples t-tests were used to compare the self-efficacy items by faculty rank, gender, and years of experience. Twenty-two of the 37 faculty members answered the 30-item survey, which identified their self-efficacy in items and categories of skills, including writing skills, statistical skills, research design, and research management and dissemination in educational research. Senior faculty had significantly higher confidence than junior faculty on seven items. Participants who had worked more than ten years had statistically higher confidence in preparing and submitting grant proposals to obtain funding for educational research. Skills where both junior and senior faculty had low confidence related to using non-traditional methods, such as qualitative methods, and identifying funding resources for educational research. Findings from the ASERM provided insights among pharmacy educators regarding self-efficacy related to skills needed for educational research, options for faculty development opportunities, and actions for improving educational research knowledge and skills among them. Copyright © 2017 Elsevier Inc. All rights reserved.
Jones, Anna Barbara
Auditory comprehension, the ability to understand spoken language, consists of a number of different auditory processing skills. In the five studies presented in this thesis I investigated both intact and impaired auditory comprehension at different levels: voice versus phoneme perception, as well as single word auditory comprehension in terms of phonemic and semantic content. In the first study, using sounds from different continua of ‘male’-/pæ/ to ‘female’-/tæ/ and ‘male’...
Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine
Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
González-Saldivar, Gerardo; Rodríguez-Gutiérrez, René; Viramontes-Madrid, José Luis; Salcido-Montenegro, Alejandro; Carlos-Reyna, Kevin Erick Gabriel; Treviño-Alvarez, Andrés Marcelo; Álvarez-Villalobos, Neri Alejandro; González-González, José Gerardo
Background There is scarce scientific information assessing participants’ perception of pharmaceutical research in developed and developing countries concerning the risks, safety, and purpose of clinical trials. Methods To assess the perception that 604 trial participants (cases) and 604 nonparticipants (controls) of pharmaceutical clinical trials have about pharmaceutical clinical research, we surveyed participants with one of four chronic diseases from 12 research sites throughout Mexico. Results Participation in clinical trials positively influences the perception of pharmaceutical clinical research. More cases (65.4%) than controls (50.7%) perceived that the main purpose of pharmaceutical research is to cure more diseases and to do so more effectively. In addition, more cases considered that there are significant benefits when participating in a research study, such as excellent medical care and extra free services, with this being the most important motivation to participate for both groups (cases 52%, controls 54.5%). We also found a sense of trust in their physicians to deal with adverse events, and the perception that clinical research is a benefit to their health, rather than a risk. More controls believed that clinical trial participants’ health is put at risk (57% vs 33.3%). More cases (99.2%) than controls (77.5%) would recommend participating in a clinical trial, and 90% of cases would enroll in a clinical trial again. Conclusion Participation in clinical trials positively influences the perception that participants have about pharmaceutical clinical research when compared to nonparticipants. This information needs to be conveyed to clinicians, public health authorities, and general population to overcome misconceptions. PMID:27199549
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples
Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie
The neural mechanism of auditory-visual speech integration is a central topic in the study of multimodal perception. Articulation conveys speech information that helps detect and disambiguate auditory speech. As important characteristics of the EEG, oscillations and their synchronization have been increasingly applied in cognition research. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, and investigated the oscillatory activities and synchrony modes underlying evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and its synchronization differences were found to relate to the gesture N1-P2, which occurs in the earlier stage of coding speech from articulatory action. Alpha oscillation and its synchronization, related to the auditory N1-P2, may be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. Changing visual gestures enhanced the interaction of auditory brain regions. These results provide explanations for the changes in power and connectivity of event-evoked oscillatory activities that matched the ERPs during auditory-visual speech integration.
Camporesi, Tiziano; Florio, Massimo; Giffoni, Francesco
More than 36,000 students and post-docs will be involved in experiments at the Large Hadron Collider (LHC) until 2025. Do they expect that their learning experience will have an impact on their professional future? By drawing from earlier salary expectations literature, this paper proposes a framework aiming at explaining the professional expectations of early career researchers (ECR) at the LHC. Results from different ordered logistic models suggest that experiential learning at LHC positively correlates with both current and former students' salary expectations. At least two not mutually exclusive explanations underlie such a relationship. First, the training at LHC gives early career researchers valuable expertise, which in turn affects salary expectations; secondly, respondents recognise that the LHC research experience per se may act as a signal in the labour market. Respondents put a price tag on their experience at LHC, a "salary premium" ranging from 5% to 12% in terms of future salaries compared with...
Lunney, David; Morrison, Robert C.
Our research group has been working for several years on the development of auditory alternatives to visual graphs, primarily in order to give blind science students and scientists access to instrumental measurements. In the course of this work we have tried several modes for auditory presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music, and various non-musical sounds. Our most successful translation of data into sound has been presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two compounds are visibly different, their musical patterns will be audibly different. Other possibilities for auditory presentation of data are also described, among them listening to Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).
Nardon, N.; Berg, van den A.
This report contains the results of an exploration of research and expertise activities on the theme of landscape perception and quality in France. The research was carried out by a French agronomy engineer. The objectives of this research project were to inform French researchers about Alterra's
Berthelsen, Connie Bøttcher; Hølge-Hazelton, Bibi
knowledge and practical research competencies among orthopaedic nurses and their interest and motivation to increase these in everyday practice. A newly developed questionnaire was given to a convenience sample of 87 orthopaedic nurses. Forty three orthopaedic nurses (49.4%) completed the questionnaire....... The results indicated that despite the majority of orthopaedic nurses having low self-perceived theoretical knowledge and practical research competencies, their interest and motivation to improve these were high, especially their inner motivation. However, the nurses' inner motivation was inhibited by a lack...
Stebbings, Kevin A; Lesicko, Alexandria M H; Llano, Daniel A
We live in a world imbued with a rich mixture of complex sounds. Successful acoustic communication requires the ability to extract meaning from those sounds, even when degraded. One strategy used by the auditory system is to harness high-level contextual cues to modulate the perception of incoming sounds. An ideal substrate for this process is the massive set of top-down projections emanating from virtually every level of the auditory system. In this review, we provide a molecular and circuit-level description of one of the largest of these pathways: the auditory corticocollicular pathway. While its functional role remains to be fully elucidated, activation of this projection system can rapidly and profoundly change the tuning of neurons in the inferior colliculus. Several specific issues are reviewed. First, we describe the complex heterogeneous anatomical organization of the corticocollicular pathway, with particular emphasis on the topography of the pathway. We also review the laminar origin of the corticocollicular projection and discuss known physiological and morphological differences between subsets of corticocollicular cells. Finally, we discuss recent findings about the molecular micro-organization of the inferior colliculus and how it interfaces with corticocollicular termination patterns. Given the assortment of molecular tools now available to the investigator, it is hoped that this review will help guide future research on the role of this pathway in normal hearing. Copyright © 2014 Elsevier B.V. All rights reserved.
Results. All medical schools in SA were sampled, and 51.5% (124/241) of surgical registrars completed the questionnaire. Challenges ... research experience, may be a type II error.
Kalichman, Michael W.; Friedman, Paul J.
A survey of 549 biomedical trainees (graduate and postdoctoral students, medical students, residents, fellows) investigated training in research ethics, observation of scientific misconduct, and willingness to select, omit, or fabricate data to win a grant or publish a paper. The efficacy of current ethics instruction is questioned. (Author/MSE)
Berthelsen, Connie Bøttcher; Hølge-Hazelton, Bibi
of acceptance from colleagues and section head nurses and a shortage of time. This study forms a baseline as a part of a larger study and contributes knowledge useful to other orthopaedic departments with an interest in optimizing nursing research to improve orthopaedic nursing care quality....
... attitude to reading research articles and their utilization in nursing practice was conducted in University College Hospital (UCH) and Adeoyo Maternity Teaching Hospital (AMTH) both in the city of Ibadan, Nigeria. Data were collected through a 50-item-structured questionnaire using purposive sampling technique to select ...
Tiippana, Kaisa; Möttönen, Riikka; Schwartz, Jean-Luc
This research topic presents speech as a natural, well-learned, multisensory communication signal, processed by multiple mechanisms. Reflecting the general status of the field, most articles focus on audiovisual speech perception and many utilize the McGurk effect, which arises when discrepant visual and auditory speech stimuli are presented (McGurk and MacDonald, 1976). Tiippana (2014) argues that the McGurk effect can be used as a proxy for multisensory integration p...
Gerardo González-Saldivar,1 René Rodríguez-Gutiérrez,2 José Luis Viramontes-Madrid,3 Alejandro Salcido-Montenegro,2 Kevin Erick Gabriel Carlos-Reyna,2 Andrés Marcelo Treviño-Alvarez,2 Neri Alejandro Álvarez-Villalobos,4 José Gerardo González-González2 1Ophthalmology Department, 2Endocrinology Division, Hospital Universitario “Dr. José E. González”, Facultad de Medicina, Universidad Autónoma de Nuevo León, Monterrey, Nuevo León, 3Instituto Nacional de Salud Pública, Cuernavaca, Morelos, 4Medical Statistics Department, Hospital Universitario “Dr. José E. González”, Facultad de Medicina, Universidad Autónoma de Nuevo León, Monterrey, Nuevo León, Mexico. Background: There is scarce scientific information assessing participants’ perception of pharmaceutical research in developed and developing countries concerning the risks, safety, and purpose of clinical trials. Methods: To assess the perception that 604 trial participants (cases) and 604 nonparticipants (controls) of pharmaceutical clinical trials have about pharmaceutical clinical research, we surveyed participants with one of four chronic diseases from 12 research sites throughout Mexico. Results: Participation in clinical trials positively influences the perception of pharmaceutical clinical research. More cases (65.4%) than controls (50.7%) perceived that the main purpose of pharmaceutical research is to cure more diseases and to do so more effectively. In addition, more cases considered that there are significant benefits when participating in a research study, such as excellent medical care and extra free services, with this being the most important motivation to participate for both groups (cases 52%, controls 54.5%). We also found a sense of trust in their physicians to deal with adverse events, and the perception that clinical research is a benefit to their health, rather than a risk. More controls believed that clinical trial participants’ health is put at risk
Hall, J; Hubbard, A; Neely, S; Tubis, A
How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...
Research on perceptions and attitudes regarding intimate partner violence (IPV), a prominent predictor of IPV, is limited, and surveys on the relationships of the influencing factors are even rarer. Using a convenience sample of 2,057 students and assessed by the Revised Conflict Tactics Scale, this study explored Chinese university students' perceptions and attitudes concerning IPV to improve IPV prevention programs. It focused on the existences of the different perceptions and attitudes regarding gender, residence, major, and age under the same condition of educational attainment. Significant gender differences were found, with female students possessing better perceptions, which indicated that with the same education levels, the perceptions of females were better than those of males. Significant differences were also found for the first time in the literature between science students and arts students, with the latter holding better attitudes. No significant differences were seen between students from rural areas and students from urban areas, suggesting that with the same educational attainment, there were no perception differences between rural and urban residents. No significant perception differences were found among freshmen, sophomores, juniors, and seniors, which revealed that neither university education nor urban life had a significant effect on perceptions and attitudes concerning IPV for students who had finished high school education. In conclusion, the results of the current study indicated that among the other factors such as gender, residence, and age, education was the most powerful factor influencing perceptions and attitudes concerning IPV. © The Author(s) 2016.
Corporation for Public Broadcasting, Washington, DC.
In February 1993, the Corporation for Public Broadcasting commissioned focus groups with Hispanic viewers to determine the perceptions of public television by Hispanics. The project was conducted by Norman Hecht Research and included Hispanic viewers and non-viewers in four cities--New York, Miami, San Antonio, and Los Angeles. The topic for…
Myers, Nancy; Dillard, Benita R.
California Lutheran University is a regional site for the California Reading and Literature Project (CRLP). In 2010, CRLP began a two-year longitudinal study to examine the effects that participating in an institute called Reframing Teacher Leadership: Action Research Study Group had on PreK-12 teachers' attitudes and perceptions. The foundation…
Vandermaas-Peeler, Maureen; Miller, Paul C.; Peeples, Tim
Although an increasing number of studies have examined students' participation in undergraduate research (UR), little is known about faculty perceptions of mentoring in this context. The purpose of this exploratory study was to investigate four aspects of mentoring UR, including how faculty define high-quality UR mentoring and operationalize it in…
Murakami, Takenobu; Restle, Julia; Ziemann, Ulf
A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…
The sense of taste is among the regulatory mechanisms for acceptance or rejection of foods. In oral submucous fibrosis (OSMF) patients, impairment of taste sensation has not received much attention, owing to limited research work in the field. This study was conducted to analyze the taste impairment in OSMF patients by using the four basic tastes: sweet, sour, salty and bitter, among a group of 30 subjects, using physiological taste stimuli (tastants). In OSMF, significant taste alteration was found with sweet, followed by salt, bitter and sour.
Jordan W. Smith
Immersive virtual environment (IVE) technology offers a wide range of potential benefits to research focused on understanding how individuals perceive and respond to built and natural environments. In an effort to broaden awareness and use of IVE technology in perception, preference and behavior research, this review paper describes how IVE technology can be used to complement more traditional methods commonly applied in public health research. The paper also describes a relatively simple workflow for creating and displaying 360° virtual environments of built and natural settings and presents two freely-available and customizable applications that scientists from a variety of disciplines, including public health, can use to advance their research into human preferences, perceptions and behaviors related to built and natural settings.
The present research proposes that the presence of auditory feedback increases satisfaction with the shopping experience, confidence in the retailer, and the likelihood to return to the retailer...
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Schwartz, Marc S; Wilkinson, Eric P
Auditory brainstem implants (ABIs), which have previously been used to restore auditory perception to deaf patients with neurofibromatosis type 2 (NF2), are now being utilized in other situations, including treatment of congenitally deaf children with cochlear malformations or cochlear nerve deficiencies. Concurrent with this expansion of indications, the number of centers placing and expressing interest in placing ABIs has proliferated. Because ABI placement involves posterior fossa craniotomy in order to access the site of implantation on the cochlear nucleus complex of the brainstem and is not without significant risk, we aim to highlight issues important in developing and maintaining successful ABI programs that would be in the best interests of patients. Especially with pediatric patients, the ultimate benefits of implantation will be known only after years of growth and development. These benefits have yet to be fully elucidated and continue to be an area of controversy. The limited number of publications in this area were reviewed. Review of the current literature was performed. Disease processes, risk/benefit analyses, degrees of evidence, and U.S. Food and Drug Administration approvals differ among various categories of patients in whom auditory brainstem implantation could be considered for use. We suggest sets of criteria necessary for the development of successful and sustaining ABI programs, including programs for NF2 patients, postlingually deafened adult nonneurofibromatosis type 2 patients, and congenitally deaf pediatric patients. Laryngoscope, 127:1909-1915, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Cappe, Céline; Thut, Gregor; Romei, Vincenzo; Murray, Micah M
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.
O'Brien, Tara; Hathaway, Donna
Nursing students in baccalaureate programs report that research is not visible in practice, and faculty conducting research report rarely interacting with students in undergraduate nursing programs. We examined student and faculty perceptions of a research internship embedded in an existing evidence-based practice course. Students (n = 15) and faculty (n = 5) viewed the internship as a positive experience that provided meaningful hands-on skills while generating interest in a potential research career. The internship also provided faculty the opportunity to identify potential doctoral students.
Goycoolea, Marcos; Levy, Raquel; Ramírez, Carlos
There is seemingly some inherent component in selected musical compositions that elicits specific emotional perceptions, feelings, and physical conduct. The purpose of the study was to determine whether the emotional perceptions of those listening to classical music are inherent in the composition or acquired by the listeners. Fifteen kindergarten students, aged 5 years, from three different sociocultural groups, were evaluated. They were exposed to portions of five purposefully selected classical compositions and asked to describe their emotions when listening to these musical pieces. All were instrumental compositions without human voices or spoken language. In addition, they were played to an audience of an age at which the listeners were capable of describing their perceptions and supposedly had no significant previous experience of classical music. Regardless of their sociocultural background, the children in the three groups consistently identified similar emotions (e.g. fear, happiness, sadness), feelings (e.g. love), and mental images (e.g. giants or dangerous animals walking) when listening to specific compositions. In addition, the musical compositions generated physical conduct that was reflected in the children's corporal expressions. Although the sensations were similar, the way of expressing them differed according to their background.
D'Abramo, Flavio; Schildmann, Jan; Vollmann, Jochen
Appropriate information and consent has been one of the most intensely discussed topics within the context of biobank research. In parallel to the normative debate, many socio-empirical studies have been conducted to gather experiences, preferences and views of patients, healthy research participants and further stakeholders. However, there is a scarcity of literature which connects the normative debate about justifications for different consent models with findings gained in empirical research. In this paper we discuss findings of a limited review of socio-empirical research on patients' and healthy research participants' experiences and views regarding consent to biobank research in light of ethical principles for appropriate information and consent. Review question: Which empirical data are available on research participants' perceptions and views regarding information and elicitation of consent for biobank research? Search of articles published up to March 1, 2014, in PubMed. Review of abstracts and potentially relevant full text articles by two authors independently. As categories for content analysis we defined (i) understanding or recall of information, (ii) preferences regarding information or consent, and (iii) research participants' concerns. The search in PubMed yielded 337 abstracts of which 10 articles were included in this study. Approaches to information and consent varied considerably across the selected studies. The majority of research participants opted for some version of limited consent when being informed about such a possibility. Among the factors influencing the type of preferred consent were information about sponsoring of biobank research by the pharmaceutical industry and participants' trade-off between privacy and perceived utility. Studies investigating research participants' understanding and recall regarding the consent procedure indicated a considerable lack of both aspects. Research participants' perceptions of benefits and harms differ across
Favrot, Sylvain Emmanuel; Buchholz, Jörg
the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....
Corluka, Adrijana; Hyder, Adnan A; Winch, Peter J; Segura, Elsa
Much of the published research on evidence-informed health policymaking in low- and middle-income countries has focused on policymakers, overlooking the role of health researchers in the research-to-policy process. Through 20 semi-structured, in-depth qualitative interviews conducted with researchers in Argentina's rural northwest and the capital of Buenos Aires, we explore the perspectives, experiences and attitudes of Argentine health researchers regarding the use and impact of health research in policymaking in Argentina. We find that the researcher, and the researcher's function of generating evidence, is nested within a broader complex system that influences the researcher's interaction with policymaking. This system comprises communities of practice, government departments/civil society organizations, bureaucratic processes and political governance and executive leadership. At the individual level, researcher capacity and determinants of research availability also play a role in contributing to evidence-informed policymaking. In addition, we find a recurrent theme around 'lack of trust' and explore the role of trust within a research system, finding that researchers' distrust towards policymakers and even other researchers are linked inextricably to the sociopolitical history of Argentina, which contributes to shaping researchers' identities in opposition to policymakers. For policymakers, national research councils and funders of national health research systems, this article provides a deeper understanding of researchers' perceptions which can help inform and improve programme design when developing interventions to enhance research utilization and develop equitable and rational health policies. For donors and development agencies interested in health research capacity building and achieving development goals, this research demonstrates a need for investment in building research capacity and training health researchers to interact with the public policy
Tess K. Koerner
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of the relationships between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity of applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
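The hazard of pooling repeated measures, which motivates the mixed-effects approach described in this abstract, can be sketched with a toy example. The data and the within-subject centering below are hypothetical illustrations (plain Python, not the authors' LME software); centering by subject is only a crude stand-in for the random intercepts an LME model would estimate:

```python
# Toy demonstration (hypothetical data): a pooled Pearson correlation can
# even reverse the sign of a trend that holds within every subject when
# observations are nested within subjects with different baselines.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Three subjects; within each, the response DECREASES as the predictor grows,
# but subjects with higher baselines happen to have larger predictor values.
subjects = {
    "s1": ([1, 2, 3], [10, 9, 8]),
    "s2": ([4, 5, 6], [13, 12, 11]),
    "s3": ([7, 8, 9], [16, 15, 14]),
}

pooled_x = [x for xs, _ in subjects.values() for x in xs]
pooled_y = [y for _, ys in subjects.values() for y in ys]

# Within-subject centering removes each subject's baseline before correlating.
cent_x = [x - sum(xs) / len(xs) for xs, _ in subjects.values() for x in xs]
cent_y = [y - sum(ys) / len(ys) for _, ys in subjects.values() for y in ys]

print(pearson_r(pooled_x, pooled_y))  # → 0.8  (spuriously positive)
print(pearson_r(cent_x, cent_y))      # → -1.0 (the within-subject trend)
```

A full LME analysis of the kind the abstract describes would additionally model random slopes and the correlational structure among predictors; this sketch only isolates the baseline problem that makes pooled Pearson tests inappropriate here.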
The article argues that economic sociologists underestimate the problem of consumers’ price perception in their studies while it may be used as an effective key to the social orders of modern markets. Sociological studies of consumers’ price perception are very few and mostly performed at the theoretical level so the author makes an attempt to draw colleagues’ attention to the results of price perception research undertaken within marketing science and overviews its results in the light of so...
Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J
Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was
Lind, Uffe; Mose, Tina; Knudsen, Lisbeth E
BACKGROUND: Much environmental health research depends on human volunteers participating with biological samples. The perception study explores why and how people participate in a placenta perfusion study in Copenhagen. The participation implies donation of the placenta after birth and some...... of medical research. They participated in the placenta perfusion study due to a belief that societal progress follows medical research. They also felt that participating was a way of giving something back to the Danish health care system. The participants have trust in medical science and scientists...
Hansen, Niels Chr.; Pearce, Marcus T.
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
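The Shannon entropy measure invoked in this abstract has a simple closed form, H = -Σ p·log2(p). The sketch below computes it for two hypothetical next-note distributions (illustrative values only, not the study's actual Markov-model estimates) to show how a flat distribution yields high predictive uncertainty and a peaked one yields low uncertainty:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero-probability events."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-note distributions over four candidate pitches.
uniform = [0.25, 0.25, 0.25, 0.25]   # maximal predictive uncertainty
peaked  = [0.85, 0.05, 0.05, 0.05]   # one continuation strongly expected

print(shannon_entropy(uniform))  # → 2.0 bits
print(shannon_entropy(peaked))   # ≈ 0.85 bits
```

In the study's terms, melodic contexts whose model-estimated continuation distributions resemble `uniform` would be classed as high-entropy and those resembling `peaked` as low-entropy.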
Treille, Avril; Cordeboeuf, Camille; Vilain, Coriandre; Sato, Marc
Speech can be perceived not only by the ear and by the eye but also by the hand, with speech gestures felt from manual tactile contact with the speaker's face. In the present electro-encephalographic study, early cross-modal interactions were investigated by comparing auditory evoked potentials during auditory, audio-visual and audio-haptic speech perception in dyadic interactions between a listener and a speaker. In line with previous studies, early auditory evoked responses were attenuated and speeded up during audio-visual compared to auditory speech perception. Crucially, shortened latencies of early auditory evoked potentials were also observed during audio-haptic speech perception. Altogether, these results suggest early bimodal interactions during live face-to-face and hand-to-face speech perception in dyadic interactions. Copyright © 2014. Published by Elsevier Ltd.
Plakke, Bethany; Romanski, Lizabeth M.
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Gucciardo, Léonardo; De Koninck, Philip; Verfaillie, Catherine; Lories, Rik; Deprest, Jan
Stem cell and tissue engineering (SC&TE) research remains controversial. Polemics are potential hurdles for raising public funds for research and clinical implementation. In view of future applications of SC&TE in perinatal conditions, we aimed to measure the background knowledge, perceptions and beliefs regarding SC&TE research among clinicians and academic researchers with perinatal applications on the department's research agenda. We polled three professional categories: general obstetrician-gynecologists, perinatologists, and basic or translational researchers in development and regeneration. The survey included questions on demographics, work environment, educational background, general knowledge, expectations, opinions and ethical reflections of the respondent about SC&TE. The response rate was 39%. Respondents were mainly female (54%) and under 40 years (63%). General background knowledge about SC&TE is low. Respondents confirm that remaining controversies still arise from the confusion that stem cell research coincides with embryo manipulation. Clinicians assume that stem cell research has reached the level of clinical implementation, and accept the risks associated with purposely harvesting fetal amniotic cells. Researchers, in contrast, are more cautious about both implementation and risks. Professionals in the field of perinatology may benefit from better background knowledge of, and information on, current SC&TE research. Though clinicians may be less aware of the current state of knowledge, they are open to clinical implementation, whereas dedicated researchers remain cautious. In view of the clinical introduction of SC&TE, purpose-designed informative action should be taken and safety studies executed, to avoid sustaining needless polemics.
The paper presents the outcomes of research on features of the perception of modern Russian political leaders among young people. A technique of multidimensional semantic differential was employed: the subjects were asked to assess 15 objects (political leaders, 'my ideal', 'ideal leader', 'antipathetic person' and 'Myself') according to 33 personality traits using a seven-point scale. The outcomes suggest that the structure of the students' perception of political leaders is quite simple and is based on three modalities: 'morality', 'power' and 'intelligence'. Comparing these outcomes with the research data obtained in 2004 using the same technique allowed the authors to conclude that the students do not assess modern political leaders according to the moral qualities of the latter, but rather perceive them through the qualities of power associated with social manipulation.
Harper, Liam D; Fothergill, Melissa; West, Daniel J; Stevenson, Emma; Russell, Mark
Qualitative research investigating soccer practitioners' perceptions can allow researchers to create practical research investigations. The extra-time period of soccer is understudied compared to other areas of soccer research. Using an open-ended online survey containing eleven main and nine sub questions, we gathered the perceptions of extra-time from 46 soccer practitioners, all working for different professional soccer clubs. Questions related to current practices, views on extra-time regulations, and ideas for future research. Using inductive content analysis, the following general dimensions were identified: 'importance of extra-time', 'rule changes', 'efficacy of extra-time hydro-nutritional provision', 'nutritional timing', 'future research directions', 'preparatory modulations' and 'recovery'. The majority of practitioners (63%) either agreed or strongly agreed that extra-time is an important period for determining success in knockout football match-play. When asked if a fourth substitution should be permitted in extra-time, 67% agreed. The use of hydro-nutritional strategies prior to extra-time was predominately considered important or very important. However; only 41% of practitioners felt that it was the most important time point for the use of nutritional products. A similar number of practitioners account (50%) and do not (50%) account for the potential of extra-time when training and preparing players and 89% of practitioners stated that extra-time influences recovery practices following matches. In the five minute break prior to extra-time, the following practices (in order of priority) were advocated to players: hydration, energy provision, massage, and tactical preparations. Additionally, 87% of practitioners advocate a particular nutritional supplementation strategy prior to extra-time. In order of importance, practitioners see the following as future research areas: nutritional interventions, fatigue responses, acute injury risk, recovery
Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have investigated them. The auditory system is affected as well. The aim of this article was to review research on the effect of omega-3 on the auditory system. Methods: We searched Medline, Google Scholar, PubMed, the Cochrane Library and SID search engines with the keywords "auditory" and "omega-3", and reviewed textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.
Glat, Rosana; Pletsch, Márcia Denise
The present text discusses the self-perception of people who are stigmatized due to intellectual, sensory and/or physical disabilities, global developmental disorders, or high abilities. For this aim, it analyses a group of studies (Master's dissertations and PhD theses) in the field of Special Education from graduate programs in Education and Psychology at Brazilian universities. All these studies had as their theoretical-methodological reference the Life History Method, which utilizes as m...
Jones, Grace F; Forsyth, Katherine; Jenewein, Caitlin G; Ray, Rebecca D; DiMarco, Shannon; Pugh, Carla M
Skills decay is a known risk for surgical residents who have dedicated research time. We hypothesize that simulation-based assessments will reveal significant differences in perceived skill decay when assessing a variety of clinical scenarios in a longitudinal fashion. Residents (N = 46; Returning: n = 16, New: n = 30) completed four simulated procedures: urinary catheterization, central line, bowel anastomosis, and laparoscopic ventral hernia repair. Perception surveys were administered pre- and post-simulation. Perceptions of skill decay and task difficulty were similar for both groups across three procedures pre- and post-simulation. Due to a simulation modification, new residents were more confident in urinary catheterization than returning residents (F(1,4) = 11.44, p = 0.002). In addition, when assessing expectations for skill reduction, returning residents perceived greater skill reduction upon reassessment when compared to first time residents (t(35) = 2.37, p = 0.023). Research residents may benefit from longitudinal skills assessments and a wider variety of simulation scenarios during their research years. TABLE OF CONTENTS SUMMARY: As part of a longitudinal study, we assessed research residents' confidence, perceptions of task difficulty and surgical skill reduction. Residents completed surveys pre- and post-experience with four simulated procedures: urinary catheterization, subclavian central line insertion, bowel anastomosis, and laparoscopic ventral hernia repair. Returning residents perceived greater skill reduction upon reassessment when compared to residents participating for the first time. In addition, modification of the clinical scenarios affected perceptions of skills decay. Copyright © 2016 Elsevier Inc. All rights reserved.
Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424
Zatorre, Robert J; Halpern, Andrea R
Most people intuitively understand what it means to "hear a tune in your head." Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
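The N2ac computation described here, a difference waveform for left-target minus right-target trials, can be sketched in a few lines. This is a toy illustration with synthetic ERPs: the sampling rate, amplitudes, and noise level are assumptions for the example, not values from the study:

```python
import numpy as np

# Hypothetical grand-average ERPs (microvolts) at one anterior electrode,
# one time course per target side. In a real analysis these come from
# averaging epochs over many trials.
fs = 500                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # 0-1 s relative to stimulus onset

rng = np.random.default_rng(0)
erp_left_targets = -2.0 * np.exp(-((t - 0.5) ** 2) / 0.005) + rng.normal(0, 0.1, t.size)
erp_right_targets = 0.5 * np.exp(-((t - 0.5) ** 2) / 0.005) + rng.normal(0, 0.1, t.size)

# N2ac-style difference waveform: left-target minus right-target ERPs.
n2ac = erp_left_targets - erp_right_targets

# Latency of the negative-going peak (near 500 ms in this toy signal).
peak_latency_ms = t[np.argmin(n2ac)] * 1000
print(round(peak_latency_ms))
```

The synthetic component is placed at 500 ms to mirror the latency reported in the abstract; the subtraction isolates activity that differs systematically with target side, which is the logic behind lateralized ERP subcomponents such as the N2ac.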
Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon
The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.
Manolas, Christos; Pauletto, Sandra
Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has considerable potential both as a means to enhance the impact of S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic S3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
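Both distance cues studied here, overall volume attenuation and high-frequency loss, have simple first-order acoustic models. The sketch below is illustrative only: the inverse-distance gain law is standard, while the frequency-dependent absorption coefficient is a made-up round number for the example, not a value from the article:

```python
import math

def distance_gain_db(distance_m, ref_m=1.0):
    """Inverse-distance law: level drops about 6 dB per doubling of distance."""
    return -20.0 * math.log10(distance_m / ref_m)

def air_absorption_db(distance_m, freq_hz, alpha_db_per_m_khz2=0.005):
    """Toy model of high-frequency loss: attenuation grows with distance
    and with frequency squared (coefficient is illustrative only)."""
    return -alpha_db_per_m_khz2 * distance_m * (freq_hz / 1000.0) ** 2

# Overall level and extra 8 kHz loss at increasing simulated source distances.
for d in (1, 2, 4, 8):
    print(d, round(distance_gain_db(d), 1), round(air_absorption_db(d, 8000), 1))
```

A sound designer applying these two curves to a source would make it both quieter and duller with simulated distance, which is exactly the cue pair whose perceptual effectiveness the article evaluates.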
Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana
A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.
Georg F Meyer
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: detection and categorisation of auditory and kinematic motion cues; Experiment 2: performance evaluation in a target-tracking task; Experiment 3: transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.
Large, Edward W; Almonte, Felix V
Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition. © 2012 New York Academy of Sciences.
Public health research has several stakeholders that should be involved in identifying the public health research agenda. A survey was conducted prior to a national consultation organized by the Department of Health Research with the objective of identifying the key public health research priorities as perceived by State health officials and public health researchers. A cross-sectional survey was done of State health officials involved in public health programmes and of public health researchers in various States of India. A self-administered semi-structured questionnaire was used for data collection. Overall, 35 State officials from 15 States and 17 public health researchers participated in the study. The five leading public health research priorities identified in the open-ended query were maternal and child health (24%), non-communicable diseases (22%), vector borne diseases (6%), tuberculosis (6%) and HIV/AIDS/STI (5%). Maternal and child health research was the leading priority; however, researchers also emphasized the need for research on emerging public health challenges such as non-communicable diseases. Structured initiatives are needed to promote interactions between policymakers and researchers at all stages of research, starting from defining problems to the use of research to achieve the health goals as envisaged in the 12th Plan over the next five years.
A recent letter^1^ claimed integration of auditory and tactile information in speech perception. Although I have been an advocate of multisensory integration, neither perception nor integration was sufficiently formalized, operationalized, and tested to support this claim.
San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
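Channel-pair functional connectivity of the kind measured here is commonly computed as the pairwise correlation between channel time courses during the baseline period. A minimal sketch with synthetic data; the channel count, sample count, and signal model are invented for illustration and do not reflect the study's montage:

```python
import numpy as np

def connectivity_matrix(signals):
    """Resting-state functional connectivity as pairwise Pearson correlation
    between channel time courses. signals: (n_channels, n_timepoints)."""
    return np.corrcoef(signals)

# Hypothetical hemodynamic time courses for 4 channels over a 60-second
# baseline sampled at 10 Hz (600 samples).
rng = np.random.default_rng(42)
shared = rng.normal(size=600)             # common slow component
channels = np.vstack([
    shared + 0.5 * rng.normal(size=600),  # auditory channel 1
    shared + 0.5 * rng.normal(size=600),  # auditory channel 2
    rng.normal(size=600),                 # unrelated channel
    rng.normal(size=600),                 # unrelated channel
])

conn = connectivity_matrix(channels)
print(conn.shape)               # (4, 4) symmetric matrix, ones on the diagonal
print(conn[0, 1] > conn[0, 2])  # channels sharing a component correlate more
```

Comparing such matrices before and after sound stimulation, and between tinnitus and control groups, is the logic behind the connectivity changes the study reports.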
Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel
Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for the rehabilitation of deaf children. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.
Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 1970s represents an attempt to interpret contemporary environments through musical and auditory...
Proffitt, D. R.; Kaiser, M. K.
The advantages and limitations of using computer-animated stimuli in studying motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without the use of computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, the differences are assumed to be irrelevant to the questions under study, and findings with computer-generated stimuli are assumed to generalize to natural events.
Tanimoto, Katia Suemi; Hiromoto, Goro, E-mail: email@example.com, E-mail: firstname.lastname@example.org [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
A project for site selection and construction of a national radioactive waste repository is underway at the Comissao Nacional de Energia Nuclear. Public acceptance is decisive for the deployment of an undertaking of this size. A major concern regarding the use of nuclear energy is the set of problems related to the safe management of radioactive waste. For effective communication between decision makers and the public, a mutual understanding of views, as well as of attitudes towards risk, is needed. Opinion polls are necessary in order to achieve this. This work aims to point out the major aspects to be addressed by an opinion poll for the study of risk perception in the candidate regions for repository construction. A risk perception research model is presented, to be applied to the case of radioactive waste disposal, along with theoretical support for the organization and implementation of its structure. (author)
Grande, David; Gollust, Sarah E; Pany, Maximilian; Seymour, Jane; Goss, Adeline; Kilaru, Austin; Meisel, Zachary
As the United States moves forward with health reform, the communication gap between researchers and policy makers will need to be narrowed to promote policies informed by evidence. Social media represent an expanding channel for communication. Academic journals, public health agencies, and health care organizations are increasingly using social media to communicate health information. For example, the Centers for Disease Control and Prevention now regularly tweets to 290,000 followers. We conducted a survey of health policy researchers about using social media and two traditional channels (traditional media and direct outreach) to disseminate research findings to policy makers. Researchers rated the efficacy of the three dissemination methods similarly but rated social media lower than the other two in three domains: researchers' confidence in their ability to use the method, peers' respect for its use, and how it is perceived in academic promotion. Just 14 percent of our participants reported tweeting, and 21 percent reported blogging about their research or related health policy in the past year. Researchers described social media as being incompatible with research, of high risk professionally, of uncertain efficacy, and an unfamiliar technology that they did not know how to use. Researchers will need evidence-based strategies, training, and institutional resources to use social media to communicate evidence. Project HOPE—The People-to-People Health Foundation, Inc.
Downey, Laura Hall; Castellanos, Diana Cuy; Yadrick, Kathy; Avis-Williams, Amanda; Graham-Kresge, Susan; Bogle, Margaret
Lower Mississippi Delta Nutrition Intervention Research Initiative (Delta NIRI) is an academic-community partnership between seven academic institutions and three communities in Mississippi, Arkansas, and Louisiana. A range of community-based participatory methods have been used to develop sustainable nutrition intervention strategies. Focus groups were conducted with 22 faculty and staff members from the academic partners on the project to document their perceptions of community-based participatory processes in a federally funded, multi-academic-community partnership spanning a decade. Focus groups were conducted to glean insights or lessons from the experiences of academic personnel. Focus groups were transcribed and analyzed using the constant comparative method. Two researchers analyzed each transcript independently and reached consensus on the consistent themes. Participants candidly shared their experiences of working with community members to devise research plans, implement programs, and evaluate outcomes. The majority of faculty and staff members were attracted to this project by an excitement for conducting a more egalitarian and potentially more successful type of research. Yet each academic partner voiced that there was an underlying disconnect between community practices and research procedures during the project. Additional barriers to collaboration and action, located in communities and academic institutions, were described. Academic partners stressed the importance of open and ongoing communication, collective decision-making strategies, and techniques that support power sharing between all parties involved in the project. Findings from this research can inform academic-community partnerships and hopefully improve the community-based participatory research process implemented by academic institutions and communities.
Scolobig, A.; de Marchi, B.; Borga, M.
Flash floods are characterised by short lead times and high levels of uncertainty. Adaptive strategies to face them need to take into account not only the physical characteristics of the hydro-geological phenomena, but also people's risk perceptions, attitudes and behaviours in case of an emergency. It is quite obvious that a precondition for an effective adaptation, e.g. in the case of a warning, is the awareness of being endangered. At the same time the perceptions of those at risk and their likely actions inform hazard warning strategies and recovery programmes following such events. Usually low risk awareness or "wrong perceptions" of the residents are considered among the causes of inadequate preparedness or response to flash floods, as well as a symptom of a weak self-protection culture. In this paper we will focus on flood risk perception and on how research on this topic may contribute to designing adaptive strategies and give inputs to flood policy decisions. We will report on a flood risk perception study of the population residing in four villages in an Italian Alpine region (Trentino Alto-Adige), carried out between October 2005 and January 2006. A total of 400 standardised questionnaires were administered to local residents in face-to-face interviews. The surveys were preceded by focus groups with officers from agencies in charge of flood risk management and by semi-structured and in-depth interviews with policy, scientific and technical experts. Survey results indicated that people are not very worried about hydro-geological phenomena, and think that their community is more endangered than they are themselves. Knowledge of the territory and of danger sources, the unpredictability of flash floods and the feeling of safety induced by structural devices are the main elements shaping residents' perceptions. The study also demonstrated a widespread lack of adoption of preparatory measures among residents, together with a general low
Kavallaris, Maria; Meachem, Sarah J; Hulett, Mark D; West, Catherine M; Pitt, Rachael E; Chesters, Jennifer J; Laffan, Warren S; Boreham, Paul R; Khachigian, Levon M
To report on the sentiments of the Australian health and medical research (HMR) workforce on issues related to employment and funding opportunities. In August 2006, the Australian Society for Medical Research (ASMR) invited all of its members to participate in an online survey. The survey took the form of a structured questionnaire that focused on career aspirations, career development and training opportunities, attitudes toward moving overseas to work, and employment conditions for medical researchers. Researchers' views on career opportunities, funding opportunities, salary and quality of the working environment; impact of these views on retaining a skilled medical research workforce in Australia. Of the 1258 ASMR members, 379 responded (30% response rate). Ninety-six per cent of respondents were currently based in Australia; 70% had a PhD or equivalent; and 58% were women. Most respondents worked at hospital research centres (37%), independent research institutes (28%) or university departments (24%). Sixty-nine per cent had funding from the National Health and Medical Research Council, with the remainder funded by other sources. Over the previous 5 years, 6% of respondents had left active research and 73% had considered leaving. Factors influencing decisions about whether to leave HMR included shortage of funding (91%), lack of career development opportunities (78%) and poor financial rewards (72%). Fifty-seven per cent of respondents were directly supported by grants or fellowships, with only 16% not reliant on grants for their continuing employment; 62% believed that funding had increased over the previous 5 years, yet only 30% perceived an increase in employment opportunities in HMR. Among the respondents, twice as many men as women held postgraduate qualifications and earned $100 000 or more a year. Employment insecurity and lack of funding are a cause of considerable anxiety among Australian health and medical researchers. This may have important
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
The Americans with Disabilities Act (ADA, 1990) is the cornerstone of civil rights policy for people with disabilities. Although enforced through the justice system, the legacy of the ADA transcends well beyond its legal ramifications. The policy's framework and the rhetoric of Disability Rights suggest an embrace of both the spirit and the letter of the law, promulgating both legislative and cultural change to ensure that the rights of people with disabilities are met. In attempting to understand how and whether such change has happened, researchers have gathered extensive evidence since 1990. Much of this research evidence, however, remains fragmented, under-utilized, and at times inconclusive. This article presents the results of a rapid evidence review of a sample of such research that is crucial to understanding the ADA's progress. The study examines evidence about the ADA's influence on knowledge, attitudes and perceptions about employment of people with disabilities. The research illustrates the importance of moving beyond the law to incorporate changes in knowledge about the law, perceptions of employability, and workplace culture.
Rickard, Natalie A; Smales, Caroline J; Rickard, Kurt L
One type of test commonly used to assess auditory processing disorders (APD) is the Frequency Pattern Test, in which triads of pure tones of two different frequencies are presented, and participants are required to accurately report the sequence of tones, typically using a verbal response. The test is widely used clinically, but in its current format, is an under-exploited means of addressing some candidate processes, such as temporal ordering and frequency discrimination, which might be affected in APD. Here we describe a computer-based version of an auditory pattern perception test, the BirdSong Game, which was designed to be an engaging research tool for use with school-aged children. In this study, 128 children aged 6-10 with normal peripheral hearing were tested. The BirdSong Game application was used to administer auditory sequential pattern tests, via a touch-screen presentation and response interface. A conditioning step was included prior to testing, in order to ensure that participants were able to adequately discriminate between the test tones, and reliably describe the difference using their own vocabulary. Responses were collected either verbally or manually, by having participants press cartoon images on the touch-screen in the appropriate sequence. The data was examined for age, gender and response mode differences. Findings on the auditory tests indicated a significant maturational effect across the age range studied, with no difference between response modes or gender. The BirdSong Game is sensitive to maturational changes in auditory sequencing ability, and the computer-based design of the test has several advantages which make it a potentially useful clinical and research tool. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and the auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia.
Newington, Lisa; Metcalfe, Alison
Recruiting the required number of participants is vital to the success of clinical research and yet many studies fail to achieve their expected recruitment rate. Increasing research participation is a key agenda within the NHS and elsewhere, but the optimal methods of improving recruitment to clinical research remain elusive. The aim of this study was to identify the factors that researchers perceive as influential in the recruitment of participants to clinically focused research. Semi-structured interviews were conducted with 11 individuals from three clinical research teams based in London. Sampling was a combination of convenience and purposive. The interviews were audio recorded, transcribed verbatim and analysed using the framework method to identify key themes. Four themes were identified as influential to recruitment: infrastructure, nature of the research, recruiter characteristics and participant characteristics. The main reason individuals participate in clinical research was believed to be altruism, while logistical issues were considered important for those who declined. Suggestions to improve recruitment included reducing participant burden, providing support for individuals who do not speak English, and forming collaborations with primary care to improve the identification of, and access to, potentially eligible participants. Recruiting the target number of research participants was perceived as difficult, especially for clinical trials. New and diverse strategies to ensure that all potentially eligible patients are invited to participate may be beneficial and require further exploration in different settings. Establishing integrated clinical and academic teams with shared responsibilities for recruitment may also facilitate this process. Language barriers and long journey times were considered negative influences to recruitment; although more prominent, these issues are not unique to London and are likely to be important influences in other locations.
Chi, Chia-Fen; Dewi, Ratna Sari; Surbakti, Yopie Yutama; Hsieh, Dong-Yu
The current study applied Structural Equation Modelling to analyse the relationship among pitch, loudness, tempo and timbre and their relationship with perceived sound quality. Twenty-eight auditory signals of horn, indicator, door open warning and parking sensor were collected from 11 car brands. Twenty-one experienced drivers were recruited to evaluate all sound signals with 11 semantic differential scales. The results indicate that for the continuous sounds, pitch, loudness and timbre each had a direct impact on the perceived quality. Besides the direct impacts, pitch also had an impact on loudness perception. For the intermittent sounds, tempo and timbre each had a direct impact on the perceived quality. These results can help to identify the psychoacoustic attributes affecting the consumers' quality perception and help to design preferable sounds for vehicles. In the end, a design guideline is proposed for the development of auditory signals that adopts the current study's research findings as well as those of other relevant research. Practitioner Summary: This study applied Structural Equation Modelling to analyse the relationship among pitch, loudness, tempo and timbre and their relationship with perceived sound quality. The result can help to identify psychoacoustic attributes affecting the consumers' quality perception and help to design preferable sounds for vehicles.
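The path structure reported here, with pitch influencing both loudness perception and perceived quality directly, can be illustrated by simple path analysis on synthetic data. This is a generic least-squares sketch, not the authors' SEM specification; every coefficient and noise level below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic ratings with assumed (illustrative) path coefficients:
# pitch -> loudness (0.6); pitch -> quality (0.4); loudness -> quality (0.5)
pitch = rng.standard_normal(n)
loudness = 0.6 * pitch + 0.5 * rng.standard_normal(n)
quality = 0.4 * pitch + 0.5 * loudness + 0.1 * rng.standard_normal(n)

def path_coefs(y, *predictors):
    """Estimate path coefficients by ordinary least squares."""
    X = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_loud = path_coefs(loudness, pitch)            # recovers ~0.6
b_qual = path_coefs(quality, pitch, loudness)   # recovers ~[0.4, 0.5]
```

Full SEM additionally estimates latent variables and overall model fit, but the direct and indirect effects described in the abstract reduce to path coefficients of this kind.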
Script theory's potential application in consumer behaviour research lies in the possibility that scripts (per ... 1 Rousseau developed a model of adult purchase decision-making process for furniture (Du Plessis & ..... given form of perception (e.g. visual, auditory, tactile). They are further presumed to be stored as frame-.
Horlick-Jones, T. [Surrey Univ., Centre for Environnement Strategy, Guildford (United Kingdom); Marchi, B. de; Del Zotto, M.; Pellizzoni, L.; Ungaro, D. [Institute for International Sociology, Gorizia (Italy); Prades Lopez, A.; Diaz Hidalgo, M. [CIEMAT, Centro de Investigacion Energica Medioambiental y Technologia (Spain); Pidgeon, N. [School of Psychology, University of Wales at Bangor (United Kingdom); Sime, J. [Jonathan-Sime Associates, Godalming, Surrey (United Kingdom)
Full text of publication follows: key themes: social dynamics of public risk perception; trust, tolerability, and risk management; discourses of environmental risk; implications for risk communication and environmental valuation; application of mixed qualitative/quantitative methods in risk perception research. This paper presents some of the key findings of a two-year comparative European study (the PRISP Project) on public perception of risks associated with industrial sites in the UK, Italy and Spain. The project utilised a mixed-method approach (comprising community ethnography, semi-structured interviews, questionnaire survey and focus groups), within a Grounded Theory framework, to examine the social dynamics of risk comprehension, tolerability and politics in settings adjacent to a range of industrial facilities. These often complex industrial zones present a portfolio of 'acute' and 'chronic' risks including hazards associated with sites regulated by the European Union COMAH Directive. Our findings have important implications for the regulation of both major accident hazard and pollution risks, risk communication programmes, industrial risk management practices and for the methodological basis of health and safety and environmental valuation techniques. (authors)
Black, David; Hansen, Christian; Nabavi, Arya; Kikinis, Ron; Hahn, Horst
This article investigates the current state of the art of the use of auditory display in image-guided medical interventions. Auditory display is a means of conveying information using sound, and we review the use of this approach to support navigated interventions. We discuss the benefits and drawbacks of published systems and outline directions for future investigation. We undertook a review of scientific articles on the topic of auditory rendering in image-guided intervention. This includes methods for avoidance of risk structures and instrument placement and manipulation. The review did not include auditory display for status monitoring, for instance in anesthesia. We identified 15 publications in the course of the search. Most of the literature (60%) investigates the use of auditory display to convey distance of a tracked instrument to an object using proximity or safety margins. The remainder discuss continuous guidance for navigated instrument placement. Four of the articles present clinical evaluations, 11 present laboratory evaluations, and 3 present informal evaluation (2 present both laboratory and clinical evaluations). Auditory display is a growing field that has been largely neglected in research in image-guided intervention. Despite benefits of auditory displays reported in both the reviewed literature and non-medical fields, adoption in medicine has been slow. Future challenges include increasing interdisciplinary cooperation with auditory display investigators to develop more meaningful auditory display designs and comprehensive evaluations which target the benefits and drawbacks of auditory display in image guidance.
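A recurring design in the proximity-based displays reviewed above maps the distance between a tracked instrument and a risk structure onto an acoustic parameter, often becoming active only inside a safety margin. A hypothetical mapping is sketched below; the margin, rate range, and linear shape are invented for illustration and not drawn from any reviewed system:

```python
def distance_to_rate(distance_mm: float,
                     margin_mm: float = 10.0,
                     min_rate_hz: float = 1.0,
                     max_rate_hz: float = 20.0) -> float:
    """Map instrument-to-structure distance to a beep repetition rate.

    Outside the safety margin the display stays silent (rate 0.0);
    inside it, the rate rises linearly as the instrument approaches
    the risk structure, reaching max_rate_hz at contact.
    """
    if distance_mm >= margin_mm:
        return 0.0
    closeness = 1.0 - max(distance_mm, 0.0) / margin_mm
    return min_rate_hz + (max_rate_hz - min_rate_hz) * closeness
```

Continuous-guidance designs mentioned in the review differ mainly in sonifying a target-alignment error rather than a distance, but the parameter-mapping idea is the same.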
This article reviews published research on auditory function in HIV-infected adults, while also highlighting the need for intensified research on this topic in Africa. It begins with an introduction to the effects of HIV disease and treatment on the auditory system, and so highlights the need to put auditory function in adults with ...
Nurmi, Sanna-Maria; Pietilä, Anna-Maija; Kangasniemi, Mari; Halkoaho, Arja
The aim of this study was to describe nurse leaders' perceptions of ethical recruitment in clinical research. Nurse leaders are expected to get involved in clinical research, but there are few studies that focus on their role, particularly the ethical issues. Qualitative data were collected from ten nurse leaders using thematic one-to-one interviews and analysed with content analysis. Nurse leaders considered clinical research at their workplace in relation to the key issues that enabled ethical recruitment of study subjects in clinical research. These were: early information and collaboration for incorporating clinical research in everyday work, an opportune and peaceful recruitment moment and positive research culture. Getting involved in clinical research is part of the nurse leader's professional responsibility in current health care. They have an essential role to play in ensuring that recruitment is ethical and that the dignity of study subjects is maintained. The duty of nurse leaders is to maintain good contact with other collaborators and to ensure good conditions for implementing clinical research at their site. This requires a comprehensive understanding of the overall situation on their wards. Implementing clinical research requires careful planning, together with educating, supporting and motivating nursing staff. © 2014 John Wiley & Sons Ltd.
Soto-Faraco, Salvador; Spence, Charles; Kingstone, Alan
This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the…
Favrot, Sylvain Emmanuel
A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic, versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows full control of the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been developed...
Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W
The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development. Copyright © 2013 Elsevier B.V. All rights reserved.
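Canonical models of mode-locked neural oscillation of the kind discussed here are typically built on a sinusoidally forced Hopf normal form. The sketch below is a generic Euler integration of such an oscillator, not the specific canonical model of the paper; all parameter values are invented for illustration:

```python
import numpy as np

def forced_hopf(alpha=-1.0, omega=2 * np.pi * 2.0, beta=-1.0,
                force=1.0, f_stim=2.1, dt=1e-3, steps=5000):
    """Euler-integrate dz/dt = z*(alpha + i*omega + beta*|z|^2) + F*exp(i*2*pi*f_stim*t).

    A damped (alpha < 0) Hopf oscillator with natural frequency
    omega/(2*pi) Hz, driven by a pure tone at f_stim Hz. Returns the
    complex state trajectory.
    """
    z = 0.1 + 0.0j
    out = np.empty(steps, dtype=complex)
    for n in range(steps):
        t = n * dt
        stim = force * np.exp(1j * 2 * np.pi * f_stim * t)
        z = z + dt * (z * (alpha + 1j * omega + beta * abs(z) ** 2) + stim)
        out[n] = z
    return out

trajectory = forced_hopf()
```

With alpha < 0 the unforced oscillator decays to rest; the periodic forcing instead sustains a bounded oscillation near the stimulus frequency, the simplest instance of the nonlinear, phase-locked population dynamics the abstract describes. Mode-locked responses to tone combinations arise when such oscillators are driven by multiple frequencies.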
Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.
Anatolij P. Suprun
This article describes the categorical structure of the perception of still-life painting. The system of visual oppositions among elements of a still-life is analysed, and a still-life is considered a "perceptual statement about the world" and a "visual aphorism". The research is based on methods such as constructing semantic spaces and transforming them by introducing additional elements into still-lifes. The article also gives a full analysis of the interpretation of complex images and of the understanding of types of still-life as a form of visual hermeneutics.
Puthoff, H. E.; Targ, R.
For more than 100 years, scientists have attempted to determine the truth or falsity of claims for the existence of a perceptual channel whereby certain individuals are able to perceive and describe remote data not presented to any known sense. This paper presents an outline of the history of scientific inquiry into such so-called paranormal perception and surveys the current state of the art in parapsychological research in the United States and abroad. The nature of this perceptual channel is examined in a series of experiments carried out in the Electronics and Bioengineering Laboratory of Stanford Research Institute. The perceptual modality most extensively investigated is the ability of both experienced subjects and inexperienced volunteers to view, by innate mental processes, remote geographical or technical targets including buildings, roads, and laboratory apparatus. The accumulated data indicate that the phenomenon is not a sensitive function of distance, and Faraday cage shielding does not in any apparent way degrade the quality and accuracy of perception. On the basis of this research, some areas of physics are suggested from which a description or explanation of the phenomenon could be forthcoming.
BACKGROUND: A public that is an informed partner in clinical research is important for ethical, methodological, and operational reasons. There are indications that the public is unaware or misinformed about, and not sufficiently engaged in, clinical research, but studies on the topic are lacking. PARTAKE - Public Awareness of Research for Therapeutic Advancements through Knowledge and Empowerment - is a program aimed at increasing public awareness of and partnership in clinical research. The PARTAKE Survey is a component of the program. OBJECTIVE: To study public knowledge and perceptions of clinical research. METHODS: A 40-item questionnaire combining multiple-choice and open-ended questions was administered to 175 English- or Hindi-speaking individuals in 8 public locations representing various socioeconomic strata in New Delhi, India. RESULTS: Interviewees were 18-84 years old (mean: 39.6, SD ± 16.6); 23.6% were female, 68.6% employed, and 7.3% illiterate; 26.3% had heard of research, 2.9% had participated, and 58.9% expressed willingness to participate in clinical research. The following perceptions were reported (% true / % false / % not aware): 'research benefits society' (94.1%/3.5%/2.3%), 'the government protects against unethical clinical research' (56.7%/26.3%/16.9%), 'research hospitals provide better care' (67.2%/8.7%/23.9%), 'confidentiality is adequately protected' (54.1%/12.3%/33.5%), 'participation in research is voluntary' (85.3%/5.8%/8.7%), 'participants are treated like guinea pigs' (20.7%/53.2%/26.0%), and 'compensation for participation is adequate' (24.7%/12.9%/62.3%). CONCLUSIONS: Results suggest the Indian public is aware of some key features of clinical research (e.g., purpose, value, voluntary nature of participation) and supports clinical research in general, but is unaware of other key features (e.g., compensation, confidentiality, protection of human participants) and exhibits some distrust in the conduct and reporting of clinical trials. Larger, cross
Hoga, Luiza Akiko Komura; Reberte, Luciana Magnoni
The aim of this study was to verify the perception of participants regarding the use of the action-research methodology in the development of a group of pregnant women. The group was sponsored by the University of São Paulo's Hospital Universitário. Individual interviews were conducted in order to obtain data from the group's 12 participants. The action-research strategy brought benefits to the development of the group, stimulated participation, promoted the mutual identification of the group members, and responded to specific necessities. Some limitations imposed by the use of the strategy were mentioned and suggestions for improvement were cited. Based on the group members' positive evaluation, the use of the action-research strategy is encouraged.
Gammelgaard, A; Knudsen, Lisbeth E.; Bisgaard, H
OBJECTIVE: To analyse the motivations and perceptions of parents on the participation of their infants and young children in a comprehensive and invasive clinical research study. METHODS: Semistructured qualitative interviews were conducted with 23 mothers with asthma whose infants and young ... to prevent the possible development of asthma. Parents found it very important that their children enjoyed their visits to the research clinic, and that they could withdraw from the study if their child started responding negatively to those visits. No apparent difference was seen in the attitude between the parents of children with lung or skin symptoms and those of healthy children. CONCLUSIONS: It is possible to design and accomplish invasive clinical research on infants and young children in a manner that parents find ethically sound.
Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency with which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information, and this visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not so far been addressed in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.
One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming that relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, such as masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions, including descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and the constraints of context dynamically determine the cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated, either through augmentation or
The auditory system transforms patterns of sound energy into perceptual objects, but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent ...
Horie, Yoshinori; Toriizuka, Takashi
The focus of this study is the human ability to make full use of listening and hearing, which involves separating auditory information into signal and noise. To evaluate the risk of using headphones, the study investigated auditory perception when a warning sound is presented in the presence of environmental noise.
Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.
The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning autism and 23 matched controls…
Vander Werff, Kathy R; Rieger, Brian
but were again observed between controls and those mTBI subjects with abnormal behavioral auditory test performance. These differences were seen for the onset portions of the speech-ABR waveforms in quiet and were close to significant for the onset wave. Across groups, quiet versus noise comparisons were significant for most speech-ABR measures but the noise condition did not reveal more group differences than speech-ABR in quiet, likely because of variability and overall small amplitudes in this condition for both groups. The outcomes of this study indicate that subcortical neural encoding of auditory information is affected in a significant portion of individuals with long-term problems after mTBI. These subcortical differences appear to relate to performance on tests of auditory processing and perception, even in the absence of significant hearing loss on the audiogram. While confounds of age and slight differences in audiometric thresholds cannot be ruled out, these preliminary results are consistent with the idea that mTBI can result in neuronal changes within the subcortical auditory pathway that appear to relate to functional auditory outcomes. Although further research is needed, clinical audiological evaluation of individuals with ongoing post-mTBI symptoms is warranted for identification of individuals who may benefit from auditory rehabilitation as part of their overall treatment plan.
Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku
The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. The resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears, separated into the higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed has not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced the scale illusion (ILL) and tones that mimicked the percept of the scale illusion (PCP), and we compared the activation responses evoked by these stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation produced by the illusion process. The activation difference between the two stimuli, measured at varied tempi of tone presentation in the superior temporal auditory cortex, was not explained by adaptation. Instead, excess activation for the ILL stimulus relative to the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in forming the illusion, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.
Limb, Charles J; Roy, Alexis T
Despite advances in technology, the ability to perceive music remains limited for many cochlear implant users. This paper reviews the technological, biological, and acoustical constraints that make music an especially challenging stimulus for cochlear implant users, while highlighting recent research efforts to overcome these shortcomings. The limitations of cochlear implant devices, which have been optimized for speech comprehension, become evident when applied to music, particularly with regards to inadequate spectral, fine-temporal, and dynamic range representation. Beyond the impoverished information transmitted by the device itself, both peripheral and central auditory nervous system deficits are seen in the presence of sensorineural hearing loss, such as auditory nerve degeneration and abnormal auditory cortex activation. These technological and biological constraints to effective music perception are further compounded by the complexity of the acoustical features of music itself that require the perceptual integration of varying rhythmic, melodic, harmonic, and timbral elements of sound. Cochlear implant users not only have difficulty perceiving spectral components individually (leading to fundamental disruptions in perception of pitch, melody, and harmony) but also display deficits with higher perceptual integration tasks required for music perception, such as auditory stream segregation. Despite these current limitations, focused musical training programs, new assessment methods, and improvements in the representation and transmission of the complex acoustical features of music through technological innovation offer the potential for significant advancements in cochlear implant-mediated music perception. Copyright © 2013 Elsevier B.V. All rights reserved.
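The inadequate spectral representation described above can be illustrated with a toy place-coding sketch. The band edges below are hypothetical, loosely in the spirit of a coarse log-spaced cochlear-implant filterbank, and do not correspond to any real device's frequency-allocation map:

```python
import bisect

# Hypothetical electrode band edges (Hz): 8 channels, roughly log-spaced,
# standing in for a cochlear implant's coarse spectral analysis.
EDGES = [188, 313, 563, 813, 1063, 1563, 2313, 3563, 7938]

def channel(freq_hz):
    """Map a spectral component to the electrode channel that would carry it."""
    return bisect.bisect_right(EDGES, freq_hz) - 1

# Two notes a semitone apart (A4 = 440 Hz vs A#4 = 466.16 Hz) land on the
# same electrode, so their pitch distinction is lost in the place code.
print(channel(440.0), channel(466.16))  # same channel for both notes
```

With only a handful of broad channels, many musically distinct pitches collapse onto the same electrode, which is one concrete reason melody and harmony perception degrade even when speech remains intelligible.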
Teresa Piñeiro Otero
Research in communication has recently gained relevance, leading to the development of a significant and specialised corpus of studies. Along these lines, this work focuses on radio studies from the perspective of the scientific community, an innovative approach in the framework of communication meta-research. Starting with the outcomes of an initial survey, an in-depth study was then conducted to focus on the importance, themes and quality of radio research. We looked into the 31 most important authors of contributions published in scientific journals (1980-2013) to understand their perceptions and impressions of Spanish research. This study helped confirm that radio research is still a minority and individual endeavour. The respondents explained that this is due to limited support from research and academic institutions. In any case, radio researchers find the object of their study to be relevant, influential and well rooted, even if the topics and approaches are permeable to the context where the studies take place.
Faizah Abd Majid
The 21st century global market demands a highly skilled workforce that is intellectually active, creative, innovative, articulate, adaptable and capable of critical thinking. Consequently, Malaysian higher education institutions of the 21st century have the responsibility to ensure these targets are achieved (Ministry of Higher Education Strategic Plan Report, 2007). Some strategies have been suggested by the Ministry of Higher Education to achieve the targets of producing researchers who are creative and innovative. This research sought to investigate the perceptions of Malaysian postgraduates on creativity and innovation in research. A survey of a selected group of postgraduates, based on a convenience sampling technique, was carried out to elicit relevant data. Quantitative data were analysed and presented in terms of means and percentages; descriptive data were analysed thematically and categorised. The findings revealed that the respondents were aware of the national higher education agenda on enhancing research and innovation. Likewise, they were able to provide descriptions of creative and innovative researchers. However, they indicated that much more could be done in higher education institutions to prepare them to become creative and innovative researchers. Their suggestions include revising the curriculum, in particular the content, assignments and assessment. Most importantly, they highlighted the need to include them as key players in research activities and to participate globally. These findings have direct implications for higher education policy makers, curriculum designers and postgraduate instructors.
Meraj, Lubna; Gul, Naheed; Akhter, Ijaz; Iram, Farreeha; Khan, Abdus Salam
To understand medical students' perceptions of and attitudes towards research, to help facilitators design specific courses according to their needs. The cross-sectional study was conducted at Shifa College of Medicine, Islamabad, Pakistan, from May to November 2013, and comprised undergraduate medical students. A pre-tested questionnaire was used for data collection. Students' responses were recorded on a Likert scale from 1 'strongly disagree' to 5 'strongly agree'. Analysis was done using SPSS 17. Of the 195 students enrolled, 172 (88%) responded. Overall, 78 (45.3%) students said they were aware of research. Research was considered useful for their professional careers and relevant to their daily life by 133 (65.7%) students, while 72 (41.9%) did not consider it worthwhile to pursue research as a career. Besides, 71 (41.3%) students enjoyed research, while 120 (70%) perceived research as stressful and 107 (62.2%) as complex. Most students considered research valuable, but at the same time they perceived it as stressful and complex.
This article studies the main definitions of culture consumption and the possibilities for researching the perception of art, including artistic products, creators of art, consumers of art, and the wider society. It analyses how consumers use art, what meanings it elicits in their minds, how it eventually penetrates into the general society, how it is mediated by these individuals, and how it is affected by their attitudes, values, and social location. Art has a special function: to provide aesthetic experience. The perception of art is analysed, using data from the Institute for Social Research, at two levels: cognitive and emotional. Young people with greater cultural capital perceive art in an aesthetic way and like elaborate art. Globalization has made Hollywood production the most popular artifact throughout the world, and its production is the source of images for young people. The mass media, such as cinema, has created a new reality, hyperreality, which comprises the construction of images and the modelling of respondents' everyday life; this is discussed in the article.
Workshop on experiences with and the use of activating methods in teaching in auditoriums and with large classes. Which methods have worked well and which poorly? What considerations should one make?
Antes, Alison L; English, Tammy; Baldwin, Kari A; DuBois, James M
Successfully navigating the norms of a society is a complex task that involves recognizing diverse kinds of rules as well as the relative weight attached to them. In the United States (U.S.), different kinds of rules (federal statutes and regulations, scientific norms, and professional ideals) guide the work of researchers. Penalties for violating these different kinds of rules and norms can range from the displeasure of peers to criminal sanctions. We proposed that it would be more difficult for researchers working in the U.S. who were born in other nations to distinguish the seriousness of violating rules across diverse domains. We administered a new measure, the evaluating rules in science task (ERST), to National Institutes of Health-funded investigators (101 born in the U.S. and 102 born outside of the U.S.). The ERST assessed perceptions of the seriousness of violating research regulations, norms, and ideals, and allowed us to calculate the degree to which researchers distinguished between the seriousness of each rule category. The ERST also assessed researchers' predictions of the seriousness that research integrity officers (RIOs) would assign to the rules. We compared researchers' predictions to the seriousness ratings of 112 RIOs working at U.S. research-intensive universities. U.S.-born researchers were significantly better at distinguishing between the seriousness of violating federal research regulations and violating ideals of science, and they were more accurate in their predictions of the views of RIOs. Acculturation to the U.S. moderated the effects of nationality on accuracy. We discuss the implications of these findings in terms of future research and education.
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal ordering of sensory inputs in speech production.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Lee, Hweeling; Noppeney, Uta
To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept. Copyright © 2014 Elsevier Ltd. All rights reserved.
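The core idea of predictive coding, iteratively reducing the mismatch between an internal prediction and the sensory input, can be sketched in a few lines. This is an illustrative toy of the error-minimization loop only, not the fMRI model used in the study:

```python
# Minimal predictive-coding loop: an internal model repeatedly corrects its
# prediction by a fraction of the prediction error (illustrative sketch only).

def predictive_coding(inputs, lr=0.2):
    """Track a sensory signal; return the final prediction and the error trace."""
    prediction = 0.0
    errors = []
    for x in inputs:
        error = x - prediction      # mismatch = the prediction error signal
        prediction += lr * error    # update the internal model to reduce it
        errors.append(abs(error))
    return prediction, errors

# For a constant source, the error shrinks as the internal model converges.
final, errs = predictive_coding([1.0] * 30)
print(errs[0] > errs[-1], round(final, 3))  # → True 0.999
```

In the paper's setting, an asynchronous input would be one the internal model cannot predict, so the residual error (the analogue of `error` above) remains large in the cortex receiving the delayed signal.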
Oechslin, Mathias S; Meyer, Martin; Jäncke, Lutz
Absolute pitch (AP) has been shown to be associated with morphological changes and neurophysiological adaptations in the planum temporale, a cortical area involved in higher-order auditory and speech perception processes...
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
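The chapter's distinction between the ambient field and the effective stimulus rests on a linear transformation of the source waveform from source to receiver. A minimal sketch (illustrative only) models that path as convolution with an impulse response:

```python
def convolve(source, impulse_response):
    """Effective stimulus at the ear: the source waveform filtered by the
    source-to-receiver path, represented by its impulse response."""
    n = len(source) + len(impulse_response) - 1
    out = [0.0] * n
    for i, s in enumerate(source):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# A click (unit impulse) through a toy two-tap echo path: the direct sound
# plus a quieter delayed copy. The receiver hears the path, not the bare source.
print(convolve([1.0], [1.0, 0.0, 0.5]))  # → [1.0, 0.0, 0.5]
```

Because the transformation is linear, its perceptual consequences can be studied separately from the properties of the source itself, which is exactly the ambient-field versus effective-stimulus distinction the chapter draws.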
Idiazábal-Aletxa, M A; Saperas-Rodríguez, M
Specific language impairment (SLI) is diagnosed when a child has difficulty producing or understanding spoken language for no apparent reason. The diagnosis is made when language development is out of keeping with other aspects of development and possible explanatory causes have been excluded. In recent years, the neurosciences have turned to the study of SLI. The ability to process two or more rapidly presented, successive auditory stimuli is believed to underlie successful language acquisition, and it has been proposed that SLI is the consequence of low-level abnormalities in auditory perception. Children with SLI also show a specific deficit in the automatic discrimination of syllables. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. There is much controversy about the role of such deficits in causing these language problems, and it has been difficult to establish solid, replicable findings in this area because of the heterogeneity of the population and because insufficient attention has been paid to maturational aspects of auditory processing.
Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich
Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system has progressed at a considerably slower pace than that of other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between the fMRI technique and audition. A well-known difficulty arises from the high-intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli, both perceptually and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by auditory research. These novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and thus may permit closing the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated into the auditory gestalt is beginning to be understood.
Taber, Jennifer M; Klein, William M P
Perceived risk for disease is included as a predictor of intentions and behavior in many health behavior theories. However, perceived risk is not always a strong predictor of intentions and behaviors. One reason may be suboptimal conceptualization and measurement of risk perceptions; in particular, research may not capture the conviction and certainty with which a risk perception is held. The rich and independent literature on attitudes might be leveraged to explore whether conviction is an important moderator of the effects of risk perceptions on intentions and behavior. Attitudes are more predictive of intentions when they are high in multiple aspects of attitude strength, including attitude certainty and being more accessible and stable over time. Working from the assumption that risk perceptions have a similar structure and function to attitudes, we consider whether factors known to strengthen the attitude-behavior correspondence might also strengthen the risk perception-behavior correspondence. Although by strict definition risk perceptions are not evaluations (a critical component of attitudes), the predictive validity of risk perceptions may be increased by attention to one's "conviction" or certainty of perceived risk. We also review recent strategies designed to improve risk perception measurement, including affective and experiential assessments of perceived risk and the importance of allowing people to indicate that they "don't know" their disease risk. The aim of this paper is to connect two disparate literatures-attitudes and persuasion in social psychology with risk perceptions in health psychology and decision science-in an attempt to stimulate more work on characteristics and proper measurement of risk perceptions.
Cardwell, Jacqueline M; Magnier, Kirsty; Kinnison, Tierney; Silva-Fletcher, Ayona
Although research underpins clinical work, many students training to be clinicians are not inherently interested in developing research skills. To characterise