WorldWideScience

Sample records for sound localization cue

  1. Development of the sound localization cues in cats

    Science.gov (United States)

    Tollin, Daniel J.

    2004-05-01

    Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitude of the three main cues, interaural differences in time (ITDs) and level (ILDs) and monaural spectral-shape cues, varies with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and of the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth for low frequencies. With increasing age, the prominent spectral features (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
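
    The growth of the maximum ITD with head size can be illustrated with the classic spherical-head (Woodworth) approximation; this is a sketch only, and the head radii below are illustrative values, not measurements from the study.

    ```python
    import numpy as np

    def max_itd_spherical(radius_m, c=343.0):
        """Woodworth spherical-head model: ITD(theta) = r * (theta + sin(theta)) / c.
        The ITD is largest for a source at 90 deg azimuth (theta = pi/2)."""
        theta = np.pi / 2.0
        return radius_m * (theta + np.sin(theta)) / c

    # Illustrative radii (hypothetical, chosen only to bracket the reported
    # ~370-us adult maximum): a ~4.9-cm effective radius yields roughly that value.
    for label, radius in [("young kitten", 0.015), ("adult cat", 0.049)]:
        print(f"{label}: max ITD ~ {max_itd_spherical(radius) * 1e6:.0f} us")
    ```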

  2. Do you hear where I hear?: Isolating the individualized sound localization cues.

    Directory of Open Access Journals (Sweden)

    Griffin David Romigh

    2014-12-01

    It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral, and intraconic spectral components, along with an ITD, in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test in which brief 250-ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra, and that localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into which inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
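
    The decomposition described above can be sketched in a simplified form: splitting a set of HRTF log-magnitude spectra into a direction-independent average and a direction-dependent residual (the study's lateral/intraconic split further subdivides that residual). The toy data below are random stand-ins, purely for illustration.

    ```python
    import numpy as np

    # Toy HRTF log-magnitude spectra: one row per source location,
    # one column per frequency bin (random stand-in data).
    rng = np.random.default_rng(0)
    hrtf_db = rng.normal(0.0, 3.0, size=(25, 64))

    # Direction-independent average component (shared across locations).
    average = hrtf_db.mean(axis=0)

    # Direction-dependent residual: the part a lateral/intraconic split,
    # and hence individualization, would act on.
    residual = hrtf_db - average

    # The components reconstruct the original spectra exactly.
    recon = average + residual
    ```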

  3. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of a spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors.

  4. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    Science.gov (United States)

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037
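
    An ITD on the order of the bat's ~60-μs maximum can be estimated from the two ear signals by cross-correlation; the following is a minimal sketch with a simulated delay, where the sampling rate and signal parameters are chosen purely for illustration.

    ```python
    import numpy as np

    def estimate_itd(a, b, fs):
        """Return the lag (s) at which signal `a` best matches signal `b`;
        a positive value means `a` lags `b` by that amount."""
        corr = np.correlate(a, b, mode="full")
        lag = np.argmax(corr) - (len(b) - 1)
        return lag / fs

    fs = 200_000                    # high sample rate, needed to resolve ~60 us
    rng = np.random.default_rng(1)
    noise = rng.normal(size=4000)
    delay = 12                      # 12 samples / 200 kHz = 60 us
    left = noise
    right = np.roll(noise, delay)   # right-ear copy arrives 60 us later

    itd = estimate_itd(right, left, fs)   # recovers the 60-us delay
    ```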

  5. Contribution of monaural and binaural cues to sound localization in listeners with acquired unilateral conductive hearing loss: improved directional hearing with a bone-conduction device.

    Science.gov (United States)

    Agterberg, Martijn J H; Snik, Ad F M; Hol, Myrthe K S; Van Wanrooij, Marc M; Van Opstal, A John

    2012-04-01

    Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies demonstrated that UCHL listeners had some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation, by filling the cavities of the pinna of their better-hearing ear with a mould. These conditions were tested while a bone-conduction device (BCD), fitted to all UCHL listeners in order to provide hearing from the impaired side, was turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in their localization abilities when they listened with a monaural plug and muff. In 4/13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with their BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of listeners with UCHL deteriorated the localization of broadband sounds presented at 45 dB SPL. This demonstrates that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with their BCD turned on.

  6. Cues for localization in the horizontal plane

    DEFF Research Database (Denmark)

    Jeppesen, Jakob; Møller, Henrik

    2005-01-01

    Spatial localization of sound is often described as unconscious evaluation of cues given by the interaural time difference (ITD) and the spectral information of the sound that reaches the two ears. Our present knowledge suggests the hypothesis that the ITD roughly determines the cone of the perce...... independently in HRTFs used for binaural synthesis. The ITD seems to be dominant for localization in the horizontal plane even when the spectral information is severely degraded....

  7. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    Science.gov (United States)

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
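
    The ILD cue that the authors argue underlies bimodal localization is, at its simplest, a broadband level difference between the ears; a minimal sketch follows (the 2x attenuation, ~6 dB, is an arbitrary illustrative head-shadow value, not one from the study).

    ```python
    import numpy as np

    def ild_db(left, right):
        """Broadband interaural level difference in dB.
        Positive values mean the left ear receives the higher level."""
        rms = lambda x: np.sqrt(np.mean(np.square(x)))
        return 20.0 * np.log10(rms(left) / rms(right))

    rng = np.random.default_rng(2)
    sig = rng.normal(size=10_000)
    left, right = sig, 0.5 * sig    # right ear attenuated by half (~6 dB shadow)
    level_difference = ild_db(left, right)
    ```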

  8. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    Science.gov (United States)

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.

  9. A dominance hierarchy of auditory spatial cues in barn owls.

    Directory of Open Access Journals (Sweden)

    Ilana B Witten

    2010-04-01

    Barn owls integrate spatial information across frequency channels to localize sounds in space. We presented barn owls with synchronous sounds that contained different bands of frequencies (3-5 kHz and 7-9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low-frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high-frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT; the superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low-frequency cues dominate the representation of sound azimuth in the OT space map, whereas high-frequency cues dominate the representation of sound elevation. We argue that the dominance hierarchy of localization cues reflects several factors: (1) the relative amplitude of the sound providing the cue, (2) the resolution with which the auditory system measures the value of a cue, and (3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

  10. Cues for localization in the horizontal plane

    DEFF Research Database (Denmark)

    Jeppesen, Jakob; Møller, Henrik

    2005-01-01

    manipulated in HRTFs used for binaural synthesis of sound in the horizontal plane. The manipulation of cues resulted in HRTFs with cues ranging from correct combinations of spectral information and ITDs to combinations with severely conflicting cues. Both the ITD and the spectral information seem...

  11. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    Science.gov (United States)

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, the brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues that is characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
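
    The stimulus described, an amplitude-modulated binaural beat, can be sketched as two carriers that differ slightly in frequency between the ears (so the interaural phase difference cycles at the beat rate) under a common AM envelope; all parameter values below are illustrative, not those of the study.

    ```python
    import numpy as np

    fs = 48_000                                  # sample rate (Hz)
    t = np.arange(int(fs * 1.0)) / fs            # 1 s of time samples

    fc = 500.0        # carrier frequency at the left ear (Hz)
    beat = 4.0        # interaural frequency offset: IPD cycles at 4 Hz
    fm = 4.0          # amplitude-modulation rate (Hz)

    # Common raised-cosine AM envelope applied to both ears.
    env = 0.5 * (1.0 - np.cos(2.0 * np.pi * fm * t))
    left = env * np.sin(2.0 * np.pi * fc * t)
    right = env * np.sin(2.0 * np.pi * (fc + beat) * t)

    # Because the IPD drifts through a full cycle per beat period, each AM
    # cycle's rising portion samples a different instantaneous binaural cue.
    ```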

  12. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first-formant onset frequency when identifying syllable-initial stop consonants, and whether this would be influenced by exposure to speech sounds. There were four exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and in the frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, whether a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds, and these results were largely similar to those of a group of humans. Results indicated that prior speech experience is not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  13. A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early-blind individuals.

    Directory of Open Access Journals (Sweden)

    Frédéric Gougoux

    2005-02-01

    Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life.

  14. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  15. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    Science.gov (United States)

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.

  16. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used, compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify the patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  17. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through azimuth windows 2°, 4°, 8°, 16°, 32°, or 64° in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  18. Single-sided deafness & directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear

    Directory of Open Access Journals (Sweden)

    Martijn Johannes Hermanus Agterberg

    2014-07-01

    Direction-specific interactions of sound waves with the head, torso and pinna provide unique spectral-shape cues that are used for the localization of sounds in the vertical plane, whereas horizontal sound localization is based primarily on the processing of binaural acoustic differences in arrival time (interaural time differences, or ITDs) and sound level (interaural level differences, or ILDs). Because the binaural sound-localization cues are absent in listeners with total single-sided deafness (SSD), their ability to localize sound is heavily impaired. However, some studies have reported that SSD listeners are able, to some extent, to localize sound sources in azimuth, although the underlying mechanisms used for localization are unclear. To investigate whether SSD listeners rely on monaural pinna-induced spectral-shape cues of their hearing ear for directional hearing, we investigated localization performance for low-pass filtered (LP, <3 kHz) and broadband (BB, 0.5-20 kHz) noises in the two-dimensional frontal hemifield. We tested whether localization performance of SSD listeners further deteriorated when the pinna cavities of their hearing ear were filled with a mould that disrupted their spectral-shape cues. To remove the potential use of perceived sound level as an invalid azimuth cue, we randomly varied stimulus presentation levels over a broad range (45-65 dB SPL). Several listeners with SSD could localize LP and BB sound sources in the horizontal plane, but inter-subject variability was considerable. Localization performance of these listeners was strongly reduced after their spectral pinna cues were diminished. We further show that the inter-subject variability in SSD can be explained to a large extent by the severity of high-frequency hearing loss in the hearing ear.

  19. Reef Sound as an Orientation Cue for Shoreward Migration by Pueruli of the Rock Lobster, Jasus edwardsii.

    Science.gov (United States)

    Hinojosa, Ivan A; Green, Bridget S; Gardner, Caleb; Hesse, Jan; Stanley, Jenni A; Jeffs, Andrew G

    2016-01-01

    The post-larval or puerulus stage of spiny, or rock, lobsters (Palinuridae) swims many kilometres from open oceans into coastal waters where they subsequently settle. The orientation cues used by the puerulus for this migration are unclear, but are presumed to be critical to finding a place to settle. Understanding this process may help explain the biological processes of dispersal and settlement, and be useful for developing realistic dispersal models. In this study, we examined the use of reef sound as an orientation cue by the puerulus stage of the southern rock lobster, Jasus edwardsii. Experiments were conducted using in situ binary choice chambers together with replayed recordings of underwater reef sound. The experiment was conducted in a sandy lagoon under varying wind conditions. A significant proportion of pueruli (69%) swam towards the reef sound in calm wind conditions. In windy conditions (>25 m s-1), however, the orientation behaviour appeared less consistent; including these trials reduced the overall proportion of pueruli that swam towards the reef sound to 59.3%. These results resolve previous speculation that underwater reef sound is used as an orientation cue in the shoreward migration of the puerulus of spiny lobsters, and suggest that sea surface winds may moderate the ability of migrating pueruli to use this cue to locate coastal reef habitat in which to settle. Underwater sound may increase the chance of successful settlement and survival of this valuable species.

  20. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    Science.gov (United States)

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and their overall ability to home were not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  1. Localization Performance of Multiple Vibrotactile Cues on Both Arms.

    Science.gov (United States)

    Wang, Dangxiao; Peng, Cong; Afzal, Naqash; Li, Weiang; Wu, Dong; Zhang, Yuru

    2018-01-01

    To present information using vibrotactile stimuli in wearable devices, it is fundamental to understand human performance of localizing vibrotactile cues across the skin surface. In this paper, we studied human ability to identify locations of multiple vibrotactile cues activated simultaneously on both arms. Two haptic bands were mounted in proximity to the elbow and shoulder joints on each arm, and two vibrotactile motors were mounted on each band to provide vibration cues to the dorsal and palmar side of the arm. The localization performance under four conditions was compared, with the number of simultaneously activated cues varying from one to four in each condition. Experimental results illustrate that the rate of correct localization decreases linearly with the increase in the number of activated cues. It was 27.8 percent for three activated cues, and became even lower for four activated cues. An analysis of the correct rate and error patterns shows that the layout of vibrotactile cues can have significant effects on the localization performance of multiple vibrotactile cues. These findings might provide guidelines for using vibrotactile cues to guide the simultaneous motion of multiple joints on both arms.
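
    For intuition about the difficulty of simultaneous-cue identification, consider a random guesser who knows that k of the 8 motors are active and picks a size-k subset at random: the probability of naming the exact set is 1/C(8, k). This is an illustrative chance-level sketch, not an analysis taken from the paper:

```python
from math import comb

def chance_rate(n_sites: int, k_active: int) -> float:
    """Probability of guessing the exact set of active sites at random."""
    return 1.0 / comb(n_sites, k_active)

# With 8 motors (2 per band, 2 bands per arm), chance level drops steeply
# as more cues are activated at once:
for k in range(1, 5):
    print(k, round(chance_rate(8, k), 4))
```

    Against this baseline, the reported 27.8 percent correct for three activated cues is still well above the ~1.8 percent chance level.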

  2. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Full Text Available Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  3. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
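
    The cue statistics discussed above can be made concrete with a toy extraction of ILD and IPD from a synthetic binaural pair. The sampling rate, tone frequency, delay, and attenuation below are arbitrary illustrative values, not taken from the recordings analyzed in the study:

```python
import math

def rms(x):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def ild_db(left, right):
    """Interaural level difference in dB (positive = left ear louder)."""
    return 20.0 * math.log10(rms(left) / rms(right))

def ipd_rad(freq_hz, itd_s):
    """Interaural phase difference for a pure tone delayed by itd_s."""
    return 2.0 * math.pi * freq_hz * itd_s

# A 500 Hz tone, with the right-ear copy delayed by 300 us and
# attenuated by half (an ITD/ILD pair a real head might impose):
fs = 16000
itd = 300e-6
left = [math.sin(2 * math.pi * 500 * n / fs) for n in range(fs)]
right = [0.5 * math.sin(2 * math.pi * 500 * (n / fs - itd)) for n in range(fs)]
print(round(ild_db(left, right), 1))   # ~6.0 dB
print(round(ipd_rad(500, itd), 3))     # ~0.942 rad
```

    Real binaural recordings would of course require this analysis per frequency channel and per time frame, which is where the scene-dependent distributions above come from.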

  4. Sound Localization in Patients With Congenital Unilateral Conductive Hearing Loss With a Transcutaneous Bone Conduction Implant.

    Science.gov (United States)

    Vyskocil, Erich; Liepins, Rudolfs; Kaider, Alexandra; Blineder, Michaela; Hamzavi, Sasan

    2017-03-01

    There is no consensus regarding the benefit of implantable hearing aids in congenital unilateral conductive hearing loss (UCHL). This study aimed to measure sound source localization performance in patients with congenital UCHL and contralateral normal hearing who received a new bone conduction implant. Evaluation of within-subject performance differences for sound source localization in a horizontal plane. Tertiary referral center. Five patients with atresia of the external auditory canal and contralateral normal hearing, implanted with a transcutaneous bone conduction implant at the Medical University of Vienna, were tested. Activated/deactivated implant. Sound source localization test; localization performance quantified using the root mean square (RMS) error. Sound source localization ability was highly variable among individual subjects, with RMS errors ranging from 21 to 40 degrees. Horizontal plane localization performance in aided conditions showed statistically significant improvement compared with the unaided conditions, with RMS errors ranging from 17 to 27 degrees. The mean RMS error decreased by a factor of 0.71 with the activated transcutaneous bone conduction implant. Some patients with congenital UCHL might be capable of developing improved horizontal plane localization abilities with the binaural cues provided by this device.
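
    The RMS error metric used in this study is simply the root mean square of the signed angular errors across trials. A minimal sketch with made-up target and response angles (not data from the study):

```python
import math

def rms_error_deg(targets, responses):
    """Root-mean-square angular error, in degrees, over a set of trials."""
    errs = [r - t for t, r in zip(targets, responses)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Hypothetical loudspeaker azimuths and a listener's pointing responses:
targets = [-60, -30, 0, 30, 60]
responses = [-40, -25, 5, 20, 75]
print(round(rms_error_deg(targets, responses), 1))
```

    An RMS error of about 12 degrees on these toy numbers would sit well below the 21-40 degree unaided range reported above.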

  5. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
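
    The 'guess and check' hypothesis can be caricatured as an error-corrective update of the stored sound-location estimate after each visually confirmed saccade. The learning rate below is an assumed free parameter chosen only to echo the 22-28% shift reported above, not a value fitted in the paper:

```python
def recalibrate(auditory_estimate, visual_feedback, rate=0.25):
    """One 'guess and check' update: shift the stored sound-location
    estimate (degrees) a fraction of the way toward the post-saccade
    visual feedback location (degrees)."""
    return auditory_estimate + rate * (visual_feedback - auditory_estimate)

# A 6-degree visual-auditory mismatch, adapted at ~25% per exposure:
est = recalibrate(0.0, 6.0)
print(est)  # 1.5
```

    With repeated exposures, such an update rule would asymptotically absorb the full mismatch, so the partial single-session shift observed here constrains how fast any such mechanism operates.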

  6. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse...... response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were...... recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...
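
    Binaural auralization as described above amounts to convolving an anechoic signal with a measured BRIR for each ear. A deliberately tiny pure-Python convolution with a made-up two-tap 'impulse response' shows the operation; real BRIRs are thousands of taps long, and one is needed per ear:

```python
def convolve(x, h):
    """Plain discrete convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

# A click rendered through a toy 'left-ear impulse response' containing
# a direct sound (0.8) and one later, weaker reflection (0.3):
click = [1.0, 0.0, 0.0]
brir_left = [0.8, 0.0, 0.3]
print(convolve(click, brir_left))  # [0.8, 0.0, 0.3, 0.0, 0.0]
```

    In practice FFT-based convolution is used for efficiency, but the rendered signal is the same.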

  7. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a little bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  8. Magpies can use local cues to retrieve their food caches.

    Science.gov (United States)

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

    Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010

  9. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs.

    Science.gov (United States)

    Donohue, Kelly C; Spencer, Rebecca M C

    2011-06-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests a possible strategy for academic learning. Such effects have only been demonstrated with spatial learning, and cue presentation was isolated to slow wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening and recall was probed following an interval with overnight sleep. Participants encoded the pairs with the sound of "the ocean" from a sound machine. The first group slept with this sound; the second group slept with a different sound ("rain"); and the third group slept with no sound. Sleeping with sound had no impact on subsequent recall. Although a null result, this work provides an important test of the implications of context effects on sleep-dependent memory consolidation.

  10. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, with a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.

  11. Interactive jewellery as memory cue : designing a sound locket for individual reminiscence

    NARCIS (Netherlands)

    Niemantsverdriet, K.; Versteeg, M.F.

    2016-01-01

    In this paper we describe the design of Memento: an interactive sound locket for individual reminiscence that triggers a similar sense of intimacy and values as its non-technological predecessor. Jewellery often forms a cue for autobiographical memory. In this work we investigate the role that

  12. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs

    OpenAIRE

    Donohue, Kelly C.; Spencer, Rebecca M. C.

    2011-01-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests an academic strategy. Such effects have only been demonstrated with spatial learning and cue presentation was isolated to slow wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening...

  13. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues.

    Science.gov (United States)

    David, Marion; Lavandier, Mathieu; Grimault, Nicolas; Oxenham, Andrew J

    2017-09-01

    Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.

  14. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between adjacent speaker axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  15. Neuronal specializations for the processing of interaural difference cues in the chick

    Directory of Open Access Journals (Sweden)

    Harunori eOhmori

    2014-05-01

    Full Text Available Sound information is encoded as a series of spikes of the auditory nerve fibers (ANFs), and then transmitted to the brainstem auditory nuclei. Features such as timing and level are extracted from ANF activity and further processed as the interaural time difference (ITD) and the interaural level difference (ILD), respectively. These two interaural difference cues are used for sound source localization by behaving animals. Both cues depend on the head size of animals and are extremely small, requiring specialized neural properties in order to process these cues with precision. Moreover, the sound level and timing cues are not processed independently from one another. Neurons in the nucleus angularis (NA) are specialized for coding sound level information in birds, and the ILD is processed in the posterior part of the dorsal lateral lemniscus nucleus (LLDp). Processing of ILD is affected by the phase difference of binaural sound. Temporal features of sound are encoded in the pathway starting in the nucleus magnocellularis (NM), and ITD is processed in the nucleus laminaris (NL). In this pathway a variety of specializations are found in synapse morphology, neuronal excitability, and the distribution of ion channels and receptors along the tonotopic axis, which reduce spike timing fluctuation at the ANF-NM synapse and impart precise and stable ITD processing to the NL. Moreover, the contrast of ITD processing in the NL is enhanced over a wide range of sound levels through the activity of GABAergic inhibitory systems from both the superior olivary nucleus (SON) and local inhibitory neurons that follow NM activity monosynaptically.
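
    The head-size dependence of ITD noted above is often approximated with Woodworth's spherical-head model, ITD ≈ (r/c)(θ + sin θ), where r is head radius, c the speed of sound, and θ the source azimuth. The head radii below are rough illustrative guesses for a chick-sized versus a cat-sized head, not measurements from this study:

```python
import math

def woodworth_itd(head_radius_m, azimuth_rad, c=343.0):
    """Spherical-head (Woodworth) approximation of the interaural time
    difference, in seconds, for a distant source at a given azimuth."""
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))

# Maximum ITD (source at 90 degrees) for head radii of ~1.5 cm and ~4 cm:
for r in (0.015, 0.04):
    print(round(woodworth_itd(r, math.pi / 2) * 1e6, 1), "us")
```

    The roughly hundred-microsecond maximum for a small head makes clear why sub-millisecond spike-timing precision, of the kind the NM-NL specializations provide, is required to use ITD at all.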

  16. Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind

    Science.gov (United States)

    Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.

    2011-01-01

    We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals showing superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed that all blind groups showed cerebral blood flow increases in the occipital cortex; but this was also the case in the sighted group. A voxel-wise covariation analysis showed that more occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. 
Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in

  17. Mutation in the kv3.3 voltage-gated potassium channel causing spinocerebellar ataxia 13 disrupts sound-localization mechanisms.

    Directory of Open Access Journals (Sweden)

    John C Middlebrooks

    Full Text Available Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences (ITD) and interaural level differences (ILD). Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred manifesting as spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.

  18. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency dependent shaping of binaural cues, such as interaural level...... differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing...... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs....

  19. Effects of interaural level differences on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2012-01-01

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency-dependent shaping of binaural cues such as interaural level...... differences (ILDs) and interaural time differences (ITDs). In rooms, the sound reaching the two ears is further modified by reverberant energy, which leads to increased fluctuations in short-term ILDs and ITDs. In the present study, the effect of ILD fluctuations on the externalization of sound......, for sounds that contain frequencies above about 1 kHz the ILD fluctuations were found to be an essential cue for externalization....

  20. Food approach conditioning and discrimination learning using sound cues in benthic sharks.

    Science.gov (United States)

    Vila Pouca, Catarina; Brown, Culum

    2018-07-01

    The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated whether juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward and discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance to the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.

  1. The natural history of sound localization in mammals--a story of neuronal inhibition.

    Science.gov (United States)

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  2. The natural history of sound localization in mammals – a story of neuronal inhibition

    Directory of Open Access Journals (Sweden)

    Benedikt Grothe

    2014-10-01

    Full Text Available Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  3. Sound source localization and segregation with internally coupled ears

    DEFF Research Database (Denmark)

    Bee, Mark A; Christensen-Dalsgaard, Jakob

    2016-01-01

    … to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating …

  4. Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: an event-related potential (ERP) study.

    Science.gov (United States)

    Kocsis, Zsuzsanna; Winkler, István; Szalárdy, Orsolya; Bendixen, Alexandra

    2014-07-01

    In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Effects of Active and Passive Hearing Protection Devices on Sound Source Localization, Speech Recognition, and Tone Detection.

    Directory of Open Access Journals (Sweden)

    Andrew D Brown

    Full Text Available Hearing protection devices (HPDs) such as earplugs offer to mitigate noise exposure and reduce the incidence of hearing loss among persons frequently exposed to intense sound. However, distortions of spatial acoustic information and reduced audibility of low-intensity sounds caused by many existing HPDs can make their use untenable in high-risk (e.g., military or law enforcement) environments where auditory situational awareness is imperative. Here we assessed (1) sound source localization accuracy using a head-turning paradigm, (2) speech-in-noise recognition using a modified version of the QuickSIN test, and (3) tone detection thresholds using a two-alternative forced-choice task. Subjects were 10 young normal-hearing males. Four different HPDs were tested (two active, two passive), including two new and previously untested devices. Relative to unoccluded (control) performance, all tested HPDs significantly degraded performance across tasks, although one active HPD slightly improved high-frequency tone detection thresholds and did not degrade speech recognition. Behavioral data were examined with respect to head-related transfer functions measured using a binaural manikin with and without tested HPDs in place. Data reinforce previous reports that HPDs significantly compromise a variety of auditory perceptual facilities, particularly sound localization, due to distortions of the high-frequency spectral cues that are important for the avoidance of front-back confusions.

  6. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  7. Assessing implicit odor localization in humans using a cross-modal spatial cueing paradigm.

    Science.gov (United States)

    Moessnang, Carolin; Finkelmeyer, Andreas; Vossen, Alexandra; Schneider, Frank; Habel, Ute

    2011-01-01

    Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal. A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment. No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research. The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans. © 2011 Moessnang et al.

  8. Assessing implicit odor localization in humans using a cross-modal spatial cueing paradigm.

    Directory of Open Access Journals (Sweden)

    Carolin Moessnang

    Full Text Available Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal. A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment. No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research. The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans.

  9. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated whether any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions, such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it, and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  10. Local figure-ground cues are valid for natural images.

    Science.gov (United States)

    Fowlkes, Charless C; Martin, David R; Malik, Jitendra

    2007-06-08

    Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.

  11. Local spectral anisotropy is a valid cue for figure–ground organization in natural scenes

    OpenAIRE

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-01-01

    An important step in understanding visual scenes is their organization into distinct perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (farther from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which...

  12. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

    In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls, and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs. Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.

  13. Local sleep spindle modulations in relation to specific memory cues

    NARCIS (Netherlands)

    Cox, R.; Hofman, W.F.; de Boer, M.; Talamini, L.M.

    2014-01-01

    Sleep spindles have been connected to memory processes in various ways. In addition, spindles appear to be modulated at the local cortical network level. We investigated whether cueing specific memories during sleep leads to localized spindle modulations in humans. During learning of word-location

  14. Spectral and temporal cues for perception of material and action categories in impacted sound sources

    DEFF Research Database (Denmark)

    Hjortkjær, Jens; McAdams, Stephen

    2016-01-01

    In two experiments, similarity ratings and categorization performance were examined with recorded impact sounds representing three material categories (wood, metal, glass) manipulated by three different categories of action (drop, strike, rattle). Previous research focusing on single impact … correlated with the pattern of confusion in categorization judgments. Listeners tended to confuse materials with similar spectral centroids, and actions with similar temporal centroids and onset densities. To confirm the influence of these different features, spectral cues were removed by applying …
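    The two acoustic descriptors named in this abstract are standard signal features. As a minimal sketch (not the authors' analysis code; simple whole-signal definitions are assumed here), the spectral centroid is the amplitude-weighted mean frequency of the magnitude spectrum, and the temporal centroid is the energy-weighted mean time of the amplitude envelope:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def temporal_centroid(signal, sample_rate):
    """Energy-weighted mean time of the amplitude envelope (s)."""
    env = np.abs(signal)
    times = np.arange(len(signal)) / sample_rate
    return np.sum(times * env) / np.sum(env)

# Example: a sustained 1 kHz tone has a spectral centroid near 1000 Hz
# and a temporal centroid near the middle of the signal.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
print(round(spectral_centroid(tone, sr)))  # ≈ 1000
```

    Sustained actions (e.g. rattling) push the temporal centroid toward the middle of the sound, while single impacts keep it near the onset, which is why the temporal measure separates action categories well.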

  15. The role of reverberation-related binaural cues in the externalization of speech.

    Science.gov (United States)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-08-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
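    One way to quantify the short-term binaural-cue fluctuations this abstract refers to is a frame-wise interaural level difference (ILD). The following is an illustrative sketch, not the paper's analysis code; the 20-ms frame length and the simple energy-ratio definition are assumptions:

```python
import numpy as np

def short_term_ild(left, right, sample_rate, frame_ms=20.0):
    """Frame-by-frame interaural level difference in dB (left re right)."""
    frame = int(sample_rate * frame_ms / 1000.0)
    n = min(len(left), len(right)) // frame * frame  # trim to whole frames
    l = np.asarray(left)[:n].reshape(-1, frame)
    r = np.asarray(right)[:n].reshape(-1, frame)
    eps = 1e-12  # guard against silent frames
    return 10.0 * np.log10((np.sum(l**2, axis=1) + eps) /
                           (np.sum(r**2, axis=1) + eps))
```

    The amount of fluctuation can then be summarized as, e.g., the standard deviation of the frame-wise ILDs; an analogous frame-wise cross-correlation would yield short-term ITDs.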

  16. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  17. Sound localization in the presence of one or two distracters

    NARCIS (Netherlands)

    Langendijk, E.H.A.; Kistler, D.J.; Wightman, F.L

    2001-01-01

    Localizing a target sound can be a challenge when one or more distracter sounds are present at the same time. This study measured the effect of distracter position on target localization for one distracter (17 positions) and two distracters (21 combinations of 17 positions). Listeners were

  18. Human Sound Externalization in Reverberant Environments

    DEFF Research Database (Denmark)

    Catic, Jasmina

    In everyday environments, listeners perceive sound sources as externalized. In listening conditions where the spatial cues that are relevant for externalization are not represented correctly, such as when listening through headphones or hearing aids, a degraded perception of externalization may occur. In this thesis, the spatial cues that arise from the combined effect of filtering due to the head, torso, and pinna and the acoustic environment were analysed, and the impact of such cues on the perception of externalization in different frequency regions was investigated. Distant sound sources were simulated via headphones using individualized binaural room impulse responses (BRIRs). An investigation of the influence of the spectral content of a sound source on externalization showed that effective externalization cues are present across the entire frequency range. The fluctuation of interaural...

  19. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    NARCIS (Netherlands)

    Kamminga, Jacob Wilhelm; Le Viet Duc, L Duc; Havinga, Paul J.M.

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and

  20. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of totally blind individuals were statistically significantly shorter than those of blind individuals with residual vision and controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation has occurred in the former.

  1. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  2. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory afforded by prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  3. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
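    Of the primary distance cues listed above, sound level is the simplest to state: in an idealized free field, level follows the inverse-square law and drops about 6 dB per doubling of distance (reverberation in real rooms flattens this slope). A minimal sketch of this textbook relation:

```python
import math

def level_at_distance(level_ref_db, dist_ref_m, dist_m):
    """Free-field inverse-square law: level falls ~6 dB per doubling of distance."""
    return level_ref_db - 20.0 * math.log10(dist_m / dist_ref_m)

# A source measured at 70 dB SPL at 1 m:
print(round(level_at_distance(70.0, 1.0, 2.0), 2))   # ≈ 63.98 dB SPL at 2 m
print(round(level_at_distance(70.0, 1.0, 10.0), 2))  # 50.0 dB SPL at 10 m
```

    The review's point about peripersonal versus extrapersonal space follows directly: the level cue is most informative at short range, where a small change in distance produces a large change in level.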

  4. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    Science.gov (United States)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ: the two ears are contained in one air sac, and a cuticular bridge with a flexible spring-like structure at its center connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents and present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons. In chapter 4, I quantify the threshold and detail the kinematics of the phonotactic …

  5. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  6. Spherical loudspeaker array for local active control of sound.

    Science.gov (United States)

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
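    The "tenth of a wavelength" limit cited for monopole sources translates directly into quiet-zone sizes. A minimal sketch (assuming the commonly quoted 10 dB attenuation criterion and a speed of sound of 343 m/s):

```python
def monopole_quiet_zone_diameter(frequency_hz, speed_of_sound=343.0):
    """Rule of thumb: a monopole secondary source yields a quiet zone
    roughly one tenth of a wavelength across."""
    wavelength = speed_of_sound / frequency_hz
    return wavelength / 10.0

# At 500 Hz the wavelength is ~0.686 m, so the quiet zone is only ~7 cm wide,
# which is why larger zones require multichannel sources such as spherical arrays.
print(round(monopole_quiet_zone_diameter(500.0), 3))  # ≈ 0.069 m
```

    This shrinking of the quiet zone with frequency is the motivation for the spherical-array optimization described in the abstract.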

  7. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    Science.gov (United States)

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
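    The congruent/incongruent stimulus construction described above can be sketched in a few lines. This is an illustrative simplification, not the authors' stimulus code; the function name, the sign convention (positive values lateralize left), and the symmetric split of the level difference across the two ears are assumptions:

```python
import numpy as np

def apply_itd_ild(mono, fs, itd_s, ild_db):
    """Render a binaural pair from a mono signal.
    itd_s  > 0: left ear leads (right ear is delayed).
    ild_db > 0: left ear is louder (difference split across both ears)."""
    n = len(mono)
    shift = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(shift), mono])[:n]
    left = mono if itd_s >= 0 else delayed
    right = delayed if itd_s >= 0 else mono
    g = 10.0 ** (ild_db / 20.0)
    return np.stack([left * np.sqrt(g), right / np.sqrt(g)])

# Congruent: both cues point left.  Incongruent (cue trading): the ITD
# points left while the ILD points right, cancelling the lateral percept.
fs = 48000
clicks = np.zeros(fs // 10)
clicks[::fs // 100] = 1.0                       # 100-Hz click train
congruent = apply_itd_ild(clicks, fs, 300e-6, 4.0)
incongruent = apply_itd_ild(clicks, fs, 300e-6, -4.0)
```

    In the actual study the trading ratio between the two cues was determined psychophysically per listener rather than fixed as here.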

  8. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    Directory of Open Access Journals (Sweden)

    Fabian Draht

    2017-06-01

    Full Text Available Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  9. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    Science.gov (United States)

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  10. Improvement of directionality and sound-localization by internal ear coupling in barn owls

    DEFF Research Database (Denmark)

    Wagner, Hermann; Christensen-Dalsgaard, Jakob; Kettler, Lutz

    Mark Konishi was one of the first to quantify sound-localization capabilities in barn owls. He showed that frequencies between 3 and 10 kHz underlie precise sound localization in these birds, and that they derive spatial information from processing interaural time and interaural level differences. However, despite intensive research during the last 40 years it is still unclear whether and how internal ear coupling contributes to sound localization in the barn owl. Here we investigated ear directionality in anesthetized birds with the help of laser vibrometry. Care was taken that anesthesia... ...time difference in the low-frequency range, barn owls hesitate to approach prey or turn their heads when only low-frequency auditory information is present in a stimulus they receive. Thus, the barn-owl's sound localization system seems to be adapted to work best in frequency ranges where interaural...

  11. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    Science.gov (United States)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotubes (SWCNTs). According to our hybrid material function design, the local piezoelectric effect in the PVDF matrix with the polar structure and the electrical resistive loss of the SWCNTs enhance the conversion of sound energy to electrical energy and subsequently to thermal energy, respectively, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the concentration of the SWCNTs is around the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a sound reduction coefficient larger than 0.58 has been obtained, with a sound absorption coefficient above 50% at 600 Hz, demonstrating the material's value for passive noise mitigation even at low frequencies.

  12. The effect of interaural-level-difference fluctuations on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Buchholz, Jörg M.

    2013-01-01

    Real-world sound sources are usually perceived as externalized and thus properly localized in both direction and distance. This is largely due to (1) the acoustic filtering by the head, torso, and pinna, resulting in modifications of the signal spectrum and thereby a frequency-dependent shaping of interaural cues, and (2) interaural cues provided by the reverberation inside an enclosed space. This study first investigated the effect of room reverberation on the spectro-temporal behavior of interaural level differences (ILDs) by analyzing dummy-head recordings of speech played at different distances in a standard listening room. Next, the effect of ILD fluctuations on the degree of externalization was investigated in a psychoacoustic experiment performed in the same listening room. Individual binaural impulse responses were used to simulate a distant sound source delivered via headphones. The ILDs were...

  13. The role of reverberation-related binaural cues in the externalization of speech

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-01-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient...

  14. Local spectral anisotropy is a valid cue for figure-ground organization in natural scenes.

    Science.gov (United States)

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-10-01

    An important step in the process of understanding visual scenes is their organization into distinct perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (further away from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which we show to differ between figure and ground. Image patches are extracted from natural scenes from two standard image sets along the boundaries of objects, and spectral analysis is performed separately on figure and ground. On the figure side, oriented spectral power orthogonal to the occlusion boundary significantly exceeds that parallel to the boundary. This "spectral anisotropy" is present only for higher spatial frequencies, and absent on the ground side. The difference in spectral anisotropy between the two sides of an occlusion border predicts which is the figure and which the background with an accuracy exceeding 60% per patch. Spectral anisotropy of close-by locations along the boundary co-varies but is largely independent over larger distances, which makes it possible to combine results from different image regions. Given the low cost of this strictly local computation, we propose that spectral anisotropy along occlusion boundaries is a valuable cue for figure-ground segregation. A database of images and extracted patches labeled for figure and ground is made freely available. Copyright © 2014 Elsevier Ltd. All rights reserved.
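    The "strictly local computation" in question is cheap enough to sketch. This is an illustrative reimplementation, not the authors' code; the patch size, the low-frequency cutoff, and the one-pixel-wide sampling of each frequency axis are assumptions:

```python
import numpy as np

def spectral_anisotropy(patch, low_cut=4):
    """Compare high-spatial-frequency power along the two frequency axes
    of a square patch.  Positive: more power on the horizontal-frequency
    axis (orthogonal to a vertical boundary); negative: the reverse."""
    power = np.fft.fftshift(np.abs(np.fft.fft2(patch - patch.mean())) ** 2)
    c = patch.shape[0] // 2
    horiz = power[c, c + low_cut:].sum()  # along the fx axis, high freqs only
    vert = power[c + low_cut:, c].sum()   # along the fy axis, high freqs only
    return (horiz - vert) / (horiz + vert + 1e-12)
```

    On the abstract's account, patches taken from the figure side of a vertical occlusion boundary should tend toward positive values of such an index, and ground-side patches toward zero.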

  15. The Influence of Visual Cues on Sound Externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Background: The externalization of virtual sounds reproduced via binaural headphone-based auralization systems has been reported to be less robust when the listening environment differs from the room in which binaural room impulse responses (BRIRs) were recorded. It has been debated whether... Methods: Eighteen naïve listeners rated the externalization of virtual stimuli in terms of perceived distance, azimuthal localization, and compactness in three rooms: 1) a standard IEC listening room, 2) a small reverberant room, and 3) a large dry room. Before testing, individual BRIRs were recorded in room 1 while listeners wore both earplugs and blindfolds. Half of the listeners were then blindfolded during testing but were provided auditory awareness of the room via a controlled noise source (condition A). The other half could see the room but were shielded from room-related acoustic input and tested...

  16. Conditioned responses elicited by experimentally produced cues for smoking.

    Science.gov (United States)

    Mucha, R F; Pauli, P; Angrilli, A

    1998-03-01

    Several theories of drug craving postulate that a signal for drug elicits conditioned responses. However, depending on the theory, a drug cue is said to elicit drug-similar, drug-compensatory, positive-motivational, or negative-motivational effects. Since animal data alone cannot tease apart the relative importance of different cue-related processes in the addict, we developed and examined a model of drug cues in humans based on a two-sound differential conditioning procedure using smoking as the reinforcer. After multiple pairings of a sound with smoking, there was a preference for the smoking cue on a conditioned preference test. The acute effects of smoking (increased heart rate, respiration rate, skin conductance level, skin conductance fluctuations, EEG beta power and trapezius EMG; decreased alpha power) were not affected by the smoking cue, although subjects drew more on their cigarette in the presence of the smoking cue than in the presence of a control cue. Moreover, the cue did not change baseline behaviour except for a possible increase in EEG beta power and an increase in trapezius EMG at about the time when smoking should have occurred. The findings confirm the value of experimental models of drug cues in humans for comparing different cue phenomena in the dependent individual. They indicate that an acquired signal for drug in humans may elicit incentive-motivational effects and associated preparatory motor responses, in addition to possible conditioned tolerance.

  17. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.

  18. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection

  19. Word segmentation with universal prosodic cues.

    Science.gov (United States)

    Endress, Ansgar D; Hauser, Marc D

    2010-09-01

    When listening to speech from one's native language, words seem to be well separated from one another, like beads on a string. When listening to a foreign language, in contrast, words seem almost impossible to extract, as if there was only one bead on the same string. This contrast reveals that there are language-specific cues to segmentation. The puzzle, however, is that infants must be endowed with a language-independent mechanism for segmentation, as they ultimately solve the segmentation problem for any native language. Here, we approach the acquisition problem by asking whether there are language-independent cues to segmentation that might be available to even adult learners who have already acquired a native language. We show that adult learners recognize words in connected speech when only prosodic cues to word-boundaries are given from languages unfamiliar to the participants. In both artificial and natural speech, adult English speakers, with no prior exposure to the test languages, readily recognized words in natural languages with critically different prosodic patterns, including French, Turkish and Hungarian. We suggest that, even though languages differ in their sound structures, they carry universal prosodic characteristics. Further, these language-invariant prosodic cues provide a universally accessible mechanism for finding words in connected speech. These cues may enable infants to start acquiring words in any language even before they are fine-tuned to the sound structure of their native language. Copyright © 2010. Published by Elsevier Inc.

  20. Attentional and Contextual Priors in Sound Perception.

    Science.gov (United States)

    Wolmetz, Michael; Elhilali, Mounya

    2016-01-01

    Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
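    The rapid updating of contextual priors that the study reports can be illustrated with a minimal leaky-accumulator sketch. This is a toy model of my own, not the paper's Bayesian Observer; the learning rate and the three-class setup are assumptions:

```python
import numpy as np

def update_priors(priors, observed, lr=0.2):
    """One trial of contextual-prior tracking: move probability mass
    toward the frequency class that was just presented."""
    priors = np.asarray(priors, dtype=float).copy()
    target = np.zeros_like(priors)
    target[observed] = 1.0
    priors += lr * (target - priors)   # leaky exponential update
    return priors / priors.sum()       # keep a proper distribution

# A run of trials dominated by class 0 shifts the prior toward it,
# mirroring the listeners' tracking of acoustic-frequency statistics.
p = np.full(3, 1.0 / 3.0)
for _ in range(10):
    p = update_priors(p, 0)
```

    An optimal Bayesian observer would additionally weight each update by the likelihood of the observation; the leaky update above only captures the qualitative behavior of priors following recent stimulus statistics.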

  1. Emotional cues, emotional signals, and their contrasting effects on listener valence

    DEFF Research Database (Denmark)

    Christensen, Justin

    2015-01-01

    ...and of benefit to both the sender and the receiver of the signal, otherwise they would cease to have the intended effect of communication. In contrast with signals, animal cues are much more commonly unimodal as they are unintentional by the sender. In my research, I investigate whether subjects exhibit... ...are more emotional cues (e.g. sadness or calmness). My hypothesis is that musical and sound stimuli that are mimetic of emotional signals should combine to elicit a stronger response when presented as a multimodal stimulus as opposed to as a unimodal stimulus, whereas musical or sound stimuli that are mimetic of emotional cues interact in less clear and less cohesive manners with their corresponding haptic signals. For my investigations, subjects listen to samples from the International Affective Digital Sounds Library[2] and selected musical works on speakers in combination with a tactile transducer...

  2. Perceptual representation and effectiveness of local figure-ground cues in natural contours.

    Science.gov (United States)

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure-ground segregation. Although previous studies have reported local contour features that evoke figure-ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours, and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes including those comprised of contradictory factors that induce figure in the opposite directions. This sampling from natural contours is advantageous in order to randomly pick up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that the combinations of convexity, closure, and symmetry contribute to perceptual similarity, thus they are perceptual quantities. Second, we examined whether the three features contribute to local figure-ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure-ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in the local figure-ground perception with natural contours when the other cues coexist with equal probability including contradictory cases.

  3. Perceptual representation and effectiveness of local figure–ground cues in natural contours

    Science.gov (United States)

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure–ground segregation. Although previous studies have reported local contour features that evoke figure–ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours, and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes including those comprised of contradictory factors that induce figure in the opposite directions. This sampling from natural contours is advantageous in order to randomly pick up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that the combinations of convexity, closure, and symmetry contribute to perceptual similarity, thus they are perceptual quantities. Second, we examined whether the three features contribute to local figure–ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure–ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in the local figure–ground perception with natural contours when the other cues coexist with equal probability including contradictory cases. PMID:26579057

  4. Perceptual Representation and Effectiveness of Local Figure-Ground Cues in Natural Contours

    Directory of Open Access Journals (Sweden)

    Ko eSakai

    2015-11-01

    Full Text Available A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure-ground segregation. Although previous studies have reported local contour features that evoke figure-ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours, and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes including those comprised of contradictory factors that induce figure in the opposite directions. This sampling from natural contours is advantageous in order to randomly pick up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that the combinations of convexity, closure, and symmetry contribute to perceptual similarity, thus they are perceptual quantities. Second, we examined whether the three features contribute to local figure-ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure-ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in the local figure-ground perception with natural contours when the other cues coexist with equal probability including contradictory cases.

  5. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection

    Directory of Open Access Journals (Sweden)

    Aleksander eVäljamäe

    2014-12-01

    Full Text Available In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD-based acoustic cues were more instrumental than ILD-based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection-rich cues, i.e. acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensorily induced vection.

  6. Veering re-visited: noise and posture cues in walking without sight.

    Science.gov (United States)

    Millar, S

    1999-01-01

    Effects of sound and posture cues on veering from the straight-ahead were tested with young blind children in an unfamiliar space that lacked orienting cues. In a pre-test with a previously heard target sound, all subjects walked straight to the target. A recording device, which sampled the locomotor trajectories automatically, showed that, without prior cues from target locations, subjects tended to veer more to the side from which they heard a brief, irrelevant noise. Carrying a load on one side produced more veering to the opposite side. The detailed samples showed that, underlying the main trajectories, were alternating concave and convex (left and right) movements, suggesting stepwise changes in body position. It is argued that the same external and body-centred cues that contribute to reference-frame orientation for locomotion when they converge and concur, influence the direction of veering when the cues occur in isolation in environments that lack converging reference information.

  7. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  8. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together… …, one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor…

  9. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially…

  10. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    Science.gov (United States)

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  11. Developmental change in children's sensitivity to sound symbolism.

    Science.gov (United States)

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-08-01

    The current study examined developmental change in children's sensitivity to sound symbolism. Three-, five-, and seven-year-old children heard sound symbolic novel words and foreign words meaning round and pointy and chose which of two pictures (one round and one pointy) best corresponded to each word they heard. Task performance varied as a function of both word type and age group such that accuracy was greater for novel words than for foreign words, and task performance increased with age for both word types. For novel words, children in all age groups reliably chose the correct corresponding picture. For foreign words, 3-year-olds showed chance performance, whereas 5- and 7-year-olds showed reliably above-chance performance. Results suggest increased sensitivity to sound symbolic cues with development and imply that although sensitivity to sound symbolism may be available early and facilitate children's word-referent mappings, sensitivity to subtler sound symbolic cues requires greater language experience. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. How to generate a sound-localization map in fish

    Science.gov (United States)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von-Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
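The phase-coding scheme described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' model: the exact tuning-to-phase mapping, the concentration parameter `kappa`, and the single-cell readout (a crude stand-in for the "earliest synchronized activity" rule) are all assumptions.

```python
import numpy as np

def spike_phases(theta_s, theta_pref, kappa=4.0):
    """Map hair-cell orientation mismatch to a spike phase (radians).

    Cells whose preferred axis aligns with the particle-acceleration
    direction theta_s fire earliest (phase ~ 0); misaligned cells fire
    later. The tuning curve is von Mises-shaped, as in the abstract;
    the exact mapping onto phase is an assumption.
    """
    tuning = np.exp(kappa * (np.cos(theta_pref - theta_s) - 1.0))  # in (0, 1]
    return np.pi * (1.0 - tuning)  # perfectly aligned cell -> phase 0

def decode_direction(phases, theta_pref):
    """Read out direction as the preferred axis of the earliest-firing cell."""
    return theta_pref[np.argmin(phases)]

# Equally distributed preferred orientations, as in the trout utricle.
theta_pref = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
theta_s = np.deg2rad(40.0)                 # acceleration (sound) direction
estimate = decode_direction(spike_phases(theta_s, theta_pref), theta_pref)
```

With a dense, evenly spaced population, the earliest-firing cell recovers the stimulus direction to within the grid resolution.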

  13. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.
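The core claim, that noisy self-motion plus an internal map of a rotationally asymmetric boundary suffices for localization with no sensed cues, can be illustrated with a toy particle filter. This is a simplified sketch, not the paper's model: heading is assumed known (the paper's agent is also disoriented in orientation), the arena, step sizes, and noise model are invented, and boundary knowledge enters only by culling particles that would leave the mapped arena.

```python
import numpy as np

rng = np.random.default_rng(0)

def inside(p):
    """Known map: triangular arena with vertices (0,0), (2,0), (0,1).
    A triangle has 1-fold rotational symmetry, so no pose is map-ambiguous."""
    x, y = p[..., 0], p[..., 1]
    return (x >= 0) & (y >= 0) & (x / 2 + y <= 1)

def localize(odometry, n=5000, odo_sigma=0.01):
    """Particles start uniform over the arena (unknown start position) and
    follow the noisy odometry; any particle that would cross the mapped
    boundary is culled and replaced by resampling a survivor. No external
    cue, including boundary contact, is ever sensed."""
    pts = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(4 * n, 2))
    particles = pts[inside(pts)][:n]          # rejection-sample the triangle
    for step in odometry:
        particles = particles + step + rng.normal(0.0, odo_sigma, (len(particles), 2))
        survivors = particles[inside(particles)]
        particles = survivors[rng.integers(0, len(survivors), size=n)]
    return particles

start = np.array([0.2, 0.2])
odometry = [np.array([0.09, 0.0])] * 15       # true agent sweeps rightward
true_final = start + np.sum(odometry, axis=0)
cloud = localize(odometry)
error = np.linalg.norm(cloud.mean(axis=0) - true_final)
```

Because the rightward sweep would carry most candidate start positions through the far boundary, only particles consistent with the true trajectory survive, and the cloud concentrates near the agent's actual position.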

  14. Training the Brain to Weight Speech Cues Differently: A Study of Finnish Second-language Users of English

    Science.gov (United States)

    Ylinen, Sari; Uther, Maria; Latvala, Antti; Vepsalainen, Sara; Iverson, Paul; Akahane-Yamada, Reiko; Naatanen, Risto

    2010-01-01

    Foreign-language learning is a prime example of a task that entails perceptual learning. The correct comprehension of foreign-language speech requires the correct recognition of speech sounds. The most difficult speech-sound contrasts for foreign-language learners often are the ones that have multiple phonetic cues, especially if the cues are…

  15. Estimating 3D tilt from local image cues in natural scenes

    OpenAIRE

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then ana...

  16. Sound arithmetic: auditory cues in the rehabilitation of impaired fact retrieval.

    Science.gov (United States)

    Domahs, Frank; Zamarian, Laura; Delazer, Margarete

    2008-04-01

    The present single case study describes the rehabilitation of an acquired impairment of multiplication fact retrieval. In addition to a conventional drill approach, one set of problems was preceded by auditory cues while the other half was not. After extensive repetition, non-specific improvements could be observed for all trained problems (e.g., 3 * 7) as well as for their non-trained complementary problems (e.g., 7 * 3). Beyond this general improvement, specific therapy effects were found for problems trained with auditory cues. These specific effects were attributed to an involvement of implicit memory systems and/or attentional processes during training. Thus, the present results demonstrate that cues in the training of arithmetic facts do not have to be visual to be effective.

  17. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    …signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life the electronically amplified sounds can be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative to the natural condition, though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. Different from the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from…

  18. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is

  19. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  20. Exploiting Deep Neural Networks and Head Movements for Robust Binaural Localization of Multiple Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Ma, Ning; May, Tobias; Brown, Guy J.

    2017-01-01

    This paper presents a novel machine-hearing system that exploits deep neural networks (DNNs) and head movements for robust binaural localization of multiple sources in reverberant environments. DNNs are used to learn the relationship between the source azimuth and binaural cues, consisting of the complete cross-correlation function (CCF) and interaural level differences (ILDs). In contrast to many previous binaural hearing systems, the proposed approach is not restricted to localization of sound sources in the frontal hemifield. Due to the similarity of binaural cues in the frontal and rear…

  1. Behavioural Response Thresholds in New Zealand Crab Megalopae to Ambient Underwater Sound

    Science.gov (United States)

    Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.

    2011-01-01

    A small number of studies have demonstrated that settlement stage decapod crustaceans are able to detect and exhibit swimming, settlement and metamorphosis responses to ambient underwater sound emanating from coastal reefs. However, the intensity of the acoustic cue required to initiate the settlement and metamorphosis response, and therefore the potential range over which this acoustic cue may operate, is not known. The current study determined the behavioural response thresholds of four species of New Zealand brachyuran crab megalopae by exposing them to different intensity levels of broadcast reef sound recorded from their preferred settlement habitat and from an unfavourable settlement habitat. Megalopae of the rocky-reef crab, Leptograpsus variegatus, exhibited the lowest behavioural response threshold (highest sensitivity), with a significant reduction in time to metamorphosis (TTM) when exposed to underwater reef sound with an intensity of 90 dB re 1 µPa and greater (100, 126 and 135 dB re 1 µPa). Megalopae of the mud crab, Austrohelice crassa, which settle in soft sediment habitats, exhibited no response to any of the underwater reef sound levels. All reef associated species exposed to sound levels from an unfavourable settlement habitat showed no significant change in TTM, even at intensities that were similar to their preferred reef sound for which reductions in TTM were observed. These results indicated that megalopae were able to discern and respond selectively to habitat-specific acoustic cues. The settlement and metamorphosis behavioural response thresholds to levels of underwater reef sound determined in the current study of four species of crabs, enables preliminary estimation of the spatial range at which an acoustic settlement cue may be operating, from 5 m to 40 km depending on the species. Overall, these results indicate that underwater sound is likely to play a major role in influencing the spatial patterns of settlement of coastal crab

  2. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution

  3. Localization of self-generated synthetic footstep sounds on different walked-upon materials through headphones

    DEFF Research Database (Denmark)

    Turchet, Luca; Spagnol, Simone; Geronazzo, Michele

    2016-01-01

    …typologies of surface materials: solid (e.g., wood) and aggregate (e.g., gravel). Different sound delivery methods (mono, stereo, binaural) as well as several surface materials, in presence or absence of concurrent contextual auditory information provided as soundscapes, were evaluated in a vertical localization task. Results showed that solid surfaces were localized significantly farther from the walker's feet than the aggregate ones. This effect was independent of the used rendering technique, of the presence of soundscapes, and of merely temporal or spectral attributes of sound. The effect…

  4. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    Science.gov (United States)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominately focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  5. Localization of Simultaneous Moving Sound Sources for Mobile Robot Using a Frequency-Domain Steered Beamformer Approach

    OpenAIRE

    Valin, Jean-Marc; Michaud, François; Hadjou, Brahim; Rouat, Jean

    2016-01-01

    Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. In this paper we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered...

  6. Experimental analysis of considering the sound pressure distribution pattern at the ear canal entrance as an unrevealed head-related localization clue

    Institute of Scientific and Technical Information of China (English)

    TONG Xin; QI Na; MENG Zihou

    2018-01-01

    By analyzing the differences between binaural recording and real listening, it was deduced that there are some unrevealed auditory localization clues, and that the sound pressure distribution pattern at the entrance of the ear canal is probably one such clue. The existence of these unrevealed clues was demonstrated through listening tests by reductio ad absurdum, and their effective frequency bands were identified and summarized. Finite-element simulations showed that the pressure distribution at the entrance of the ear canal is non-uniform and that its pattern is related to the direction of the sound source. This demonstrated that the sound pressure distribution pattern at the entrance of the ear canal carries sound-source direction information and can serve as an unrevealed localization clue. The frequency bands in which the sound pressure distribution patterns differed significantly between front and back sound-source directions roughly matched the effective frequency bands of the unrevealed localization clues obtained from the listening tests. To some extent, this supports the hypothesis that the sound pressure distribution pattern could be a kind of unrevealed auditory localization clue.

  7. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed'' about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed…

  8. Graded behavioral responses and habituation to sound in the common cuttlefish, Sepia officinalis

    NARCIS (Netherlands)

    Samson, J.E.; Mooney, T.A.; Gussekloo, S.W.S.; Hanlon, R.T.

    2014-01-01

    Sound is a widely available and vital cue in aquatic environments yet most bioacoustic research has focused on marine vertebrates, leaving sound detection in invertebrates poorly understood. Cephalopods are an ecologically key taxon that likely use sound and may be impacted by increasing

  9. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    Science.gov (United States)

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  10. The Role of Place Cues in Voluntary Stream Segregation for Cochlear Implant Users

    DEFF Research Database (Denmark)

    Paredes Gallardo, Andreu; Madsen, Sara Miay Kim; Dau, Torsten

    2018-01-01

    of the A and B sequences should improve performance. In Experiment 1, the electrode separation and the sequence duration were varied to clarify whether place cues help CI listeners to voluntarily segregate sounds and whether a two-stream percept needs time to build up. Results suggested that place cues can...

  11. The medial prefrontal cortex and memory of cue location in the rat.

    Science.gov (United States)

    Rawson, Tim; O'Kane, Michael; Talk, Andrew

    2010-01-01

    We developed a single-trial cue-location memory task in which rats experienced an auditory cue while exploring an environment. They then recalled and avoided the sound origination point after the cue was paired with shock in a separate context. Subjects with medial prefrontal cortical (mPFC) lesions made no such avoidance response, but both lesioned and control subjects avoided the cue itself when presented at test. A follow up assessment revealed no spatial learning impairment in either group. These findings suggest that the rodent mPFC is required for incidental learning or recollection of the location at which a discrete cue occurred, but is not required for cue recognition or for allocentric spatial memory. Copyright 2009 Elsevier Inc. All rights reserved.

  12. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  13. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise would therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sounds in loud noise environments. Two speakers simultaneously played the noise of a generator and a voice attenuated by 20 dB (= 1/100 of the power) relative to the generator noise in an outdoor space where cicadas were making noise. The sound was received by a horizontally set linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed by array signal processing, and the voice was extracted and played back as an audible sound.
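As an illustration of how such a line array can estimate direction, here is a minimal narrowband delay-and-sum (steered-beamformer) sketch for the 15-microphone, 1.05 m array described above. It is not the authors' algorithm: the pure-tone source, sampling rate, SNR, and steering grid are all assumptions, and a real implementation would process broadband recordings and also estimate distance.

```python
import numpy as np

C = 343.0                          # speed of sound in air (m/s)
N_MICS = 15
SPACING = 1.05 / (N_MICS - 1)      # 15 mics spanning 1.05 m, as in the abstract
FREQ = 1000.0                      # assumed narrowband source frequency (Hz)

def mic_positions():
    return (np.arange(N_MICS) - (N_MICS - 1) / 2) * SPACING

def simulate_tone(doa_deg, fs=16000, dur=0.1, snr_db=0.0, seed=0):
    """Far-field tone arriving from doa_deg (0 = broadside) plus white noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    delays = mic_positions() * np.sin(np.deg2rad(doa_deg)) / C
    clean = np.stack([np.sin(2 * np.pi * FREQ * (t - d)) for d in delays])
    noise = rng.standard_normal(clean.shape) * 10 ** (-snr_db / 20) / np.sqrt(2)
    return clean + noise, fs

def estimate_doa(x, fs, grid=np.linspace(-90, 90, 361)):
    """Steer phase shifts at FREQ over a grid of angles and return the angle
    that maximizes the summed output power (delay-and-sum in one FFT bin)."""
    n = x.shape[1]
    X = np.fft.rfft(x, axis=1)[:, round(FREQ * n / fs)]   # per-mic amplitude
    powers = [
        np.abs(np.sum(X * np.exp(2j * np.pi * FREQ
                                 * mic_positions() * np.sin(np.deg2rad(a)) / C))) ** 2
        for a in grid
    ]
    return grid[int(np.argmax(powers))]

x, fs = simulate_tone(30.0)        # stand-in for the buried voice, 0 dB SNR
doa = estimate_doa(x, fs)
```

Coherent summation across the 15 channels boosts the tone well above the per-microphone noise floor, which is the same principle that lets the array pull an inaudible voice out of loud ambient noise.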

  14. Crowing Sound Analysis of Gaga' Chicken; Local Chicken from South Sulawesi Indonesia

    OpenAIRE

    Aprilita Bugiwati, Sri Rachma; Ashari, Fachri

    2008-01-01

    Gaga' chicken is known as a local chicken of South Sulawesi, Indonesia, with a unique and specific crowing sound that differs from other types of singing chicken in the world, especially at the ending of the crow, which resembles the sound of human laughter. 287 Gaga' chickens from 3 districts in the centre of the breed's habitat were separated into 2 groups (163 birds of the Dangdut type and 124 birds of the Slow type) based on the speed…

  15. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    Science.gov (United States)

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
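The τ ratio described above has a simple closed form: for a constant-velocity approach, the instantaneous optical size divided by its rate of change equals the true time to contact. A minimal numerical check (the object size, distance, and speed are made-up values):

```python
def tau_ttc(optical_size, optical_size_rate):
    """TTC estimate from tau: instantaneous optical size over its rate of
    change (the same ratio applies to sound intensity in the auditory case)."""
    return optical_size / optical_size_rate

# Small-angle approximation: an object of physical size s at distance z
# subtends theta ~ s / z; with closing speed v, d(theta)/dt = s * v / z**2,
# so tau = theta / (d(theta)/dt) = z / v, the true time to contact.
s, z, v = 1.8, 30.0, 15.0        # hypothetical: 1.8 m vehicle, 30 m away, 15 m/s
theta = s / z
theta_rate = s * v / z ** 2
ttc = tau_ttc(theta, theta_rate)   # equals z / v = 2.0 seconds
```

Heuristic cues such as final optical size or final sound pressure level, by contrast, depend on the absolute magnitude of `theta` or the intensity at occlusion rather than on this ratio.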

  16. The effect of multimicrophone noise reduction systems on sound source localization by users of binaural hearing aids.

    Science.gov (United States)

    Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc

    2008-07-01

This paper evaluates the influence of three multimicrophone noise reduction algorithms on the ability to localize sound sources. Two recently developed noise reduction techniques for binaural hearing aids were evaluated, namely, the binaural multichannel Wiener filter (MWF) and the binaural multichannel Wiener filter with partial noise estimate (MWF-N), together with a dual-monaural adaptive directional microphone (ADM), which is a widely used noise reduction approach in commercial hearing aids. The influence of the different algorithms on perceived sound source localization and their noise reduction performance was evaluated. It is shown that noise reduction algorithms can have a large influence on localization and that (a) the ADM only preserves localization in the forward direction over azimuths where limited or no noise reduction is obtained; (b) the MWF preserves localization of the target speech component but may distort localization of the noise component. The latter is dependent on signal-to-noise ratio and masking effects; (c) the MWF-N enables correct localization of both the speech and the noise components; (d) the statistical Wiener filter approach achieves a better combination of sound source localization and noise reduction performance than the ADM approach.
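The binaural MWF evaluated here is a multichannel algorithm operating on hearing-aid microphone signals and cannot be reproduced in a short sketch, but the single-channel, per-bin Wiener gain it generalizes (G = S/(S+N)) is easy to illustrate. Everything below, including the crude power-subtraction noise estimate, is an illustrative simplification rather than the authors' method:

```python
def wiener_gain(snr_linear):
    """Per-bin Wiener gain: G = SNR / (1 + SNR) = S / (S + N)."""
    return snr_linear / (1.0 + snr_linear)

def apply_wiener(noisy_bins, noise_power):
    """Attenuate each complex spectral bin by its estimated Wiener gain,
    using crude power subtraction to estimate the clean-signal power."""
    out = []
    for x in noisy_bins:
        s_power = max(abs(x) ** 2 - noise_power, 0.0)
        g = s_power / (s_power + noise_power) if s_power + noise_power > 0 else 0.0
        out.append(g * x)
    return out

# At 0 dB SNR the gain is 0.5; a strong bin (power 100 vs. noise 1) keeps 99%.
g_0db = wiener_gain(1.0)                  # 0.5
strong = apply_wiener([10 + 0j], 1.0)[0]  # 0.99 * (10 + 0j)
```

A binaural variant must additionally preserve the interaural cues of both the speech and the noise components, which is precisely what the MWF-N's partial noise estimate addresses.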

  17. Negative emotion provides cues for orienting auditory spatial attention

    Directory of Open Access Journals (Sweden)

    Erkin eAsutay

    2015-05-01

Full Text Available Auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for the allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orienting auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left-right or front-back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants' task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same-valence pairs (emotional-emotional or neutral-neutral) did not modulate reaction times, due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain.

  18. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    Directory of Open Access Journals (Sweden)

    Youngwoong Kim

    2015-11-01

Full Text Available The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small-profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by a dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths arranged vertically, with rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates a particular fundamental frequency according to the acoustic resonance. A cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. At azimuthal distances of 3–15 cm from the outer body of the pipes, extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body.
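The abstract does not spell out the modified cepstral parameter, but the standard real-cepstrum route to a fundamental frequency (pick the peak quefrency and invert it) can be sketched in a self-contained way. The plain O(N²) DFT and all parameter values below are illustrative choices, not the paper's:

```python
import cmath
import math

def dft(x):
    """Plain discrete Fourier transform (O(N^2), fine for short frames)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fundamental_from_cepstrum(signal, fs, f_min=180.0, f_max=1000.0):
    """Estimate f0 as the peak quefrency of the real cepstrum."""
    # Floor near-zero bins so the log of an almost-zero magnitude stays finite.
    log_mag = [math.log(abs(c) + 1e-3) for c in dft(signal)]
    cepstrum = [abs(c) for c in dft(log_mag)]
    q_lo = int(fs / f_max)   # shortest period of interest (samples)
    q_hi = int(fs / f_min)   # longest period of interest (samples)
    q_peak = max(range(q_lo, q_hi), key=lambda q: cepstrum[q])
    return fs / q_peak

# Harmonic test tone: 250 Hz fundamental (period of 32 samples at fs = 8000).
fs = 8000
sig = [math.sin(2 * math.pi * 250 * n / fs)
       + 0.6 * math.sin(2 * math.pi * 500 * n / fs)
       + 0.3 * math.sin(2 * math.pi * 750 * n / fs) for n in range(512)]
f0 = fundamental_from_cepstrum(sig, fs)   # 250.0 Hz
```

Because the harmonics fall on exact DFT bins here, the log-spectrum is periodic with period 16 bins, and the cepstrum peaks at quefrency 32, the waveform period in samples.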

  19. Associative cueing of attention through implicit feature-location binding.

    Science.gov (United States)

    Girardi, Giovanna; Nico, Daniele

    2017-09-01

In order to assess associative learning between two task-irrelevant features in cueing spatial attention, we devised a task in which participants had to make an identity comparison between two sequential visual stimuli. Unbeknownst to them, the location of the second stimulus could be predicted by the colour of the first or a concurrent sound. Albeit unnecessary for performing the identity-matching judgment, the predictive features thus provided an arbitrary association favouring the spatial anticipation of the second stimulus. A significant advantage was found, with faster responses at predicted compared to non-predicted locations. Results clearly demonstrated an associative cueing of attention via a second-order arbitrary feature/location association, but with a substantial discrepancy depending on the sensory modality of the predictive feature. With colour as the predictive feature, significant advantages emerged only after the completion of three blocks of trials. On the contrary, sound affected responses from the first block of trials, and significant advantages were manifest from the beginning of the second. The possible mechanisms underlying the associative cueing of attention in both conditions are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

Full Text Available Objective Compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles from the left to the right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion Children with dyslexia may have problems locating sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  1. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.

  2. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    Science.gov (United States)

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes the size of an atom and to locate acoustic stimuli to within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.

  3. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus.

    Science.gov (United States)

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-07-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
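The ILD cue whose neural coding this record examines has a simple operational definition: the level difference, in dB, between the signals at the two ears. A minimal sketch with a synthetic high-frequency tone (the sampling rate and signals are hypothetical, for illustration only):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ild_db(left, right):
    """Interaural level difference in dB; positive means louder at the left ear."""
    return 20.0 * math.log10(rms(left) / rms(right))

# A 4 kHz tone twice as large at the left ear gives an ILD of about +6 dB.
fs = 44100
tone = [math.sin(2 * math.pi * 4000 * n / fs) for n in range(441)]
ild = ild_db([2.0 * s for s in tone], tone)   # 20*log10(2) ≈ 6.02 dB
```

In the experiments above, a masking background noise effectively perturbs the per-ear levels entering this ratio, which is why noise at each ear can shift the ILD sensitivity of IC neurons.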

  4. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

Full Text Available In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice's emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging as well as electrophysiological and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research.

  5. ICE on the road to auditory sensitivity reduction and sound localization in the frog.

    Science.gov (United States)

    Narins, Peter M

    2016-10-01

Frogs and toads are capable of producing calls at potentially damaging levels that exceed 110 dB SPL at 50 cm. Most frog species have internally coupled ears (ICE) in which the tympanic membranes (TyMs) communicate directly via the large, permanently open Eustachian tubes, resulting in an inherently directional asymmetrical pressure-difference receiver. One active mechanism for auditory sensitivity reduction involves the pressure increase during vocalization that distends the TyM, reducing its low-frequency airborne sound sensitivity. Moreover, if sounds generated by the vocal folds arrive at both surfaces of the TyM with nearly equal amplitudes and phases, the net motion of the eardrum would be greatly attenuated. Both of these processes appear to reduce the motion of the frog's TyM during vocalizations. The implications of ICE in amphibians with respect to sound localization are discussed, and the particularly interesting case of frogs that use ultrasound for communication yet exhibit exquisitely small localization jump errors is brought to light.

  6. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

Full Text Available This project is intended to analyze, design and implement a realtime sound source localization system by using a mobile robot as the media. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent magnet DC motors as the actuators for the mobile robot and a servo motor as the actuator to rotate the webcam toward the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show the system could localize in realtime a sound source placed randomly on a half-circle area (0° - 180°) with a radius of 0.3 m - 3 m, assuming the system is the center point of the circle. Due to low ADC and processor speed, the best achievable angular resolution is still limited to 25°.
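The abstract does not detail its beamforming pipeline, but the core two-microphone computation (estimate the inter-microphone delay, then convert it to an angle via far-field geometry) can be sketched as follows. The brute-force cross-correlation search is a simplified stand-in for the project's implementation, and the sampling rate and microphone spacing are hypothetical:

```python
import math
import random

def estimate_delay(sig_a, sig_b, max_lag):
    """Lag (in samples) maximizing the cross-correlation of the two signals;
    a positive lag means sig_b arrives later than sig_a."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(sig_a[n] * sig_b[n + lag]
                   for n in range(max_lag, len(sig_a) - max_lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

def doa_degrees(delay_samples, fs=44100, spacing=0.2, c=343.0):
    """Far-field direction of arrival from the inter-microphone delay."""
    sin_theta = c * (delay_samples / fs) / spacing
    return math.degrees(math.asin(max(-1.0, min(1.0, sin_theta))))

# Synthetic check: a noise burst reaching microphone B 5 samples after A.
random.seed(0)
burst = [random.uniform(-1.0, 1.0) for _ in range(400)]
mic_b = [0.0] * 5 + burst[:-5]
lag = estimate_delay(burst, mic_b, max_lag=20)   # 5
angle = doa_degrees(lag)                         # ~11° off the broadside axis
```

A delay-and-sum beamformer extends this idea by applying candidate delays and picking the steering direction that maximizes output power; with only two microphones the front-back ambiguity noted by the 0°-180° search range remains.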

7. FREQUENCY COMPONENT EXTRACTION OF HEARTBEAT CUES WITH SHORT TIME FOURIER TRANSFORM (STFT)

    Directory of Open Access Journals (Sweden)

    Sumarna Sumarna

    2017-01-01

Electro-acoustic human heartbeat detectors have been made with the main parts: (a) a stethoscope (chest piece), (b) a condenser microphone, (c) a transistor amplifier, and (d) a cue-analysis program in MATLAB. The frequency components contained in the heartbeat cues of 9 volunteers have also been extracted with the Short Time Fourier Transform (STFT). The results of the analysis showed that the heart rate appeared in every cue's frequency spectrum together with its harmonics. The steps of the research included designing the detector instrument, testing and repairing the instrument, recording heartbeat cues with the Sound Forge 10 program and storing them in WAV files, trimming the cues at the start and the end, and extracting/analyzing the cues using MATLAB. The MATLAB program included a filter (a bandpass filter with a bandwidth between 0.01 and 110 Hz), segmentation of the cues with a Hamming window, computation of each segment with the Fourier Transform (STFT) mechanism, and display of the results in a frequency spectrum graph. Keywords: frequency component extraction, heartbeat cues, Short Time Fourier Transform
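The described pipeline (window the signal with a Hamming window, Fourier-transform each frame) can be sketched minimally. The sketch below omits the bandpass stage and uses a plain per-frame DFT so it stays self-contained; the frame size and sampling rate are chosen for illustration, not taken from the study:

```python
import cmath
import math

def hamming(N):
    """Hamming window of length N."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def stft_magnitudes(signal, frame_len=64, hop=32):
    """One-sided magnitude spectrum of each windowed frame."""
    win = hamming(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        seg = [signal[start + n] * win[n] for n in range(frame_len)]
        spec = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                        for n in range(frame_len)))
                for k in range(frame_len // 2 + 1)]
        frames.append(spec)
    return frames

# A 25 Hz component sampled at fs = 200 Hz lands in bin 8 (25 / (200 / 64)).
fs = 200
sig = [math.sin(2 * math.pi * 25 * n / fs) for n in range(256)]
frames = stft_magnitudes(sig)
peak_bin = max(range(len(frames[0])), key=lambda k: frames[0][k])   # 8
```

Tracking `peak_bin` (and its harmonics) across frames is what lets the heart rate appear in each segment's spectrum, as the abstract reports.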

  8. Sound Rhythms Are Encoded by Postinhibitory Rebound Spiking in the Superior Paraolivary Nucleus

    Science.gov (United States)

    Felix, Richard A.; Fridberger, Anders; Leijon, Sara; Berrebi, Albert S.; Magnusson, Anna K.

    2013-01-01

The superior paraolivary nucleus (SPON) is a prominent structure in the auditory brainstem. In contrast to the principal superior olivary nuclei with identified roles in processing binaural sound localization cues, the role of the SPON in hearing is not well understood. A combined in vitro and in vivo approach was used to investigate the cellular properties of SPON neurons in the mouse. Patch-clamp recordings in brain slices revealed that brief and well-timed postinhibitory rebound spiking, generated by the interaction of two subthreshold-activated ion currents, is a hallmark of SPON neurons. The Ih current determines the timing of the rebound, whereas the T-type Ca2+ current boosts the rebound to spike threshold. This precisely timed rebound spiking provides a physiological explanation for the sensitivity of SPON neurons to sinusoidally amplitude-modulated (SAM) tones in vivo, where peaks in the sound envelope drive inhibitory inputs and SPON neurons fire action potentials during the waveform troughs. Consistent with this notion, SPON neurons display intrinsic tuning to frequency-modulated sinusoidal currents (1–15 Hz) in vitro and discharge with strong synchrony to SAMs with modulation frequencies between 1 and 20 Hz in vivo. The results of this study suggest that the SPON is particularly well suited to encode rhythmic sound patterns. Such temporal periodicity information is likely important for detection of communication cues, such as the acoustic envelopes of animal vocalizations and speech signals. PMID:21880918

  9. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

Full Text Available Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes the size of an atom and to locate acoustic stimuli to within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.

  10. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating, which results in a phase transition to the normal conducting state, a quench. A new application involving Oscillating Superleak Transducers (OSTs) to locate such quench-inducing heat spots on the surface of the cavities was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal, whose behavior has been studied under different circumstances in setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.

  11. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  12. Boosting Vocabulary Learning by Verbal Cueing During Sleep.

    Science.gov (United States)

    Schreiner, Thomas; Rasch, Björn

    2015-11-01

    Reactivating memories during sleep by re-exposure to associated memory cues (e.g., odors or sounds) improves memory consolidation. Here, we tested for the first time whether verbal cueing during sleep can improve vocabulary learning. We cued prior learned Dutch words either during non-rapid eye movement sleep (NonREM) or during active or passive waking. Re-exposure to Dutch words during sleep improved later memory for the German translation of the cued words when compared with uncued words. Recall of uncued words was similar to an additional group receiving no verbal cues during sleep. Furthermore, verbal cueing failed to improve memory during active and passive waking. High-density electroencephalographic recordings revealed that successful verbal cueing during NonREM sleep is associated with a pronounced frontal negativity in event-related potentials, a higher frequency of frontal slow waves as well as a cueing-related increase in right frontal and left parietal oscillatory theta power. Our results indicate that verbal cues presented during NonREM sleep reactivate associated memories, and facilitate later recall of foreign vocabulary without impairing ongoing consolidation processes. Likewise, our oscillatory analysis suggests that both sleep-specific slow waves as well as theta oscillations (typically associated with successful memory encoding during wakefulness) might be involved in strengthening memories by cueing during sleep. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Evolution of Sound Source Localization Circuits in the Nonmammalian Vertebrate Brainstem

    DEFF Research Database (Denmark)

    Walton, Peggy L; Christensen-Dalsgaard, Jakob; Carr, Catherine E

    2017-01-01

    The earliest vertebrate ears likely subserved a gravistatic function for orientation in the aquatic environment. However, in addition to detecting acceleration created by the animal's own movements, the otolithic end organs that detect linear acceleration would have responded to particle movement...... to increased sensitivity to a broader frequency range and to modification of the preexisting circuitry for sound source localization....

  14. Decision Utility, Incentive Salience, and Cue-Triggered "Wanting"

    Science.gov (United States)

    Berridge, Kent C; Aldridge, J Wayne

    2009-01-01

This chapter examines brain mechanisms of reward utility operating at particular decision moments in life: moments such as when one encounters an image, sound, scent, or other cue associated in the past with a particular reward, or perhaps just when one vividly imagines that cue. Such a cue can often trigger a sudden motivational urge to pursue its reward and sometimes a decision to do so. Drawing on a utility taxonomy that distinguishes among subtypes of reward utility (predicted utility, decision utility, experienced utility, and remembered utility), it is shown how cue-triggered cravings, such as an addict's surrender to relapse, can hinge on special transformations by brain mesolimbic systems of one utility subtype, namely, decision utility. The chapter focuses on a particular form of decision utility called incentive salience, a type of "wanting" for rewards that is amplified by brain mesolimbic systems. Sudden peaks of intensity of incentive salience, caused by neurobiological mechanisms, can elevate the decision utility of a particular reward at the moment its cue occurs. An understanding of what happens at such moments leads to a better understanding of the mechanisms at work in decision making in general.

  15. Design guidelines for the use of audio cues in computer interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sumikawa, D.A.; Blattner, M.M.; Joy, K.I.; Greenberg, R.M.

    1985-07-01

A logical next step in the evolution of the computer-user interface is the incorporation of sound, thereby using our sense of hearing in our communication with the computer. This allows our visual and auditory capacities to work in unison, leading to a more effective and efficient interpretation of information received from the computer than by sight alone. In this paper we examine earcons, audio cues used in the computer-user interface to provide information and feedback to the user about computer entities (these include messages and functions, as well as states and labels). The material in this paper is part of a larger study that recommends guidelines for the design and use of audio cues in the computer-user interface. The complete work examines the disciplines of music, psychology, communication theory, advertising, and psychoacoustics to discover how sound is utilized and analyzed in those areas. The resulting information is organized according to the theory of semiotics, the theory of signs, into the syntax, semantics, and pragmatics of communication by sound. Here we present design guidelines for the syntax of earcons. Earcons are constructed from motives, short sequences of notes with a specific rhythm and pitch, embellished by timbre, dynamics, and register. Compound earcons and family earcons are introduced; these are related motives that serve to identify a family of related cues. Examples of earcons are given.
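The syntactic units described (motives with a fixed rhythm and pitch, embellished by timbre and register, combined into compound and family earcons) map naturally onto a small data structure. The representation below is purely illustrative; all names and fields are assumptions, not taken from the report:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Note:
    pitch: str        # e.g. "C4"
    duration: float   # in beats

@dataclass(frozen=True)
class Motive:
    notes: tuple          # the rhythm/pitch skeleton shared by a family
    timbre: str = "sine"  # embellishments that distinguish family members
    register: int = 4

# A family of related cues shares one motive and varies its embellishments.
error_motive = (Note("C4", 0.25), Note("G3", 0.5))
file_error = Motive(error_motive, timbre="square")
net_error = Motive(error_motive, timbre="sine", register=5)

# A compound earcon concatenates motives, e.g. an "action" motive + "object" motive.
delete_motive = Motive((Note("E4", 0.25),))
delete_file = (delete_motive, file_error)
```

Keeping the skeleton (`notes`) separate from the embellishments is exactly what lets a family of earcons remain recognizable as related cues while each member stays distinct.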

  16. Oyster larvae settle in response to habitat-associated underwater sounds.

    Science.gov (United States)

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving

  17. Oyster larvae settle in response to habitat-associated underwater sounds.

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

Full Text Available Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. 
Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a

  18. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    Localizing sounds with different frequency and time domain characteristics in a dynamic listening environment is a challenging task that has not been explored in the field of robotics as much as other perceptual tasks...

  19. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Kotaro Hoshiba

    2017-11-01

    Full Text Available In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.
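
    The record above does not spell out the localization algorithm itself. As a generic illustration, a single microphone pair already yields a direction estimate from the inter-microphone time delay; below is a minimal sketch using the standard GCC-PHAT time-delay estimator (the signal names, sample rate, and synthetic 25-sample delay are all illustrative, not taken from the paper):

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Delay of x relative to y (seconds, positive if x lags) via GCC-PHAT."""
    n = len(x) + len(y)                      # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                   # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs

# Synthetic check: the same noise burst reaches mic 2 twenty-five samples late
fs = 16000
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
delay = 25
mic1 = sig
mic2 = np.concatenate((np.zeros(delay), sig[:-delay]))
tau = gcc_phat(mic2, mic1, fs)               # ~ +25 / 16000 s
```

The delay maps to azimuth via arcsin(c*tau/d) for microphone spacing d and sound speed c; a real UAV deployment must additionally suppress strong rotor ego-noise, which is a central concern of systems like the one described.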

  20. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  1. Techniques and applications for binaural sound manipulation in human-machine interfaces

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.

  2. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization.

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2018-01-01

    The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. An explanation of why an auditory spatial deficit is sometimes observed within blind populations and its task-dependency remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals' reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Interaction of Object Binding Cues in Binaural Masking Pattern Experiments.

    Science.gov (United States)

    Verhey, Jesko L; Lübken, Björn; van de Par, Steven

    2016-01-01

    Object binding cues such as binaural and across-frequency modulation cues are likely to be used by the auditory system to separate sounds from different sources in complex auditory scenes. The present study investigates the interaction of these cues in a binaural masking pattern paradigm where a sinusoidal target is masked by a narrowband noise. It was hypothesised that beating between signal and masker may contribute to signal detection when signal and masker do not spectrally overlap but that this cue could not be used in combination with interaural cues. To test this hypothesis an additional sinusoidal interferer was added to the noise masker with a lower frequency than the noise whereas the target had a higher frequency than the noise. Thresholds increase when the interferer is added. This effect is largest when the spectral interferer-masker and masker-target distances are equal. The result supports the hypothesis that modulation cues contribute to signal detection in the classical masking paradigm and that these are analysed with modulation bandpass filters. A monaural model including an across-frequency modulation process is presented that accounts for this effect. Interestingly, the interferer also affects dichotic thresholds indicating that modulation cues also play a role in binaural processing.
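
    The beating cue invoked here is easy to reproduce numerically: two tones a few hertz apart sum to a carrier whose envelope fluctuates at their difference frequency. A small sketch (the frequencies and duration are arbitrary choices, not the study's stimuli):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                       # 1 s
f1, f2 = 1000.0, 1004.0                      # two components 4 Hz apart
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Envelope via the analytic signal (FFT-based Hilbert transform)
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = 1.0
h[1:len(x) // 2] = 2.0
h[len(x) // 2] = 1.0
envelope = np.abs(np.fft.ifft(X * h))

# The envelope oscillates at |f2 - f1|: find its spectral peak (1 Hz bins)
E = np.abs(np.fft.rfft(envelope - envelope.mean()))
beat_hz = int(np.argmax(E))
```

The 4 Hz envelope fluctuation is exactly the kind of slow modulation that a modulation bandpass filter, as in the model above, could pick up.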

  4. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic value as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...
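
    The time-frequency masking idea can be sketched with a plain short-time Fourier transform: flag frames whose low-frequency energy spikes (a crude heart-beat detector) and zero those bins before resynthesis. This is a toy stand-in for the authors' Gabor-based method; the window length, band edge, and threshold ratio are illustrative choices:

```python
import numpy as np

def suppress_heart_sounds(x, fs, n_fft=256, f_cut=300.0, ratio=4.0):
    """Toy STFT masking: zero low-frequency bins in frames whose low-band
    energy spikes, a crude stand-in for Gabor-domain heart-sound masking."""
    hop = n_fft // 2
    # Periodic Hann window: overlap-added copies at 50% hop sum to 1 exactly
    win = 0.5 - 0.5 * np.cos(2.0 * np.pi * np.arange(n_fft) / n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    k_cut = int(f_cut * n_fft / fs)                  # FFT bins below ~f_cut
    frames = np.stack([np.fft.rfft(win * x[i * hop : i * hop + n_fft])
                       for i in range(n_frames)])
    low = np.sum(np.abs(frames[:, :k_cut]) ** 2, axis=1)
    beat_frames = low > ratio * np.median(low)       # frames flagged as heart sounds
    frames[beat_frames, :k_cut] = 0.0
    y = np.zeros(len(x))
    for i in range(n_frames):                        # overlap-add resynthesis
        y[i * hop : i * hop + n_fft] += np.fft.irfft(frames[i], n=n_fft)
    return y

fs = 4000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
x = 0.1 * rng.standard_normal(fs)                    # breath-sound stand-in
for beat_at in (0.2, 0.9):                           # two synthetic heart beats
    seg = (t > beat_at) & (t < beat_at + 0.05)
    x[seg] += np.sin(2 * np.pi * 60 * t[seg])        # low-frequency thump
y = suppress_heart_sounds(x, fs)
```

A fixed energy threshold is the simplest possible detector; the appeal of a Gabor (or any time-frequency) representation is that the heart sound's concentration in both time and low frequency lets it be removed with little damage to the broadband tracheal sound.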

  5. Mice Lacking the Alpha9 Subunit of the Nicotinic Acetylcholine Receptor Exhibit Deficits in Frequency Difference Limens and Sound Localization

    Directory of Open Access Journals (Sweden)

    Amanda Clause

    2017-06-01

    Full Text Available Sound processing in the cochlea is modulated by cholinergic efferent axons arising from medial olivocochlear neurons in the brainstem. These axons contact outer hair cells in the mature cochlea and inner hair cells during development and activate nicotinic acetylcholine receptors composed of α9 and α10 subunits. The α9 subunit is necessary for mediating the effects of acetylcholine on hair cells as genetic deletion of the α9 subunit results in functional cholinergic de-efferentation of the cochlea. Cholinergic modulation of spontaneous cochlear activity before hearing onset is important for the maturation of central auditory circuits. In α9KO mice, the developmental refinement of inhibitory afferents to the lateral superior olive is disturbed, resulting in decreased tonotopic organization of this sound localization nucleus. In this study, we used behavioral tests to investigate whether the circuit anomalies in α9KO mice correlate with sound localization or sound frequency processing. Using a conditioned lick suppression task to measure sound localization, we found that three out of four α9KO mice showed impaired minimum audible angles. Using a prepulse inhibition of the acoustic startle response paradigm, we found that the ability of α9KO mice to detect sound frequency changes was impaired, whereas their ability to detect sound intensity changes was not. These results demonstrate that cholinergic, nicotinic α9 subunit mediated transmission in the developing cochlea plays an important role in the maturation of hearing.

  6. Towards a Synesthesia Laboratory: Real-time Localization and Visualization of a Sound Source for Virtual Reality Applications

    OpenAIRE

    Kose, Ahmet; Tepljakov, Aleksei; Astapov, Sergei; Draheim, Dirk; Petlenkov, Eduard; Vassiljeva, Kristina

    2018-01-01

    In this paper, we present our findings related to the problem of localization and visualization of a sound source placed in the same room as the listener. The particular effect that we aim to investigate is called synesthesia—the act of experiencing one sense modality as another, e.g., a person may vividly experience flashes of colors when listening to a series of sounds. Towards that end, we apply a series of recently developed methods for detecting sound source in a three-dimensional space ...

  7. Externalization versus Internalization of Sound in Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Ohl, Björn; Laugesen, Søren; Buchholz, Jörg

    2010-01-01

    The externalization of sound, i. e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners, when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise ..., but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose ...

  8. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    sound recordings could be used to identify sites of local airway inflammation. Keywords: airway obstruction, expiration sound pressure level, inspiration sound pressure level, expiration-to-inspiration sound pressure ratio, 7-point analysis

  9. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  10. Parents accidentally substitute similar sounding sibling names more often than dissimilar names.

    Directory of Open Access Journals (Sweden)

    Zenzi M Griffin

    Full Text Available When parents select similar sounding names for their children, do they set themselves up for more speech errors in the future? Questionnaire data from 334 respondents suggest that they do. Respondents whose names shared initial or final sounds with a sibling's reported that their parents accidentally called them by the sibling's name more often than those without such name overlap. Having a sibling of the same gender, similar appearance, or similar age was also associated with more frequent name substitutions. Almost all other name substitutions by parents involved other family members and over 5% of respondents reported a parent substituting the name of a pet, which suggests a strong role for social and situational cues in retrieving personal names for direct address. To the extent that retrieval cues are shared with other people or animals, other names become available and may substitute for the intended name, particularly when names sound similar.

  11. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
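
    The rattling-sound cue is straightforward to synthesize: amplitude-modulate a noise carrier, and the AM rate can then be read off the envelope spectrum. A sketch with arbitrary parameter values (not the study's stimuli):

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs                   # 1 s of synthetic "rattle"
am_hz = 12                               # modulation rate: the putative speed cue
rng = np.random.default_rng(2)
carrier = rng.standard_normal(fs)        # broadband carrier
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_hz * t))
x = envelope * carrier                   # amplitude-modulated rattling sound

# Recover the AM rate: rectify, smooth, then locate the envelope spectral peak
rect = np.abs(x)
kernel = np.ones(200) / 200.0            # ~12.5 ms moving average
smooth = np.convolve(rect, kernel, mode="same")
S = np.abs(np.fft.rfft(smooth - smooth.mean()))
est_am_hz = int(np.argmax(S[:100]))      # 1 Hz bins for a 1 s signal
```

Because a faster slide produces denser impacts, AM rate covaries with speed in natural rattles, which is why a listener (or this rectify-and-smooth front end) can use it as a speed estimate.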

  12. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
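
    A stripped-down numerical analogue of the delay-line principle: correlate the two ear signals over a bank of candidate internal delays and pick the delay that maximizes coincidence. The synthetic signals and the 14-sample ITD below are illustrative; the paper's model uses measured HRTFs and spiking neurons rather than this direct cross-correlation:

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(3)
source = rng.standard_normal(2400)             # 50 ms noise burst

itd = 14                                       # true ITD in samples (~0.29 ms)
left = source
right = np.concatenate((np.zeros(itd), source[:-itd]))

# Bank of internal delay lines: each candidate delay acts as one coincidence
# detector, correlating the left signal with the right signal shifted by lag
max_lag = 40                                   # ~0.83 ms, beyond the human ITD range
lags = np.arange(-max_lag, max_lag + 1)
coincidence = np.array([
    np.dot(left[max_lag:-max_lag],
           right[max_lag + lag : len(right) - max_lag + lag])
    for lag in lags
])
best_itd = int(lags[np.argmax(coincidence)])   # delay (samples) maximizing synchrony
```

The delay at which the two signals synchronize depends on source location but not on the source waveform, which is the signal-independence property the abstract emphasizes.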

  13. Listeners' expectation of room acoustical parameters based on visual cues

    Science.gov (United States)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters, early-to-late reverberant energy ratio and reverberation time, of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues changed how the acoustic environment was perceived. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer

  14. An investigation of the roles of geomagnetic and acoustic cues in whale navigation and orientation

    Science.gov (United States)

    Allen, Ann Nichole

    Many species of whales migrate annually between high-latitude feeding grounds and low-latitude breeding grounds. Yet, very little is known about how these animals navigate during these migrations. This thesis takes a first look at the roles of geomagnetic and acoustic cues in humpback whale navigation and orientation, in addition to documenting some effects of human-produced sound on beaked whales. The tracks of satellite-tagged humpback whales migrating from Hawaii to Alaska were found to have systematic deviations from the most direct route to their destination. For each whale, a migration track was modeled using only geomagnetic inclination and intensity as navigation cues. The directions in which the observed and modeled tracks deviated from the direct route were compared and found to match for 7 out of 9 tracks, which suggests that migrating humpback whales may use geomagnetic cues for navigation. Additionally, in all cases the observed tracks followed a more direct route to the destination than the modeled tracks, indicating that the whales are likely using additional navigational cues to improve their routes. There is a significant amount of sound available in the ocean to aid in navigation and orientation of a migrating whale. This research investigates the possibility that humpback whales migrating near-shore listen to sounds of snapping shrimp to detect the presence of obstacles, such as rocky islands. A visual tracking study was used, together with hydrophone recordings near a rocky island, to determine whether the whales initiated an avoidance reaction at distances that varied with the acoustic detection range of the island. No avoidance reaction was found. Propagation modeling of the snapping shrimp sounds suggested that the detection range of the island was beyond the visual limit of the survey, indicating that snapping shrimp sounds may be suited as a long-range indicator of a rocky island. Lastly, this thesis identifies a prolonged avoidance

  15. A novel method for direct localized sound speed measurement using the virtual source paradigm

    DEFF Research Database (Denmark)

    Byram, Brett; Trahey, Gregg E.; Jensen, Jørgen Arendt

    2007-01-01

    ... registered virtual detector. Between a pair of registered virtual detectors a spherical wave is propagated. By beamforming the received data the time of flight between the two virtual sources can be calculated. From this information the local sound speed can be estimated. Validation of the estimator used both phantom and simulation results. The phantom consisted of two wire targets located near the transducer's axis at depths of 17 and 28 mm. Using this phantom the sound speed between the wires was measured for a homogeneous (water) medium and for two inhomogeneous (DB-grade castor oil and water) mediums. The inhomogeneous mediums were arranged as an oil layer, one 6 mm thick and the other 11 mm thick, on top of a water layer. To complement the phantom studies, sources of error for spatial registration of virtual detectors were simulated. The sources of error presented here are multiple sound ...
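
    The estimator's final step is simple arithmetic: the local sound speed is the known axial separation of the two registered virtual sources divided by the measured time of flight between them. The numbers below reuse the 17 mm and 28 mm wire depths from the record; the time-of-flight value is an assumed, water-like figure, not a measurement from the paper:

```python
# Local sound speed between two registered virtual sources:
# c_local = axial separation / measured time of flight of the spherical wave.
depth_1 = 0.017            # m, first wire target (shallow virtual source)
depth_2 = 0.028            # m, second wire target (deep virtual source)
time_of_flight = 7.417e-6  # s, beamformed arrival time (assumed value)

c_local = (depth_2 - depth_1) / time_of_flight   # m/s, water-like result
```

The accuracy of the estimate therefore rests entirely on how well the virtual detectors are spatially registered, which is why the simulated registration-error sources matter.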

  16. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
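
    The two auditory depth cues under study can be sketched as a simple distance-rendering chain: an inverse-distance gain plus a one-pole low-pass whose cutoff falls with distance. The gain law and the cutoff-versus-distance mapping below are illustrative assumptions, not the article's calibrated values:

```python
import numpy as np

def render_distance(x, fs, distance_m, ref_m=1.0):
    """Inverse-distance attenuation plus distance-dependent high-frequency loss."""
    gain = ref_m / max(distance_m, ref_m)            # -6 dB per doubling of distance
    cutoff_hz = 12000.0 / max(distance_m, ref_m)     # farther -> duller (assumed mapping)
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole low-pass coefficient
    y = np.empty_like(x)
    acc = 0.0
    for n, sample in enumerate(x):                   # y[n] = a*g*x[n] + (1-a)*y[n-1]
        acc = a * gain * sample + (1.0 - a) * acc
        y[n] = acc
    return y

fs = 16000
rng = np.random.default_rng(4)
x = rng.standard_normal(fs // 4)                     # 250 ms broadband effect sound
near = render_distance(x, fs, 1.0)                   # close source: loud and bright
far = render_distance(x, fs, 8.0)                    # distant source: quiet and dull
```

In a soundtrack context the two parameters can be automated independently, which is what lets the experiments above separate the contribution of each cue to perceived depth.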

  17. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers as less suitable for a leadership position, and male (but not female) listeners kept their distance from gay speakers. Together, this research demonstrates that having a gay-/lesbian- rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  18. Matching cue size and task properties in exogenous attention.

    Science.gov (United States)

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  19. Hear where we are: sound, ecology, and sense of place

    CERN Document Server

    Stocker, Michael

    2013-01-01

    Throughout history, hearing and sound perception have been typically framed in the context of how sound conveys information and how that information influences the listener. Hear Where We Are inverts this premise and examines how humans and other hearing animals use sound to establish acoustical relationships with their surroundings. This simple inversion reveals a panoply of possibilities by which we can re-evaluate how hearing animals use, produce, and perceive sound. Nuance in vocalizations become signals of enticement or boundary setting; silence becomes a field ripe in auditory possibilities; predator/prey relationships are infused with acoustic deception, and sounds that have been considered territorial cues become the fabric of cooperative acoustical communities. This inversion also expands the context of sound perception into a larger perspective that centers on biological adaptation within acoustic habitats. Here, the rapid synchronized flight patterns of flocking birds and the tight maneuvering of s...

  20. The challenge of localizing vehicle backup alarms: Effects of passive and electronic hearing protectors, ambient noise level, and backup alarm spectral content

    Directory of Open Access Journals (Sweden)

    Khaled A Alali

    2011-01-01

    Full Text Available A human factors experiment employed a hemi-anechoic sound field in which listeners were required to localize a vehicular backup alarm warning signal (both a standard and a frequency-augmented alarm) in 360-degrees azimuth in pink noise of 60 dBA and 90 dBA. Measures of localization performance included: (1) percentage correct localization, (2) percentage of right-left localization errors, (3) percentage of front-rear localization errors, and (4) localization absolute deviation in degrees from the alarm's actual location. In summary, the data demonstrated that, with some exceptions, normal hearing listeners' ability to localize the backup alarm in 360-degrees azimuth did not improve when wearing augmented hearing protectors (including dichotic sound transmission earmuffs, flat attenuation earplugs, and level-dependent earplugs) as compared to when wearing conventional passive earmuffs or earplugs of the foam or flanged types. Exceptions were that in the 90 dBA pink noise, the flat attenuation earplug yielded significantly better accuracy than the polyurethane foam earplug and both the dichotic and the custom-made diotic electronic sound transmission earmuffs. However, the flat attenuation earplug showed no benefit over the standard pre-molded earplug, the arc earplug, and the passive earmuff. Confusions of front-rear alarm directions were most significant in the 90 dBA noise condition, wherein two types of triple-flanged earplugs exhibited significantly fewer front-rear confusions than either of the electronic muffs. On all measures, the diotic sound transmission earmuff resulted in the poorest localization of any of the protectors because its single-microphone design did not enable interaural cues to be heard. Localization was consistently more degraded in the 90 dBA pink noise as compared with the relatively quiet condition of the 60 dBA pink noise. A frequency-augmented backup alarm, which incorporated 400 Hz and 4000 Hz components

  1. Sound of mind : electrophysiological and behavioural evidence for the role of context, variation and informativity in human speech processing

    NARCIS (Netherlands)

    Nixon, Jessie Sophia

    2014-01-01

    Spoken communication involves transmission of a message which takes physical form in acoustic waves. Within any given language, acoustic cues pattern in language-specific ways along language-specific acoustic dimensions to create speech sound contrasts. These cues are utilized by listeners to

  2. Ormia ochracea as a Model Organism in Sound Localization Experiments and in Inventing Hearing Aids.

    Directory of Open Access Journals (Sweden)

    - -

    1998-09-01

    Full Text Available Hearing aid prescription for patients with hearing loss has always been one of the main concerns of audiologists. Advances in technology have equipped hearing aids with digital and computerized systems that have improved the quality of the sound they deliver. Nature, too, can inspire the design of such instruments, as in the case of the fly discussed in the current article. Ormia ochracea is a small yellow nocturnal fly, a parasitoid of crickets. It is notable for its exceptionally acute directional hearing. In the current article we discuss how it has become a model organism in sound localization experiments and in the design of hearing aids.

  3. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  4. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    Science.gov (United States)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

    Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially on many pristine reefs, which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds reflect a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity; however, the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscapes of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef, we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping shrimp acoustic frequency bands, with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef, raising the possibility of localized acoustic habitats. The strength of diel trends in the lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be of higher relevance for tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.

  5. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others, and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese speakers learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5 h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research, and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  6. A configural dominant account of contextual cueing: Configural cues are stronger than colour cues.

    Science.gov (United States)

    Kunar, Melina A; John, Rebecca; Sweetman, Hollie

    2014-01-01

    Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed "contextual cueing" (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor configural cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural dominant account of CC, where configural cues act as the stronger cue in comparison to colour when they are presented together.

  7. Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.

    Science.gov (United States)

    1987-06-24

    bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS The faces of owls have captured...sound source without moving. The barn owl has binaural and monaural cues as well as cues that operate in relative motion when either the target or the...owl moves. Table 1 lists the cues. Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the

  8. On the influence of microphone array geometry on HRTF-based Sound Source Localization

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    The direction dependence of Head Related Transfer Functions (HRTFs) forms the basis for HRTF-based Sound Source Localization (SSL) algorithms. In this paper, we show how spectral similarities of the HRTFs of different directions in the horizontal plane influence performance of HRTF-based SSL...... algorithms; the more similar the HRTFs of different angles to the HRTF of the target angle, the worse the performance. However, we also show how the microphone array geometry can assist in differentiating between the HRTFs of the different angles, thereby improving performance of HRTF-based SSL algorithms....... Furthermore, to demonstrate the analysis results, we show the impact of HRTFs similarities and microphone array geometry on an exemplary HRTF-based SSL algorithm, called MLSSL. This algorithm is well-suited for this purpose as it allows to estimate the Direction-of-Arrival (DoA) of the target sound using any...
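The abstract's central observation, that HRTF-based SSL performance drops when the HRTFs of other directions resemble the target direction's HRTF, can be illustrated with a small numeric sketch. Everything below is hypothetical: the spectra are synthetic, and a plain Pearson correlation stands in for whatever similarity measure an actual SSL algorithm would use; this is not the MLSSL algorithm from the paper.

```python
import numpy as np

def hrtf_spectral_similarity(hrtf_db, target_idx):
    """Pearson correlation between each direction's magnitude HRTF and
    the target direction's HRTF.

    hrtf_db: (n_directions, n_freq_bins) array of magnitude spectra in dB.
    Directions whose similarity is close to 1 are, on the abstract's
    argument, hard to distinguish from the target.
    """
    target = hrtf_db[target_idx]
    return np.array([np.corrcoef(h, target)[0, 1] for h in hrtf_db])

# Toy example: synthetic "HRTFs" for 8 azimuths, 64 frequency bins,
# drifting progressively away from direction 0.
rng = np.random.default_rng(0)
base = rng.standard_normal(64)
hrtfs = np.stack([base + 0.3 * k * rng.standard_normal(64) for k in range(8)])
sims = hrtf_spectral_similarity(hrtfs, target_idx=0)
print(sims.round(2))  # sims[0] is exactly 1.0 (the target itself)
```

On this toy measure, a microphone geometry that decorrelates the spectra of neighboring directions would push the off-target similarities down, which is the mechanism the paper exploits.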

  9. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect trilling to affect the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are the glottal epochs, the strength of impulses at the glottal epochs, and the instantaneous fundamental frequency of glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of the vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help in spotting trills in continuous speech, are discussed.
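Zero-frequency filtering, the excitation-analysis method named above, passes the signal through integrators centered at 0 Hz and removes the resulting slowly varying trend, after which positive-going zero crossings approximate glottal epochs. The sketch below is a simplified, hedged illustration of that idea, not the authors' implementation; the window length and the synthetic 120-Hz pulse train are illustrative choices.

```python
import numpy as np

def zero_frequency_filter(x, fs, win_ms=10.0, passes=3):
    """Simplified zero-frequency-filtering sketch.

    Difference the signal, pass it through two ideal digital integrators
    (poles at 0 Hz), then remove the slowly varying trend by repeated
    subtraction of a centered moving average. Positive-going zero
    crossings of the result approximate glottal epochs.
    """
    dx = np.diff(x, prepend=x[0])
    y = np.cumsum(np.cumsum(dx))
    w = int(fs * win_ms / 1000) | 1          # odd-length averaging window
    kernel = np.ones(w) / w
    for _ in range(passes):
        y = y - np.convolve(y, kernel, mode="same")
    return y

# Crude stand-in for voiced speech: a 120-Hz glottal-like pulse train.
fs = 8000
t = np.arange(fs) / fs
x = np.sign(np.sin(2 * np.pi * 120 * t))
zff = zero_frequency_filter(x, fs)
epochs = np.flatnonzero((zff[:-1] < 0) & (zff[1:] >= 0))  # roughly one per pitch period
```

For real speech the window length is usually tied to a rough estimate of the pitch period; here 10 ms is simply a plausible value for the 120-Hz toy signal.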

  10. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Full Text Available Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without a ski helmet, using 6 spatially distributed sound stimuli per condition. Each subject had to react as soon as possible on hearing the sound and to signal the side from which it arrived. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the subjects were used to wearing a helmet (p = 0.89). In identifying the timing at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification that the helmet causes. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might delay the moment at which the sound is first heard.

  11. The Benefits of Targeted Memory Reactivation for Consolidation in Sleep are Contingent on Memory Accuracy and Direct Cue-Memory Associations.

    Science.gov (United States)

    Cairney, Scott A; Lindsay, Shane; Sobczak, Justyna M; Paller, Ken A; Gaskell, M Gareth

    2016-05-01

    To investigate how the effects of targeted memory reactivation (TMR) are influenced by memory accuracy prior to sleep and the presence or absence of direct cue-memory associations. 30 participants associated each of 50 pictures with an unrelated word and then with a screen location in two separate tasks. During picture-location training, each picture was also presented with a semantically related sound. The sounds were therefore directly associated with the picture locations but indirectly associated with the words. During a subsequent nap, half of the sounds were replayed in slow wave sleep (SWS). The effect of TMR on memory for the picture locations (direct cue-memory associations) and picture-word pairs (indirect cue-memory associations) was then examined. TMR reduced overall memory decay for recall of picture locations. Further analyses revealed a benefit of TMR for picture locations recalled with a low degree of accuracy prior to sleep, but not those recalled with a high degree of accuracy. The benefit of TMR for low accuracy memories was predicted by time spent in SWS. There was no benefit of TMR for memory of the picture-word pairs, irrespective of memory accuracy prior to sleep. TMR provides the greatest benefit to memories recalled with a low degree of accuracy prior to sleep. The memory benefits of TMR may also be contingent on direct cue-memory associations. © 2016 Associated Professional Sleep Societies, LLC.

  12. A "looming bias" in spatial hearing? Effects of acoustic intensity and spectrum on categorical sound source localization.

    Science.gov (United States)

    McCarthy, Lisa; Olsen, Kirk N

    2017-01-01

    Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /ә/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.
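Stimuli of the kind described, tones whose intensity rises or falls continuously, are straightforward to synthesize. A minimal sketch, assuming a linear ramp in dB and illustrative levels (the paper's exact ramp parameters are not reproduced here):

```python
import numpy as np

FS = 44100  # sample rate (Hz)

def intensity_ramp_tone(freq=1000.0, dur=1.0, level_start_db=-20.0,
                        level_end_db=0.0, fs=FS):
    """A pure tone whose level ramps linearly in dB between two endpoints.

    level_start_db < level_end_db gives an up-ramp ('looming'),
    the reverse gives a down-ramp ('receding'); equal levels give a
    steady-state tone. Levels are relative to full scale and are
    illustrative values, not the paper's.
    """
    t = np.arange(int(dur * fs)) / fs
    level_db = np.linspace(level_start_db, level_end_db, t.size)
    amp = 10.0 ** (level_db / 20.0)          # dB -> linear amplitude
    return amp * np.sin(2 * np.pi * freq * t)

up = intensity_ramp_tone(level_start_db=-20, level_end_db=0)    # looming
down = intensity_ramp_tone(level_start_db=0, level_end_db=-20)  # receding
```

Substituting band-limited noise or a recorded vowel for the sine carrier would give the white-noise and vowel spectrum conditions, with the same amplitude envelope applied.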

  13. Individual differences in using geometric and featural cues to maintain spatial orientation: cue quantity and cue ambiguity are more important than cue type.

    Science.gov (United States)

    Kelly, Jonathan W; McNamara, Timothy P; Bodenheimer, Bobby; Carr, Thomas H; Rieser, John J

    2009-02-01

    Two experiments explored the role of environmental cues in maintaining spatial orientation (sense of self-location and direction) during locomotion. Of particular interest was the importance of geometric cues (provided by environmental surfaces) and featural cues (nongeometric properties provided by striped walls) in maintaining spatial orientation. Participants performed a spatial updating task within virtual environments containing geometric or featural cues that were ambiguous or unambiguous indicators of self-location and direction. Cue type (geometric or featural) did not affect performance, but the number and ambiguity of environmental cues did. Gender differences, interpreted as a proxy for individual differences in spatial ability and/or experience, highlight the interaction between cue quantity and ambiguity. When environmental cues were ambiguous, men stayed oriented with either one or two cues, whereas women stayed oriented only with two. When environmental cues were unambiguous, women stayed oriented with one cue.

  14. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. 
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the

  16. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  17. Root phonotropism: Early signalling events following sound perception in Arabidopsis roots.

    Science.gov (United States)

    Rodrigo-Moreno, Ana; Bazihizina, Nadia; Azzarello, Elisa; Masi, Elisa; Tran, Daniel; Bouteau, François; Baluska, Frantisek; Mancuso, Stefano

    2017-11-01

    Sound is a fundamental form of energy, and it has been suggested that plants can make use of acoustic cues to obtain information about their environments and to alter and fine-tune their growth and development. Despite an increasing body of evidence indicating that sound can influence plant growth and physiology, many questions concerning the effect of sound waves on plant growth and the underlying signalling mechanisms remain unanswered. Here we show that in Arabidopsis thaliana, exposure to sound waves (200 Hz) for 2 weeks induced positive phonotropism in roots, which grew towards the sound source. We found that sound waves very quickly (within minutes) triggered an increase in cytosolic Ca2+, possibly mediated by an influx through the plasma membrane and a release from internal stores. Sound waves likewise elicited rapid reactive oxygen species (ROS) production and K+ efflux. Taken together, these results suggest that changes in ion fluxes (Ca2+ and K+) and an increase in superoxide production are involved in sound perception in plants, as previously established in animals. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. The Effect of Tactile Cues on Auditory Stream Segregation Ability of Musicians and Nonmusicians

    DEFF Research Database (Denmark)

    Slater, Kyle D.; Marozeau, Jeremy

    2016-01-01

    Difficulty perceiving music is often cited as one of the main problems facing hearing-impaired listeners. It has been suggested that musical enjoyment could be enhanced if sound information absent due to impairment is transmitted via other sensory modalities such as vision or touch. In this study...... the random melody. Tactile cues were applied to the listener’s fingers on half of the blocks. Results showed that tactile cues can significantly improve the melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance...... was always better; however, the magnitude of improvement with the introduction of tactile cues was similar in both groups. This study suggests that hearing-impaired listeners could potentially benefit from a system transmitting such information via a tactile modality...

  19. Dementias show differential physiological responses to salient sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ("looming") or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16; progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  20. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as a color change in a salient feature, called the "pop-out effect", can increase task-solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally...... synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo

  1. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sound. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights into students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also examines how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more globally coherent consistencies among the properties that the students use to explain sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  2. Scene-Based Contextual Cueing in Pigeons

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  3. Three-month-old human infants use vocal cues of body size.

    Science.gov (United States)

    Pietraszewski, David; Wertz, Annie E; Bryant, Gregory A; Wynn, Karen

    2017-06-14

    Differences in vocal fundamental (F0) and average formant (Fn) frequencies covary with body size in most terrestrial mammals, such that larger organisms tend to produce lower frequency sounds than smaller organisms, both between species and also across different sex and life-stage morphs within species. Here we examined whether three-month-old human infants are sensitive to the relationship between body size and sound frequencies. Using a violation-of-expectation paradigm, we found that infants looked longer at stimuli inconsistent with the relationship-that is, a smaller organism producing lower frequency sounds, and a larger organism producing higher frequency sounds-than at stimuli that were consistent with it. This effect was stronger for fundamental frequency than it was for average formant frequency. These results suggest that by three months of age, human infants are already sensitive to the biologically relevant covariation between vocalization frequencies and visual cues to body size. This ability may be a consequence of developmental adaptations for building a phenotype capable of identifying and representing an organism's size, sex and life-stage. © 2017 The Author(s).

  4. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Full Text Available Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity, in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues (“forest”, “people”, “traffic”; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3, and imagined sounds were decoded in V1. Cross-classification, i.e., training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3); however, an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strictly modality-specific function of early visual cortex.
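The cross-classification procedure described (train a linear SVM on real-sound patterns, test it on imagined-sound patterns) can be sketched with scikit-learn. The data below are simulated stand-ins for voxel patterns; the noise levels and dimensions are arbitrary illustrative choices, not the study's.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
n_voxels, n_trials = 50, 60
labels = np.repeat([0, 1, 2], n_trials // 3)   # bird / talking / traffic

# Simulated voxel patterns: each sound class has a shared spatial
# pattern, observed with more noise in the imagined condition.
prototypes = rng.standard_normal((3, n_voxels))
real = prototypes[labels] + 0.8 * rng.standard_normal((n_trials, n_voxels))
imagined = prototypes[labels] + 1.2 * rng.standard_normal((n_trials, n_voxels))

# Cross-classification: fit on real-sound patterns, test on imagined ones.
clf = LinearSVC(max_iter=5000).fit(real, labels)
acc = clf.score(imagined, labels)
print(f"cross-classification accuracy: {acc:.2f} (chance = 0.33)")
```

Above-chance transfer in this setup requires that the class-specific pattern be shared across conditions, which is exactly the inference the paper draws from successful cross-classification.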

  5. Auditory Verbal Cues Alter the Perceived Flavor of Beverages and Ease of Swallowing: A Psychometric and Electrophysiological Analysis

    Directory of Open Access Journals (Sweden)

    Aya Nakamura

    2013-01-01

    Full Text Available We investigated the possible effects of auditory verbal cues on flavor perception and swallow physiology for younger and elderly participants. Apple juice, aojiru (grass juice), and water were ingested with or without auditory verbal cues. Flavor perception and ease of swallowing were measured using a visual analog scale, and swallow physiology by surface electromyography and cervical auscultation. The auditory verbal cues had significant positive effects on flavor, ease of swallowing, and swallow physiology. The taste score and the ease-of-swallowing score significantly increased when the participant’s anticipation was primed by accurate auditory verbal cues. There was no significant effect of auditory verbal cues on distaste score. Regardless of age, the maximum suprahyoid muscle activity significantly decreased when a beverage was ingested without auditory verbal cues. The interval between the onset of swallowing sounds and the peak timing point of the infrahyoid muscle activity significantly shortened when the anticipation induced by the cue was contradicted in the elderly participant group. These results suggest that auditory verbal cues can improve the perceived flavor of beverages and swallow physiology.

  6. Analysis of engagement behavior in children during dyadic interactions using prosodic cues.

    Science.gov (United States)

    Gupta, Rahul; Bone, Daniel; Lee, Sungbok; Narayanan, Shrikanth

    2016-05-01

    Child engagement is defined as the interaction of a child with his/her environment in a contextually appropriate manner. Engagement behavior in children is linked to socio-emotional and cognitive state assessment with enhanced engagement identified with improved skills. A vast majority of studies however rely solely, and often implicitly, on subjective perceptual measures of engagement. Access to automatic quantification could assist researchers/clinicians to objectively interpret engagement with respect to a target behavior or condition, and furthermore inform mechanisms for improving engagement in various settings. In this paper, we present an engagement prediction system based exclusively on vocal cues observed during structured interaction between a child and a psychologist involving several tasks. Specifically, we derive prosodic cues that capture engagement levels across the various tasks. Our experiments suggest that a child's engagement is reflected not only in the vocalizations, but also in the speech of the interacting psychologist. Moreover, we show that prosodic cues are informative of the engagement phenomena not only as characterized over the entire task (i.e., global cues), but also in short term patterns (i.e., local cues). We perform a classification experiment assigning the engagement of a child into three discrete levels achieving an unweighted average recall of 55.8% (chance is 33.3%). While the systems using global cues and local level cues are each statistically significant in predicting engagement, we obtain the best results after fusing these two components. We perform further analysis of the cues at local and global levels to achieve insights linking specific prosodic patterns to the engagement phenomenon. We observe that while the performance of our model varies with task setting and interacting psychologist, there exist universal prosodic patterns reflective of engagement.

  7. No two cues are alike: Depth of learning during infancy is dependent on what orients attention.

    Science.gov (United States)

    Wu, Rachel; Kirkham, Natasha Z

    2010-10-01

    Human infants develop a variety of attentional mechanisms that allow them to extract relevant information from a cluttered multimodal world. We know that both social and nonsocial cues shift infants' attention, but not how these cues differentially affect learning of multimodal events. Experiment 1 used social cues to direct 8- and 4-month-olds' attention to two audiovisual events (i.e., animations of a cat or dog accompanied by particular sounds) while identical distractor events played in another location. Experiment 2 directed 8-month-olds' attention with colorful flashes to the same events. Experiment 3 measured baseline learning without attention cues both with the familiarization and test trials (no cue condition) and with only the test trials (test control condition). The 8-month-olds exposed to social cues showed specific learning of audiovisual events. The 4-month-olds displayed only general spatial learning from social cues, suggesting that specific learning of audiovisual events from social cues may be a function of experience. Infants cued with the colorful flashes looked indiscriminately to both cued locations during test (similar to the 4-month-olds learning from social cues) despite attending for equal duration to the training trials as the 8-month-olds with the social cues. Results from Experiment 3 indicated that the learning effects in Experiments 1 and 2 resulted from exposure to the different cues and multimodal events. We discuss these findings in terms of the perceptual differences and relevance of the cues. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Cue reactivity towards shopping cues in female participants.

    Science.gov (United States)

    Starcke, Katrin; Schlereth, Berenike; Domass, Debora; Schöler, Tobias; Brand, Matthias

    2013-03-01

    Background and aims: It is currently under debate whether pathological buying can be considered as a behavioural addiction. Addictions have often been investigated with cue-reactivity paradigms to assess subjective, physiological and neural craving reactions. The current study aims at testing whether cue reactivity towards shopping cues is related to pathological buying tendencies. Methods: A sample of 66 non-clinical female participants rated shopping related pictures concerning valence, arousal, and subjective craving. In a subgroup of 26 participants, electrodermal reactions towards those pictures were additionally assessed. Furthermore, all participants were screened concerning pathological buying tendencies and baseline craving for shopping. Results: Results indicate a relationship between the subjective ratings of the shopping cues and pathological buying tendencies, even if baseline craving for shopping was controlled for. Electrodermal reactions were partly related to the subjective ratings of the cues. Conclusions: Cue reactivity may be a potential correlate of pathological buying tendencies. Thus, pathological buying may be accompanied by craving reactions towards shopping cues. Results support the assumption that pathological buying can be considered as a behavioural addiction. From a methodological point of view, results support the view that the cue-reactivity paradigm is suited for the investigation of craving reactions in pathological buying and future studies should implement this paradigm in clinical samples.

  9. Phonetic Category Cues in Adult-Directed Speech: Evidence from Three Languages with Distinct Vowel Characteristics

    Science.gov (United States)

    Pons, Ferran; Biesanz, Jeremy C.; Kajikawa, Sachiyo; Fais, Laurel; Narayan, Chandan R.; Amano, Shigeaki; Werker, Janet F.

    2012-01-01

    Using an artificial language learning manipulation, Maye, Werker, and Gerken (2002) demonstrated that infants' speech sound categories change as a function of the distributional properties of the input. In a recent study, Werker et al. (2007) showed that infant-directed speech (IDS) input contains reliable acoustic cues that support distributional…

  10. A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Maximo Cobos

    2017-01-01

    Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last decades. In general terms, the localization of acoustic sources can be achieved by studying energy, temporal, and/or directional features of the incoming sound at different microphones and using a suitable model that relates those features to the spatial location of the source (or sources) of interest. This paper reviews common approaches for source localization in WASNs that are focused on different types of acoustic features, namely, the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
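As a rough sketch of the TDOA-based approach reviewed above: the delay between two microphone signals can be estimated by brute-force cross-correlation over integer sample lags, then converted to a far-field direction of arrival via the usual `asin` relation. The microphone spacing, sampling rate, and impulse signals below are illustrative assumptions, not from the survey.

```python
import math

def tdoa_by_xcorr(x, y, fs):
    """Estimate the delay of y relative to x, in seconds, by picking the
    integer sample lag that maximizes the cross-correlation."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += x[i] * y[j]
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag / fs

def doa_from_tdoa(tau, mic_distance, c=343.0):
    """Far-field direction of arrival (radians from broadside) for a
    two-microphone array, given the TDOA tau and mic spacing in meters."""
    arg = max(-1.0, min(1.0, c * tau / mic_distance))
    return math.asin(arg)

# Demo: a click that reaches microphone y three samples after microphone x.
fs = 8000.0
x = [0.0] * 32
x[10] = 1.0
y = [0.0] * 32
y[13] = 1.0
tau = tdoa_by_xcorr(x, y, fs)      # 3-sample delay -> 3/fs seconds
theta = doa_from_tdoa(tau, 0.15)   # assumed 15 cm mic spacing
```

Practical systems use FFT-based generalized cross-correlation (e.g. GCC-PHAT) with sub-sample interpolation; the brute-force loop above only illustrates the principle.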

  11. Dementias show differential physiological responses to salient sounds

    Directory of Open Access Journals (Sweden)

    Phillip David Fletcher

    2015-03-01

    Full Text Available Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (‘looming’) or less salient withdrawing sounds. Pupil dilatation responses and behavioural rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n=10; behavioural variant frontotemporal dementia, n=16; progressive non-fluent aphasia, n=12; amnestic Alzheimer’s disease, n=10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioural response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer’s disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  12. Dementias show differential physiological responses to salient sounds

    Science.gov (United States)

    Fletcher, Phillip D.; Nicholas, Jennifer M.; Shakespeare, Timothy J.; Downey, Laura E.; Golden, Hannah L.; Agustus, Jennifer L.; Clark, Camilla N.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (“looming”) or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases. PMID:25859194

  13. The Britannica Guide to Sound and Light

    CERN Document Server

    2010-01-01

    Audio and visual cues facilitate some of our most powerful sensory experiences and embed themselves deeply into our memories and subconscious. Sound and light waves interact with our ears and eyes, our biological interpreters, creating a textural experience and relationship with the world around us. This well-researched volume explores the science behind acoustics and optics and the broad application they have to everything from listening to music and watching television to the ultrasonic and laser technologies that are crucial to the medical field.

  14. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    Science.gov (United States)

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  15. Reproduction of nearby sound sources using higher-order ambisonics with practical loudspeaker arrays

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2012-01-01

    In order to reproduce nearby sound sources with distant loudspeakers to a single listener, the near field compensated (NFC) method for higher-order Ambisonics (HOA) has been previously proposed. In practical realization, this method requires the use of regularization functions. This study analyzes the impact of two existing and a new proposed regularization function on the reproduced sound fields and on the main auditory cue for nearby sound sources outside the median plane, i.e., low-frequency interaural level differences (ILDs). The proposed regularization function led to a better reproduction of point source sound fields compared to existing regularization functions for NFC-HOA. Measurements in realistic playback environments showed that, for very close sources, significant ILDs for frequencies above about 250 Hz can be reproduced with NFC-HOA and the proposed regularization function whereas...
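The ILD cue discussed above can be quantified from a pair of ear signals as the ratio of their RMS levels expressed in decibels. The toy computation below, with made-up sinusoidal ear signals, is only a sketch of that measurement, not of the NFC-HOA reproduction method itself.

```python
import math

def rms(signal):
    """Root-mean-square level of a sampled signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def ild_db(left, right):
    """Interaural level difference in dB (positive = left ear louder)."""
    return 20.0 * math.log10(rms(left) / rms(right))

# Demo: a nearby source on the listener's left; the right-ear signal is a
# copy attenuated by a factor of 2, giving an ILD of 20*log10(2) ~ 6 dB.
left = [math.sin(2.0 * math.pi * 200.0 * t / 8000.0) for t in range(8000)]
right = [0.5 * s for s in left]
ild = ild_db(left, right)
```

For real nearby sources the attenuation is frequency-dependent and grows as the source approaches the head, which is why low-frequency ILDs serve as a distance cue in this regime.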

  16. Task-irrelevant novel sounds improve attentional performance in children with and without ADHD

    Directory of Open Access Journals (Sweden)

    Jana eTegelbeckers

    2016-01-01

    Full Text Available Task-irrelevant salient stimuli involuntarily capture attention and can lead to distraction from an ongoing task, especially in children with ADHD. However, there has been tentative evidence that the presentation of novel sounds can have beneficial effects on cognitive performance. In the present study, we aimed to investigate the influence of novel sounds, compared to no sound and a repeatedly presented standard sound, on attentional performance in children and adolescents with and without ADHD. We therefore had 32 patients with ADHD and 32 typically developing children and adolescents (8 to 13 years) execute a flanker task in which each trial was preceded either by a repeatedly presented standard sound (33%), an unrepeated novel sound (33%), or no auditory stimulation (33%). Task-irrelevant novel sounds facilitated attentional performance similarly in children with and without ADHD, as indicated by reduced omission error rates, reaction times, and reaction time variability, without compromising performance accuracy. By contrast, standard sounds, while also reducing omission error rates and reaction times, led to increased commission error rates. Therefore, the beneficial effect of novel sounds exceeds mere cueing of the target display, potentially reflecting increased alerting and/or enhanced behavioral control.

  17. Cue combination encoding via contextual modulation of V1 and V2 neurons

    Directory of Open Access Journals (Sweden)

    Zarella MD

    2016-10-01

    Full Text Available Mark D Zarella, Daniel Y Ts’o Department of Neurosurgery, SUNY Upstate Medical University, Syracuse, NY, USA Abstract: Neurons in early visual cortical areas encode the local properties of a stimulus in a number of different feature dimensions such as color, orientation, and motion. It has been shown, however, that stimuli presented well beyond the confines of the classical receptive field can augment these responses in a way that emphasizes these local attributes within the greater context of the visual scene. This mechanism imparts global information to cells that are otherwise considered local feature detectors and can potentially serve as an important foundation for surface segmentation, texture representation, and figure–ground segregation. The role of early visual cortex toward these functions remains somewhat of an enigma, as it is unclear how surface segmentation cues are integrated from multiple feature dimensions. We examined the impact of orientation- and motion-defined surface segmentation cues in V1 and V2 neurons using a stimulus in which the two features are completely separable. We find that, although some cells are modulated in a cue-invariant manner, many cells are influenced by only one cue or the other. Furthermore, cells that are modulated by both cues tend to be more strongly affected when both cues are presented together than when presented individually. These results demonstrate two mechanisms by which cue combinations can enhance salience. We find that feature-specific populations are more frequently encountered in V1, while cue additivity is more prominent in V2. These results highlight how two strongly interconnected areas at different stages in the cortical hierarchy can potentially contribute to scene segmentation. Keywords: striate, extrastriate, extraclassical, texture, segmentation

  18. Visual form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    OpenAIRE

    Rick Van Der Zwan; Anna Brooks; Duncan Blair; Coralia Machatch; Graeme Hacker

    2011-01-01

    Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ...

  19. Global cue inconsistency diminishes learning of cue validity

    Directory of Open Access Journals (Sweden)

    Tony Wang

    2016-11-01

    Full Text Available We present a novel two-stage probabilistic learning task that examines the participants’ ability to learn and utilize valid cues across several levels of probabilistic feedback. In the first stage, participants sample from one of three cues that gives predictive information about the outcome of the second stage. Participants are rewarded for correct prediction of the outcome in stage two. Only one of the three cues gives valid predictive information and thus participants can maximise their reward by learning to sample from the valid cue. The validity of this predictive information, however, is reinforced across several levels of probabilistic feedback. A second manipulation involved changing the consistency of the predictive information in stage one and the outcome in stage two. The results show that participants, with higher probabilistic feedback, learned to utilise the valid cue. In inconsistent task conditions, however, participants were significantly less successful in utilising higher validity cues. We interpret this result as implying that learning in probabilistic categorization is based on developing a representation of the task that allows for goal-directed action.
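Learning which of several cues is valid, as in the two-stage task above, can be sketched as incremental tracking of each cue's hit rate with a delta rule. The abstract does not give the task's exact reinforcement schedule, so the validities, learning rate, trial count, and seed below are all assumptions for illustration.

```python
import random

def learn_cue_validity(true_validities, n_trials=3000, alpha=0.02, seed=1):
    """Track an estimate of each cue's validity with a delta rule:
    estimate += alpha * (outcome - estimate), where outcome is 1 when the
    sampled cue's prediction matches the stage-two result, else 0."""
    rng = random.Random(seed)
    estimates = [0.5] * len(true_validities)     # start uncommitted
    for _ in range(n_trials):
        cue = rng.randrange(len(true_validities))        # sample one cue
        correct = rng.random() < true_validities[cue]    # probabilistic feedback
        outcome = 1.0 if correct else 0.0
        estimates[cue] += alpha * (outcome - estimates[cue])
    return estimates

# One valid cue (80% predictive) among two uninformative ones (50%).
est = learn_cue_validity([0.5, 0.8, 0.5])
best_cue = max(range(len(est)), key=lambda i: est[i])
```

A reward-maximizing learner would then preferentially sample `best_cue`; the finding above suggests that inconsistent feedback flattens exactly this kind of validity estimate.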

  20. 3-D Sound for Virtual Reality and Multimedia

    Science.gov (United States)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  1. Flights of fear: a mechanical wing whistle sounds the alarm in a flocking bird.

    Science.gov (United States)

    Hingee, Mae; Magrath, Robert D

    2009-12-07

    Animals often form groups to increase collective vigilance and allow early detection of predators, but this benefit of sociality relies on rapid transfer of information. Among birds, alarm calls are not present in all species, while other proposed mechanisms of information transfer are inefficient. We tested whether wing sounds can encode reliable information on danger. Individuals taking off in alarm fly more quickly or ascend more steeply, so may produce different sounds in alarmed than in routine flight, which then act as reliable cues of alarm, or honest 'index' signals in which a signal's meaning is associated with its method of production. We show that crested pigeons, Ocyphaps lophotes, which have modified flight feathers, produce distinct wing 'whistles' in alarmed flight, and that individuals take off in alarm only after playback of alarmed whistles. Furthermore, amplitude-manipulated playbacks showed that response depends on whistle structure, such as tempo, not simply amplitude. We believe this is the first demonstration that flight noise can send information about alarm, and suggest that take-off noise could provide a cue of alarm in many flocking species, with feather modification evolving specifically to signal alarm in some. Similar reliable cues or index signals could occur in other animals.

  2. Auditory disorders and acquisition of the ability to localize sound in children born to HIV-positive mothers

    Directory of Open Access Journals (Sweden)

    Carla Gentile Matas

    Full Text Available The objective of the present study was to evaluate children born to HIV-infected mothers and to determine whether such children present auditory disorders or poor acquisition of the ability to localize sound. The population studied included 143 children (82 males and 61 females), ranging in age from one month to 30 months. The children were divided into three groups according to the classification system devised in 1994 by the Centers for Disease Control and Prevention: infected; seroreverted; and exposed. The children were then submitted to audiological evaluation, including behavioral audiometry, visual reinforcement audiometry and measurement of acoustic immittance. Statistical analysis showed that the incidence of auditory disorders was significantly higher in the infected group. In the seroreverted and exposed groups, there was a marked absence of auditory disorders. In the infected group as a whole, the findings were suggestive of central auditory disorders. Evolution of the ability to localize sound was found to be poorer among the children in the infected group than among those in the seroreverted and exposed groups.

  3. Spectro-temporal cues enhance modulation sensitivity in cochlear implant users

    Science.gov (United States)

    Zheng, Yi; Escabí, Monty; Litovsky, Ruth Y.

    2018-01-01

    Although speech understanding is highly variable amongst cochlear implant (CI) subjects, the remarkably high speech recognition performance of many CI users is unexpected and not well understood. Numerous factors, including neural health and degradation of the spectral information in the speech signal of CIs, likely contribute to speech understanding. We studied the ability to use spectro-temporal modulations, which may be critical for speech understanding and discrimination, and hypothesize that CI users adopt a different perceptual strategy than normal-hearing (NH) individuals, whereby they rely more heavily on joint spectro-temporal cues to enhance detection of auditory cues. Modulation detection sensitivity was studied in CI users and NH subjects using broadband “ripple” stimuli that were modulated spectrally, temporally, or jointly, i.e., spectro-temporally. The spectro-temporal modulation transfer functions of CI users and NH subjects were decomposed into spectral and temporal dimensions and compared to those subjects’ spectral-only and temporal-only modulation transfer functions. In CI users, the joint spectro-temporal sensitivity was better than that predicted by spectral-only and temporal-only sensitivity, indicating a heightened spectro-temporal sensitivity. Such an enhancement through the combined integration of spectral and temporal cues was not observed in NH subjects. The unique use of spectro-temporal cues by CI patients can yield benefits for use of cues that are important for speech understanding. This finding has implications for developing sound processing strategies that may rely on joint spectro-temporal modulations to improve speech comprehension of CI users, and the findings of this study may be valuable for developing clinical assessment tools to optimize CI processor performance. PMID:28601530

  4. Spectro-temporal cues enhance modulation sensitivity in cochlear implant users.

    Science.gov (United States)

    Zheng, Yi; Escabí, Monty; Litovsky, Ruth Y

    2017-08-01

    Although speech understanding is highly variable amongst cochlear implant (CI) subjects, the remarkably high speech recognition performance of many CI users is unexpected and not well understood. Numerous factors, including neural health and degradation of the spectral information in the speech signal of CIs, likely contribute to speech understanding. We studied the ability to use spectro-temporal modulations, which may be critical for speech understanding and discrimination, and hypothesize that CI users adopt a different perceptual strategy than normal-hearing (NH) individuals, whereby they rely more heavily on joint spectro-temporal cues to enhance detection of auditory cues. Modulation detection sensitivity was studied in CI users and NH subjects using broadband "ripple" stimuli that were modulated spectrally, temporally, or jointly, i.e., spectro-temporally. The spectro-temporal modulation transfer functions of CI users and NH subjects were decomposed into spectral and temporal dimensions and compared to those subjects' spectral-only and temporal-only modulation transfer functions. In CI users, the joint spectro-temporal sensitivity was better than that predicted by spectral-only and temporal-only sensitivity, indicating a heightened spectro-temporal sensitivity. Such an enhancement through the combined integration of spectral and temporal cues was not observed in NH subjects. The unique use of spectro-temporal cues by CI patients can yield benefits for use of cues that are important for speech understanding. This finding has implications for developing sound processing strategies that may rely on joint spectro-temporal modulations to improve speech comprehension of CI users, and the findings of this study may be valuable for developing clinical assessment tools to optimize CI processor performance. Copyright © 2017 Elsevier B.V. All rights reserved.
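The spectrally, temporally, and jointly modulated "ripple" stimuli described above share a common envelope form: a sinusoid that drifts jointly in time and in log-frequency. The sketch below generates that envelope; the parameter names and values are illustrative conventions from the ripple literature, not the study's exact stimulus code.

```python
import math

def ripple_envelope(t, x, rate_hz, density_cyc_per_oct, depth, phase=0.0):
    """Spectro-temporal ripple envelope at time t (seconds) and spectral
    position x (octaves above the lowest carrier component). depth in [0, 1]
    sets modulation depth; rate=0 gives a purely spectral ripple and
    density=0 a purely temporal one."""
    return 1.0 + depth * math.sin(
        2.0 * math.pi * (rate_hz * t + density_cyc_per_oct * x) + phase
    )

# Purely temporal ripple: density = 0, so the envelope is constant across x.
a = ripple_envelope(0.1, 0.0, rate_hz=4.0, density_cyc_per_oct=0.0, depth=1.0)
b = ripple_envelope(0.1, 2.5, rate_hz=4.0, density_cyc_per_oct=0.0, depth=1.0)
```

A full stimulus applies this envelope to a dense bank of tone carriers; detection thresholds as a function of (rate, density) then trace out the spectro-temporal modulation transfer function analyzed in the study.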

  5. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    Directory of Open Access Journals (Sweden)

    Laura Hausmann

    Full Text Available BACKGROUND: When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. METHODOLOGY/PRINCIPAL FINDINGS: HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. CONCLUSIONS/SIGNIFICANCE: The facial ruff (a) improves azimuthal sound localization by increasing the ITD range and (b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation.
The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the
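The virtual-ruff-removal stimuli described above were generated by filtering broadband noise with measured HRTFs; in the time domain this is FIR convolution of a source signal with a left/right pair of head-related impulse responses (HRIRs). The 3-tap "HRIRs" below are invented placeholders standing in for measured filters, which in practice are hundreds of taps long.

```python
import random

def convolve(signal, impulse_response):
    """Direct-form FIR convolution (full-length output)."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(256)]  # broadband source

# Hypothetical 3-tap HRIRs: the right ear receives the sound one sample
# later and attenuated, a crude stand-in for a source left of midline.
hrir_left = [1.0, 0.3, 0.1]
hrir_right = [0.0, 0.6, 0.2]

virtual_left = convolve(noise, hrir_left)
virtual_right = convolve(noise, hrir_right)
```

Presenting `virtual_left`/`virtual_right` over earphones reproduces the ITD, ILD, and spectral cues baked into the chosen HRIR pair, which is what allows a "ruff-removed" condition to be simulated purely acoustically.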

  6. When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.

    Science.gov (United States)

    Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola

    2017-11-01

    Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Counterconditioning reduces cue-induced craving and actual cue-elicited consumption.

    Science.gov (United States)

    Van Gucht, Dinska; Baeyens, Frank; Vansteenwegen, Debora; Hermans, Dirk; Beckers, Tom

    2010-10-01

    Cue-induced craving is not easily reduced by an extinction or exposure procedure and may constitute an important route toward relapse in addictive behavior after treatment. In the present study, we investigated the effectiveness of counterconditioning as an alternative procedure to reduce cue-induced craving, in a nonclinical population. We found that a cue, initially paired with chocolate consumption, did not cease to elicit craving for chocolate after extinction (repeated presentation of the cue without chocolate consumption), but did so after counterconditioning (repeated pairing of the cue with consumption of a highly disliked liquid, Polysorbate 20). This effect persisted after 1 week. Counterconditioning moreover was more effective than extinction in disrupting reported expectancy to get to eat chocolate, and also appeared to be more effective in reducing actual cue-elicited chocolate consumption. These results suggest that counterconditioning may be more promising than cue exposure for the prevention of relapse in addictive behavior. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  8. Retrieval of bilingual autobiographical memories: effects of cue language and cue imageability.

    Science.gov (United States)

    Mortensen, Linda; Berntsen, Dorthe; Bohn, Ocke-Schwen

    2015-01-01

    An important issue in theories of bilingual autobiographical memory is whether linguistically encoded memories are represented in language-specific stores or in a common language-independent store. Previous research has found that autobiographical memory retrieval is facilitated when the language of the cue is the same as the language of encoding, consistent with language-specific memory stores. The present study examined whether this language congruency effect is influenced by cue imageability. Danish-English bilinguals retrieved autobiographical memories in response to Danish and English high- or low-imageability cues. Retrieval latencies were shorter to Danish than English cues and shorter to high- than low-imageability cues. Importantly, the cue language effect was stronger for low- than high-imageability cues. To examine the relationship between cue language and the language of internal retrieval, participants identified the language in which the memories were internally retrieved. More memories were retrieved when the cue language was the same as the internal language than when the cue was in the other language, and more memories were identified as being internally retrieved in Danish than English, regardless of the cue language. These results provide further evidence for language congruency effects in bilingual memory and suggest that this effect is influenced by cue imageability.

  9. The effect of looming and receding sounds on the perceived in-depth orientation of depth-ambiguous biological motion figures.

    Directory of Open Access Journals (Sweden)

    Ben Schouten

    Full Text Available BACKGROUND: The focus in the research on biological motion perception traditionally has been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds of which the intensity level was rising (looming), falling (receding), or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual level rather than at the decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether either an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. CONCLUSIONS/SIGNIFICANCE: The present results suggest that looming and receding sounds alter the judgements of the in-depth orientation of depth-ambiguous point-light walkers. While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds act.

  10. Enhanced Soundings for Local Coupling Studies Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Craig R [University at Albany, State University of New York; Santanello, Joseph A [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Gentine, Pierre [Columbia Univ., New York, NY (United States)

    2016-04-01

    This document presents initial analyses of the enhanced radiosonde observations obtained during the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Enhanced Soundings for Local Coupling Studies Field Campaign (ESLCS), which took place at the ARM Southern Great Plains (SGP) Central Facility (CF) from June 15 to August 31, 2015. During ESLCS, routine 4-times-daily radiosonde measurements at the ARM-SGP CF were augmented on 12 days (June 18 and 29; July 11, 14, 19, and 26; August 15, 16, 21, 25, 26, and 27) with daytime 1-hourly radiosondes and 10-minute ‘trailer’ radiosondes every 3 hours. These 12 intensive operational period (IOP) days were selected on the basis of prior-day qualitative forecasts of potential land-atmosphere coupling strength. The campaign captured 2 dry soil convection advantage days (June 29 and July 14) and 10 atmospherically controlled days. Other noteworthy IOP events include: 2 soil dry-down sequences (July 11-14-19 and August 21-25-26), a 2-day clear-sky case (August 15-16), and the passing of Tropical Storm Bill (June 18). To date, the ESLCS data set constitutes the highest-temporal-resolution sampling of the evolution of the daytime planetary boundary layer (PBL) using radiosondes at the ARM-SGP. The data set is expected to contribute to: (1) improved understanding and modeling of the diurnal evolution of the PBL, particularly with regard to the role of local soil wetness, and (2) new insights into the appropriateness of current ARM-SGP CF thermodynamic sampling strategies.

  11. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech. PMID:29249997

  12. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion.

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally "happy") pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers "trade off" cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music, widely recognized for its artistic significance, complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.

  13. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Directory of Open Access Journals (Sweden)

    Michael Schutz

    2017-11-01

    Full Text Available Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.

  14. Great cormorants (Phalacrocorax carbo) can detect auditory cues while diving

    Science.gov (United States)

    Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula; Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-06-01

    In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater than expected, and the hearing thresholds are comparable to seals and toothed whales in the frequency band 1-4 kHz. This opens up the possibility of cormorants and other aquatic birds having special adaptations for underwater hearing and making use of underwater acoustic cues from, e.g., conspecifics, their surroundings, as well as prey and predators.

  15. Effect of three cueing devices for people with Parkinson's disease with gait initiation difficulties.

    Science.gov (United States)

    McCandless, Paula J; Evans, Brenda J; Janssen, Jessie; Selfe, James; Churchill, Andrew; Richards, Jim

    2016-02-01

    Freezing of gait (FOG) remains one of the most common debilitating aspects of Parkinson's disease and has been linked to injuries, falls and reduced quality of life. Although commercially available portable cueing devices exist that claim to assist with overcoming freezing, their immediate effectiveness in overcoming gait initiation failure is currently unknown. This study investigated the effects of three different types of cueing device in people with Parkinson's disease who experience freezing. Twenty participants with idiopathic Parkinson's disease who experienced freezing during gait but who were able to walk short distances indoors independently were recruited. At least three attempts at gait initiation were recorded using a 10-camera Qualisys motion analysis system and four force platforms. Test conditions were: Laser Cane, sound metronome, vibrating metronome, walking stick, and no intervention. During testing, 12 of the 20 participants had freezing episodes; from these participants, 100 freezing and 91 non-freezing trials were recorded. Clear differences in the movement patterns were seen between freezing and non-freezing episodes. The Laser Cane was the most effective cueing device at improving forwards/backwards and side-to-side movement and had the fewest freezing episodes. The walking stick also showed significant improvements compared to the other conditions. The vibration metronome appeared to disrupt movement compared to the sound metronome at the same beat frequency. This study identified differences in the movement patterns between freezing episodes and non-freezing episodes, and identified immediate improvements during gait initiation when using the Laser Cane over the other interventions. Copyright © 2015. Published by Elsevier B.V.

  16. Sustained Magnetic Responses in Temporal Cortex Reflect Instantaneous Significance of Approaching and Receding Sounds.

    Directory of Open Access Journals (Sweden)

    Dominik R Bach

    Full Text Available Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate the dynamic encoding of approaching and receding sounds in humans. We compared magnetic responses to intensity envelopes of complex sounds to those of white noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources, where the signal unfolded in a manner reminiscent of evidence accumulation. This may help an understanding of how acoustic percepts are evaluated as behaviourally relevant, where our results highlight a crucial role of cortical areas.

  17. Sensory modality of smoking cues modulates neural cue reactivity.

    Science.gov (United States)

    Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J

    2013-01-01

    Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.

  18. Distributed acoustic cues for caller identity in macaque vocalization.

    Science.gov (United States)

    Fukushima, Makoto; Doyle, Alex M; Mullarkey, Matthew P; Mishkin, Mortimer; Averbeck, Bruno B

    2015-12-01

    Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured 'coo' call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral-temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call's fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue of the caller body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized.
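The record's central point, that single acoustic features are unreliable while their combination identifies callers well, can be illustrated with a minimal regularized linear classifier. The feature values and noise levels below are invented for illustration and are not macaque data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "calls": two callers with slightly different mean acoustic
# features (f0 in Hz, duration in s, entropy) but large within-caller
# variability. All numbers are illustrative, not measured values.
n = 200
means = np.array([[610.0, 0.42, 3.1],      # caller A
                  [580.0, 0.48, 3.4]])     # caller B
noise_sd = np.array([40.0, 0.08, 0.5])
X = np.vstack([m + rng.standard_normal((n, 3)) * noise_sd for m in means])
y = np.repeat([0, 1], n)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features

def ridge_fit(X, y, lam=1.0):
    """Regularized (ridge) linear classifier: solve (X'X + lam I) w = X'y
    with +/-1 targets; classify by the sign of the linear score."""
    Xb = np.c_[X, np.ones(len(X))]         # append a bias column
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ (2 * y - 1))

def accuracy(X, y, w):
    Xb = np.c_[X, np.ones(len(X))]
    return float(np.mean((Xb @ w > 0) == (y == 1)))

acc_f0 = accuracy(X[:, :1], y, ridge_fit(X[:, :1], y))  # one feature
acc_all = accuracy(X, y, ridge_fit(X, y))               # all features
```

With overlapping single-feature distributions, the combined classifier separates the two callers more reliably than fundamental frequency alone, mirroring the pattern the abstract reports.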

  19. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe eTerrier

    2013-09-01

    Full Text Available It has been observed that time series of gait parameters (stride length (SL), stride time (ST), and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST, and SS), from which the statistical persistence among consecutive strides can be assessed, and the trajectory of the center of pressure, from which the LDS can be estimated. In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the inverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely the indication of a more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL, and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
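Of the two nonlinear measures compared in this record, the DFA scaling exponent is the simpler to sketch. A minimal numpy implementation follows, checked against white noise, for which alpha should be near 0.5; per the abstract, cued stride-time series would instead yield alpha below 0.5 (anti-persistence).

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis. Returns the scaling exponent
    alpha: ~0.5 for uncorrelated noise, >0.5 for persistent series,
    <0.5 for anti-persistent series (the pattern reported under RAC)."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        t = np.arange(s)
        msq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            coef = np.polyfit(t, seg, 1)       # local linear detrend
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(msq)))    # rms fluctuation at scale s
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return float(alpha)

rng = np.random.default_rng(2)
alpha_white = dfa(rng.standard_normal(4096), scales=[8, 16, 32, 64, 128])
```

Applied to a recorded stride-time series instead of synthetic noise, the same function estimates the scaling exponents the study correlates with long-term LDS.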

  20. Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus.

    Science.gov (United States)

    Bravo, Fernando; Cross, Ian; Hawkins, Sarah; Gonzalez, Nadia; Docampo, Jorge; Bruno, Claudio; Stamatakis, Emmanuel Andreas

    2017-07-28

    We frequently infer others' intentions based on non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance level in terms of positive/negative valence. Whilst consistent with previous studies which propose facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance elicited differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the AQ were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Compass cues used by a nocturnal bull ant, Myrmecia midas.

    Science.gov (United States)

    Freas, Cody A; Narendra, Ajay; Cheng, Ken

    2017-05-01

    Ants use both terrestrial landmarks and celestial cues to navigate to and from their nest location. These cues persist even as light levels drop during the twilight/night. Here, we determined the compass cues used by a nocturnal bull ant, Myrmecia midas, in which the majority of individuals begin foraging during the evening twilight period. Myrmecia midas foragers with vectors of ≤5 m when displaced to unfamiliar locations did not follow the home vector, but instead showed random heading directions. Foragers with larger home vectors (≥10 m) oriented towards the fictive nest, indicating a possible increase in cue strength with vector length. When the ants were displaced locally to create a conflict between the home direction indicated by the path integrator and terrestrial landmarks, foragers oriented using landmark information exclusively and ignored any accumulated home vector regardless of vector length. When the visual landmarks at the local displacement site were blocked, foragers were unable to orient to the nest direction and their heading directions were randomly distributed. Myrmecia midas ants typically nest at the base of the tree and some individuals forage on the same tree. Foragers collected on the nest tree during evening twilight were unable to orient towards the nest after small lateral displacements away from the nest. This suggests the possibility of high tree fidelity and an inability to extrapolate landmark compass cues from information collected on the tree and at the nest site to close displacement sites. © 2017. Published by The Company of Biologists Ltd.

  2. Local Mechanisms for Loud Sound-Enhanced Aminoglycoside Entry into Outer Hair Cells

    Directory of Open Access Journals (Sweden)

    Hongzhe eLi

    2015-04-01

    Full Text Available Loud sound exposure exacerbates aminoglycoside ototoxicity, increasing the risk of permanent hearing loss and degrading the quality of life in affected individuals. We previously reported that loud sound exposure induces temporary threshold shifts (TTS) and enhances uptake of aminoglycosides, like gentamicin, by cochlear outer hair cells (OHCs). Here, we explore mechanisms by which loud sound exposure and TTS could increase aminoglycoside uptake by OHCs that may underlie this form of ototoxic synergy. Mice were exposed to loud sound levels to induce TTS, and received fluorescently-tagged gentamicin (GTTR) for 30 minutes prior to fixation. The degree of TTS was assessed by comparing auditory brainstem responses before and after loud sound exposure. The number of tip links, which gate the GTTR-permeant mechanoelectrical transducer (MET) channels, was determined in OHC bundles, with or without exposure to loud sound, using scanning electron microscopy. We found wide-band noise (WBN) levels that induce TTS also enhance OHC uptake of GTTR compared to OHCs in control cochleae. In cochlear regions with TTS, the increase in OHC uptake of GTTR was significantly greater than in adjacent pillar cells. In control mice, we identified stereociliary tip links at ~50% of potential positions in OHC bundles. However, the number of OHC tip links was significantly reduced in mice that received WBN at levels capable of inducing TTS. These data suggest that GTTR uptake by OHCs during TTS occurs by increased permeation of surviving, mechanically-gated MET channels, and/or non-MET aminoglycoside-permeant channels activated following loud sound exposure. Loss of tip links would hyperpolarize hair cells and potentially increase drug uptake via aminoglycoside-permeant channels expressed by hair cells. The effect of TTS on aminoglycoside-permeant channel kinetics will shed new light on the mechanisms of loud sound-enhanced aminoglycoside uptake, and consequently on ototoxic synergy.

  3. Optimal Prediction of Moving Sound Source Direction in the Owl.

    Directory of Open Access Journals (Sweden)

    Weston Cox

    2015-07-01

    Full Text Available Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question about how neural coding supports Bayesian inference is how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
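The population vector decoder named in this record has a compact closed form: weight each neuron's preferred-direction unit vector by its firing rate and take the angle of the resultant. Below is a toy sketch with von Mises tuning curves; the `shift` parameter that mimics the reported shifting receptive fields is an illustrative assumption, not the owl data.

```python
import numpy as np

def population_vector(preferred_dirs, rates):
    """Population vector readout: sum each neuron's preferred-direction
    unit vector weighted by its firing rate; the resultant angle is the
    decoded direction."""
    vx = np.sum(rates * np.cos(preferred_dirs))
    vy = np.sum(rates * np.sin(preferred_dirs))
    return float(np.arctan2(vy, vx))

# 72 neurons with preferred azimuths tiling the circle, von Mises tuning.
prefs = np.linspace(-np.pi, np.pi, 72, endpoint=False)

def rates_for(stim_dir, shift=0.0):
    # `shift` is a hypothetical stand-in for shifting receptive fields:
    # the population responds as if the moving source were already ahead
    # of its current position, yielding a predictive read-out.
    return np.exp(np.cos(prefs - (stim_dir + shift)))

source_dir = 0.4                               # radians
decoded_now = population_vector(prefs, rates_for(source_dir))
decoded_ahead = population_vector(prefs, rates_for(source_dir, shift=0.2))
```

With symmetric tuning uniformly tiling the circle, the read-out returns the stimulus direction; adding the receptive-field shift makes the same vector read-out report the predicted upcoming position rather than the current one, which is the paper's core idea.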

  4. Great cormorants (Phalacrocorax carbo) can detect auditory cues while diving

    DEFF Research Database (Denmark)

    Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula

    2017-01-01

    In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater than expected.

  5. The natural horn as an efficient sound radiating system ...

    African Journals Online (AJOL)

    Results obtained showed that the locally made horns are efficient sound radiating systems and are therefore excellent for sound production in local musical renditions. These findings, in addition to the portability and low cost of the horns, qualify them to be highly recommended for use in music making and for other purposes ...

  6. Binaural hearing in children using Gaussian enveloped and transposed tones.

    Science.gov (United States)

    Ehlers, Erica; Kan, Alan; Winn, Matthew B; Stoelb, Corey; Litovsky, Ruth Y

    2016-04-01

    Children who use bilateral cochlear implants (BiCIs) show significantly poorer sound localization skills than their normal hearing (NH) peers. This difference has been attributed, in part, to the fact that cochlear implants (CIs) do not faithfully transmit interaural time differences (ITDs) and interaural level differences (ILDs), which are known to be important cues for sound localization. Interestingly, little is known about binaural sensitivity in NH children, in particular, with stimuli that constrain acoustic cues in a manner representative of CI processing. In order to better understand and evaluate binaural hearing in children with BiCIs, the authors first undertook a study on binaural sensitivity in NH children ages 8-10, and in adults. Experiments evaluated sound discrimination and lateralization using ITD and ILD cues, for stimuli with robust envelope cues, but poor representation of temporal fine structure. Stimuli were spondaic words, Gaussian-enveloped tone pulse trains (100 pulse-per-second), and transposed tones. Results showed that discrimination thresholds in children were adult-like (15-389 μs for ITDs and 0.5-6.0 dB for ILDs). However, lateralization based on the same binaural cues showed higher variability than seen in adults. Results are discussed in the context of factors that may be responsible for poor representation of binaural cues in bilaterally implanted children.
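
    As a concrete illustration of the ITD cue these experiments manipulate, the toy estimator below recovers the interaural delay as the lag that maximizes the cross-correlation of the two ear signals. All signals and parameters here are invented for the example; real binaural models operate on filterbank outputs with much finer temporal resolution.

```python
import math

def estimate_itd(left, right, fs):
    """Return the ITD in seconds: the lag (in samples) maximizing the
    cross-correlation of the two ear signals, divided by the sample rate."""
    n = len(left)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-n // 4, n // 4 + 1):
        c = sum(left[i] * right[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag / fs

fs = 44100
delay = 13  # samples, i.e. ~295 us: the right-ear signal arrives later
left = [math.sin(2 * math.pi * 500 * t / fs) * math.exp(-t / 50.0)
        for t in range(400)]
right = [0.0] * delay + left[:-delay]  # delayed copy of the left-ear signal
print(round(estimate_itd(left, right, fs) * 1e6))  # ~295 (microseconds)
```

    The thresholds reported above (tens to hundreds of microseconds) correspond to fractions of a single sample at this rate, which is why practical estimators interpolate between lags rather than stopping at integer samples.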

  7. Listenmee and Listenmee smartphone application: synchronizing walking to rhythmic auditory cues to improve gait in Parkinson's disease.

    Science.gov (United States)

    Lopez, William Omar Contreras; Higuera, Carlos Andres Escalante; Fonoff, Erich Talamoni; Souza, Carolina de Oliveira; Albicker, Ulrich; Martinez, Jairo Alberto Espinoza

    2014-10-01

    Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use, and to our knowledge, none utilize a smartphone application allowing individualized sounds and cadence. Therefore, we analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, offering over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch with an accelerometer to detect the magnitude and direction of acceleration and to track calorie count, sleep patterns, step count and daily distances. The present study included patients with idiopathic PD who presented gait disturbances including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance on three major dependent variables: walking speed by 38.1%, cadence by 28.1% and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools to navigate our surrounding world, and the current findings showing that affective sounds could influence visual attention provide evidence that we make use of affective information during perceptual processing.

  9. Zebra finches can use positional and transitional cues to distinguish vocal element strings.

    Science.gov (United States)

    Chen, Jiani; Ten Cate, Carel

    2015-08-01

    Learning sequences is of great importance to humans and non-human animals. Many motor and mental actions, such as singing in birds and speech processing in humans, rely on sequential learning. At least two mechanisms are considered to be involved in such learning. The chaining theory proposes that learning of sequences relies on memorizing the transitions between adjacent items, while the positional theory suggests that learners encode the items according to their ordinal position in the sequence. Positional learning is assumed to dominate sequential learning. However, human infants exposed to a string of speech sounds can learn transitional (chaining) cues. So far, it is not clear whether birds, an increasingly important model for examining vocal processing, can do this. In this study we use a Go-Nogo design to examine whether zebra finches can use transitional cues to distinguish artificially constructed strings of song elements. Zebra finches were trained with sequences differing in transitional and positional information and next tested with novel strings sharing positional and transitional similarities with the training strings. The results show that they can attend to both transitional and positional cues and that their sequential coding strategies can be biased toward transitional cues depending on the learning context. This article is part of a Special Issue entitled: In Honor of Jerry Hogan. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Working memory load and the retro-cue effect: A diffusion model account.

    Science.gov (United States)

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
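
    The diffusion-model decomposition used in this study can be illustrated with a minimal simulation: evidence drifts noisily toward one of two bounds, and response time is the decision time plus a nondecision component t0. The parameter values below are invented solely to mimic the paper's qualitative claim that retro-cues raise evidence quality (drift rate) and shorten nondecision time; they are not the fitted estimates.

```python
import random

def diffusion_trial(drift, bound, t0, dt=0.001, noise=1.0, rng=random):
    """One drift-diffusion trial: evidence accumulates from 0 until it hits
    +bound or -bound; the response time is decision time plus t0."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return ("change" if x > 0 else "same", t + t0)

rng = random.Random(1)
# Assumed effect of the retro-cue: higher drift (better evidence quality)
# and shorter nondecision time t0 than the uncued condition.
cued = [diffusion_trial(2.5, 1.0, 0.25, rng=rng)[1] for _ in range(200)]
uncued = [diffusion_trial(1.0, 1.0, 0.35, rng=rng)[1] for _ in range(200)]
print(sum(cued) / 200 < sum(uncued) / 200)  # cued trials are faster on average
```

    Fitting such a model to real change-detection data is what lets the authors attribute the retro-cue benefit to specific processing stages rather than to overall speed.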

  11. How male sound pressure level influences phonotaxis in virgin female Jamaican field crickets (Gryllus assimilis)

    Directory of Open Access Journals (Sweden)

    Karen Pacheco

    2014-06-01

    Full Text Available Understanding female mate preference is important for determining the strength and direction of sexual trait evolution. The sound pressure level (SPL) acoustic signalers use is often an important predictor of mating success because higher sound pressure levels are detectable at greater distances. If females are more attracted to signals produced at higher sound pressure levels, then the potential fitness impacts of signalling at higher sound pressure levels should be elevated beyond what would be expected from detection distance alone. Here we manipulated the sound pressure level of cricket mate attraction signals to determine how female phonotaxis was influenced. We examined female phonotaxis using two common experimental methods: spherical treadmills and open arenas. Both methods showed similar results, with females exhibiting greatest phonotaxis towards loud sound pressure levels relative to the standard signal (69 vs. 60 dB SPL) but showing reduced phonotaxis towards very loud sound pressure level signals relative to the standard (77 vs. 60 dB SPL). Reduced female phonotaxis towards supernormal stimuli may signify an acoustic startle response, an absence of other required sensory cues, or perceived increases in predation risk.
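
    For readers unfamiliar with the decibel scale used here: sound pressure level is 20·log10 of the RMS pressure over a 20 μPa reference, so the 9 dB step from the 60 dB to the 69 dB SPL treatment corresponds to roughly a 2.8-fold pressure increase. A minimal sketch:

```python
import math

P_REF = 20e-6  # standard reference pressure in air: 20 micropascals

def spl_db(p_rms):
    """Sound pressure level in dB SPL re 20 uPa."""
    return 20 * math.log10(p_rms / P_REF)

def pressure_ratio(db_diff):
    """Pressure ratio corresponding to a level difference in dB."""
    return 10 ** (db_diff / 20)

# The 69 vs. 60 dB SPL signals differ by a factor of ~2.8 in pressure.
print(round(pressure_ratio(69 - 60), 2))  # 2.82
```
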

  12. Dynamic Mechanical and Nanofibrous Topological Combinatory Cues Designed for Periodontal Ligament Engineering.

    Science.gov (United States)

    Kim, Joong-Hyun; Kang, Min Sil; Eltohamy, Mohamed; Kim, Tae-Hyun; Kim, Hae-Won

    2016-01-01

    Complete reconstruction of damaged periodontal pockets, particularly regeneration of the periodontal ligament (PDL), has been a significant challenge in dentistry. A tissue engineering approach utilizing PDL stem cells and scaffolding matrices offers a great opportunity here, and applying physical and mechanical cues mimicking native tissue conditions is of special importance. Here we approach periodontal tissue regeneration by engineering PDL cells supported on a nanofibrous scaffold under a mechanically stressed condition. PDL stem cells isolated from rats were seeded on an electrospun polycaprolactone/gelatin directionally-oriented nanofiber membrane, and dynamic mechanical stress was applied to the cell/nanofiber construct, providing combined nanotopological and mechanical cues. Cells recognized the nanofiber orientation, aligning in parallel, and the mechanical stress increased the cell alignment. Importantly, the cells cultured on the oriented nanofibers under mechanical stress expressed significantly elevated PDL-specific markers, including periostin and tenascin, with simultaneous down-regulation of osteogenesis, demonstrating the roles of topological and mechanical cues in altering the phenotype of PDL cells. Tissue compatibility of the tissue-engineered constructs was confirmed in rat subcutaneous sites. Furthermore, in vivo regeneration of PDL and alveolar bone tissues was examined in a rat premaxillary periodontal defect model. The cell/nanofiber constructs engineered under mechanical stress showed sound integration into tissue defects, and the regenerated bone volume and area were significantly improved. This study provides an effective tissue engineering approach for periodontal regeneration: culturing PDL stem cells with combinatory cues of oriented nanotopology and dynamic mechanical stretch.

  13. Post-cueing deficits with maintained cueing benefits in patients with Parkinson's disease dementia

    Directory of Open Access Journals (Sweden)

    Susanne eGräber

    2014-11-01

    Full Text Available In Parkinson’s disease (PD), internal cueing mechanisms are impaired, leading to symptoms such as hypokinesia. However, external cues can improve movement execution by using cortical resources. These cortical processes can be affected by cognitive decline in dementia. It is still unclear how dementia in PD influences external cueing. We investigated a group of 25 PD patients with dementia (PDD) and 25 non-demented PD patients (PDnD), matched by age, sex and disease duration, in a simple reaction time (SRT) task using an additional acoustic cue. PDD patients benefited from the additional cue to a similar magnitude as did PDnD patients. However, withdrawal of the cue led to a significantly increased reaction time in the PDD group compared to the PDnD patients. Our results indicate that even PDD patients can benefit from strategies using external cue presentation, but the process of cognitive worsening can reduce the effect when cues are withdrawn.

  14. Cue-reactors: individual differences in cue-induced craving after food or smoking abstinence.

    Directory of Open Access Journals (Sweden)

    Stephen V Mahler

    Full Text Available BACKGROUND: Pavlovian conditioning plays a critical role in both drug addiction and binge eating. Recent animal research suggests that certain individuals are highly sensitive to conditioned cues, whether they signal food or drugs. Are certain humans also more reactive to both food and drug cues? METHODS: We examined cue-induced craving for both cigarettes and food, in the same individuals (n = 15 adult smokers). Subjects viewed smoking-related or food-related images after abstaining from either smoking or eating. RESULTS: Certain individuals reported strong cue-induced craving after both smoking and food cues. That is, subjects who reported strong cue-induced craving for cigarettes also rated stronger cue-induced food craving. CONCLUSIONS: In humans, like in nonhumans, there may be a "cue-reactive" phenotype, consisting of individuals who are highly sensitive to conditioned stimuli. This finding extends recent reports from nonhuman studies. Further understanding this subgroup of smokers may allow clinicians to individually tailor therapies for smoking cessation.

  15. Cue-reactors: individual differences in cue-induced craving after food or smoking abstinence.

    Science.gov (United States)

    Mahler, Stephen V; de Wit, Harriet

    2010-11-10

    Pavlovian conditioning plays a critical role in both drug addiction and binge eating. Recent animal research suggests that certain individuals are highly sensitive to conditioned cues, whether they signal food or drugs. Are certain humans also more reactive to both food and drug cues? We examined cue-induced craving for both cigarettes and food, in the same individuals (n = 15 adult smokers). Subjects viewed smoking-related or food-related images after abstaining from either smoking or eating. Certain individuals reported strong cue-induced craving after both smoking and food cues. That is, subjects who reported strong cue-induced craving for cigarettes also rated stronger cue-induced food craving. In humans, like in nonhumans, there may be a "cue-reactive" phenotype, consisting of individuals who are highly sensitive to conditioned stimuli. This finding extends recent reports from nonhuman studies. Further understanding this subgroup of smokers may allow clinicians to individually tailor therapies for smoking cessation.

  16. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...
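
    The size-versus-wavelength constraint described in this chapter summary is easy to quantify: in air, wavelength is the speed of sound divided by frequency, and efficient radiators must be comparable in size to the wavelength. A small sketch (the songbird beak size is an illustrative assumption, not a figure from the chapter):

```python
def wavelength_m(frequency_hz, c=343.0):
    """Wavelength of sound in air (speed of sound c ~ 343 m/s at 20 C)."""
    return c / frequency_hz

# A 1 kHz tone has a ~34 cm wavelength, far larger than a songbird's
# roughly centimeter-scale beak, so such a small emitter radiates
# that frequency very inefficiently.
print(round(wavelength_m(1000), 3))  # 0.343
```
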

  17. Reproduction of nearby sources by imposing true interaural differences on a sound field control approach

    DEFF Research Database (Denmark)

    Badajoz, Javier; Chang, Ji-ho; Agerkvist, Finn T.

    2015-01-01

    In anechoic conditions, the Interaural Level Difference (ILD) is the most significant auditory cue to judge the distance to a sound source located within 1 m of the listener's head. This is due to the unique characteristics of a point source in its near field, which result in exceptionally high, distance dependent ILDs. When reproducing the sound field of sources located near the head with line or circular arrays of loudspeakers, the reproduced ILDs are generally lower than expected, due to physical limitations. This study presents an approach that combines a sound field reproduction method, known as Pressure Matching (PM), and a binaural control technique. While PM aims at reproducing the incident sound field, the objective of the binaural control technique is to ensure a correct reproduction of interaural differences. The combination of these two approaches gives rise to the following features: (i...

  18. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog ) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum ). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive-that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  19. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    Directory of Open Access Journals (Sweden)

    Ignacio Spiousas

    2017-06-01

    Full Text Available Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on
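
    The overall-intensity cue that these listeners appear to rely on follows, in the free field, the inverse-square law: level drops about 6 dB per doubling of source distance. The sketch below illustrates that baseline relationship only; it deliberately ignores the room reflections and modal resonances that the study shows can make the intensity-distance relationship non-monotonic.

```python
import math

def level_drop_db(d1, d2):
    """Free-field level change for a point source moving from distance d1
    to d2: 6 dB of attenuation per doubling of distance."""
    return -20 * math.log10(d2 / d1)

# Doubling distance from 1 m to 2 m lowers the level by ~6 dB;
# going from 1 m to 6 m (the study's range) lowers it by ~15.6 dB.
print(round(level_drop_db(1, 2), 1))   # -6.0
print(round(level_drop_db(1, 6), 1))   # -15.6
```
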

  20. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive up-to-date collection of contributions, covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  1. Retrieval-induced forgetting and interference between cues: Training a cue-outcome association attenuates retrieval by alternative cues

    OpenAIRE

    Ortega-Castro, Nerea; Vadillo Nistal, Miguel

    2013-01-01

    Some researchers have attempted to determine whether situations in which a single cue is paired with several outcomes (A-B, A-C interference or interference between outcomes) involve the same learning and retrieval mechanisms as situations in which several cues are paired with a single outcome (A-B, C-B interference or interference between cues). Interestingly, current research on a related effect, which is known as retrieval-induced forgetting, can illuminate this debate. Most retrieval-indu...

  2. The Good, The Bad, and The Distant: Soundscape Cues for Larval Fish.

    Science.gov (United States)

    Piercy, Julius J B; Smith, David J; Codling, Edward A; Hill, Adam J; Simpson, Stephen D

    2016-01-01

    Coral reef noise is an important navigation cue for settling reef fish larvae and can thus potentially affect reef population dynamics. Recent evidence has shown that fish are able to discriminate between the soundscapes of different types of habitat (e.g., mangrove and reef). In this study, we investigated whether discernible acoustic differences were present between sites within the same coral reef system. Differences in sound intensity and transient content were found between sites, but site-dependent temporal variation was also present. We discuss the implications of these findings for settling fish larvae.

  3. Mobile phone conversations, listening to music and quiet (electric) cars : are traffic sounds important for safe cycling?

    NARCIS (Netherlands)

    Stelling-Konczak, A.; Wee, G.P. van; Commandeur, J.J.F. & Hagenzieker, M.P.

    2017-01-01

    Listening to music or talking on the phone while cycling as well as the growing number of quiet (electric) cars on the road can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling.

  5. Cue-induced craving in patients with cocaine use disorder predicts cognitive control deficits toward cocaine cues.

    Science.gov (United States)

    DiGirolamo, Gregory J; Smelson, David; Guevremont, Nathan

    2015-08-01

    Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. Hence, it may be useful identifying high cravers and target treatment toward curbing craving to decrease the likelihood of a subsequent breakdown in control. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

    Directory of Open Access Journals (Sweden)

    Jeremy eMarozeau

    2013-11-01

    Full Text Available Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.

  7. Biophysics of directional hearing in the American alligator (Alligator mississippiensis)

    DEFF Research Database (Denmark)

    Bierman, Hilary S; Thornton, Jennifer L; Jones, Heath G

    2014-01-01

    Physiological and anatomical studies have suggested that alligators have unique adaptations for spatial hearing. Sound localization cues are primarily generated by the filtering of sound waves by the head. Different vertebrate lineages have evolved external and/or internal anatomical adaptations ...... in the extinct dinosaurs....

  8. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  9. Cue-induced craving among inhalant users: Development and preliminary validation of a visual cue paradigm.

    Science.gov (United States)

    Jain, Shobhit; Dhawan, Anju; Kumaran, S Senthil; Pattanayak, Raman Deep; Jain, Raka

    2017-12-01

    Cue-induced craving is known to be associated with a higher risk of relapse, wherein drug-specific cues become conditioned stimuli, eliciting conditioned responses. Cue-reactivity paradigms are important tools for studying psychological responses and functional neuroimaging changes. However, to date there has been no specific study or validated paradigm for research on inhalant cue-induced craving. This study aimed to develop and validate a visual cue stimulus for inhalant cue-associated craving. The first step (picture selection) involved screening and careful selection of 30 cue and 30 neutral pictures based on their relevance to naturalistic settings. In the second step (time optimization), a random selection of ten cue pictures each was presented for 4 s, 6 s, and 8 s to seven adolescent male inhalant users, and pre-post craving responses were compared using a Visual Analogue Scale (VAS) for each picture and duration. In the third step (validation), craving responses to each of the 30 cue and 30 neutral pictures were analysed among 20 adolescent inhalant users. Findings revealed a significant difference between before and after craving responses for the cue pictures, but not the neutral pictures. Using an ROC curve, pictures were ranked by craving intensity. Finally, the 20 best cue and 20 neutral pictures were used to develop a 480-s visual cue paradigm. This is the first study to systematically develop an inhalant cue picture paradigm, which can be used as a tool to examine cue-induced craving in neurobiological studies. Further research, including validation in larger and more diverse samples, is required. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The effects of intervening interference on working memory for sound location as a function of inter-comparison interval.

    Science.gov (United States)

    Ries, Dennis T; Hamilton, Traci R; Grossmann, Aurora J

    2010-09-01

    This study examined the effects of inter-comparison interval duration and intervening interference on auditory working memory (AWM) for auditory location. Interaural phase differences were used to produce localization cues for tonal stimuli, and the difference limen for interaural phase difference (DL-IPD), specified as the equivalent angle of incidence between two sound sources, was measured in five different conditions. These conditions comprised three different inter-comparison intervals [300 ms (short), 5000 ms (medium), and 15,000 ms (long)], the medium and long of which were presented both in the presence and absence of intervening tones. The presence of intervening stimuli within the medium and long inter-comparison intervals produced a significant increase in the DL-IPD compared to the medium and long inter-comparison interval conditions without intervening tones. The result obtained in the condition with a short inter-comparison interval was roughly equivalent to that obtained for the medium inter-comparison interval without intervening tones. These results suggest that the ability to retain information about the location of a sound within AWM decays slowly; however, the presence of intervening sounds readily disrupts the retention process. Overall, the results suggest that the temporal decay of information within AWM regarding the location of a sound in a listener's environment is so gradual that it can be maintained in trace memory for tens of seconds in the absence of intervening acoustic signals. Conversely, the presence of intervening sounds within the retention interval may facilitate the use of context memory, even for shorter retention intervals, resulting in a less detailed, but relevant, representation of the location that is resistant to further degradation. Copyright (c) 2010 Elsevier B.V. All rights reserved.
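
    The study's key manipulation, interaural phase difference (IPD) as a localization cue expressed as an equivalent angle of incidence, can be sketched with a toy conversion. This is a minimal illustration assuming a simple spherical-head, low-frequency sine-law model with an assumed head radius; it is not the stimulus-generation code used in the study.

```python
import math

def ipd_to_azimuth(ipd_rad, freq_hz, head_radius_m=0.0875, c=343.0):
    """Convert an interaural phase difference (radians) at a given tone
    frequency into an equivalent azimuth angle (degrees), using the
    low-frequency approximation ITD ~= (2r/c) * sin(azimuth)."""
    itd_s = ipd_rad / (2.0 * math.pi * freq_hz)   # phase -> time difference
    s = itd_s * c / (2.0 * head_radius_m)         # sin(azimuth)
    s = max(-1.0, min(1.0, s))                    # clamp numerical overshoot
    return math.degrees(math.asin(s))
```

    For example, at 500 Hz an IPD of 0.5 rad corresponds to roughly an 18 degree angle of incidence under these assumptions, which is the sense in which a DL-IPD can be reported as an equivalent angle between two sources.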

  11. Contribution of self-motion perception to acoustic target localization.

    Science.gov (United States)

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

    The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of sound in a head-related frame, and on the sensory systems that perceive head and body movement, chiefly the vestibular system. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target that was rotated horizontally while the body was kept stationary, or that was maintained stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions as the frequency was varied. During acoustic target rotation there was a reduction in the gain and an increase in the phase lag, while during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.

  12. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to popularize Gravitational Sound in the form of a ring tone.
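
    The idea of a local sparsity measure, a short list of per-segment values tracking how concentrated a signal's energy is over time, can be sketched with a normalized l1/l2 ratio computed over non-overlapping windows. The windowing scheme and the particular ratio used here are illustrative assumptions, not the procedure advanced in the paper.

```python
import numpy as np

def local_sparsity(signal, win):
    """Return one value per non-overlapping window of `signal`:
    the normalized l1/l2 ratio, which is 1.0 for a flat segment and
    falls to 1/sqrt(win) when all energy sits in a single sample."""
    values = []
    for start in range(0, len(signal) - win + 1, win):
        seg = np.abs(np.asarray(signal[start:start + win], dtype=float))
        l2 = np.linalg.norm(seg)
        values.append(float(seg.sum() / (np.sqrt(win) * l2)) if l2 > 0 else 0.0)
    return values
```

    An impulse-like window scores lower (sparser) than a constant one, so a long signal is summarized by a handful of values far smaller in number than its dimension.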

  13. Propagation of Sound in a Bose-Einstein Condensate

    International Nuclear Information System (INIS)

    Andrews, M.R.; Kurn, D.M.; Miesner, H.; Durfee, D.S.; Townsend, C.G.; Inouye, S.; Ketterle, W.

    1997-01-01

    Sound propagation has been studied in a magnetically trapped dilute Bose-Einstein condensate. Localized excitations were induced by suddenly modifying the trapping potential using the optical dipole force of a focused laser beam. The resulting propagation of sound was observed using a novel technique, rapid sequencing of nondestructive phase-contrast images. The speed of sound was determined as a function of density and found to be consistent with Bogoliubov theory. This method may generally be used to observe high-lying modes and perhaps second sound. copyright 1997 The American Physical Society

  14. The Influence of Cue Reliability and Cue Representation on Spatial Reorientation in Young Children

    Science.gov (United States)

    Lyons, Ian M.; Huttenlocher, Janellen; Ratliff, Kristin R.

    2014-01-01

    Previous studies of children's reorientation have focused on cue representation (e.g., whether cues are geometric) as a predictor of performance but have not addressed cue reliability (the regularity of the relation between a given cue and an outcome) as a predictor of performance. Here we address both factors within the same series of…

  15. Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks.

    Science.gov (United States)

    Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2012-10-25

    In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.

  16. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the experimentally identified properties of spatial auditory neurons follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The tuning curves obtained match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
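
    The first-layer separation of amplitude and phase can be approximated, for illustration, with a plain Fourier analysis of the two ear signals: the sketch below computes per-frequency interaural level and phase differences, standing in for the learned complex-valued basis functions of the model. The signals here are synthetic, not the natural recordings used in the study.

```python
import numpy as np

def binaural_features(left, right, eps=1e-12):
    """Per-frequency interaural level difference (dB) and interaural
    phase difference (radians) between left- and right-ear signals."""
    L = np.fft.rfft(left)
    R = np.fft.rfft(right)
    ild_db = 20.0 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
    ipd = np.angle(L * np.conj(R))  # phase(L) - phase(R), wrapped to (-pi, pi]
    return ild_db, ipd

rng = np.random.default_rng(0)
left = rng.standard_normal(1024)
right = 0.5 * left                  # right ear attenuated, no delay
ild_db, ipd = binaural_features(left, right)
```

    A pure attenuation shows up as a constant interaural level difference of about 6 dB with zero phase difference, while a pure delay would instead produce a phase difference growing linearly with frequency; these are the spatial cues the model's second layer jointly encodes.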

  17. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization ('where') and sound object recognition ('what') have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better-titrated 'what' and 'where' task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the 'where' and 'what' tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the 'where' and 'what' tasks, with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude, as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  18. Suppressive competition: how sounds may cheat sight.

    Science.gov (United States)

    Kayser, Christoph; Remedios, Ryan

    2012-02-23

    In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visually driven behavior. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Recycling Sounds in Commercials

    DEFF Research Database (Denmark)

    Larsen, Charlotte Rørdam

    2012-01-01

    Commercials offer the opportunity for intergenerational memory and impinge on cultural memory. TV commercials for foodstuffs often make reference to past times as a way of authenticating products. This is frequently achieved using visual cues, but in this paper I would like to demonstrate how...... such references to the past and ‘the good old days’ can be achieved through sounds. In particular, I will look at commercials for Danish non-dairy spreads, especially for OMA margarine. These commercials are notable in that they contain a melody and a slogan – ‘Say the name: OMA margarine’ – that have basically...... remained the same for 70 years. Together these identifiers make OMA an interesting Danish case to study. With reference to Ann Rigney’s memorial practices or mechanisms, the study aims to demonstrate how the auditory aspects of Danish margarine commercials for frying tend to be limited in variety...

  20. Perception of Animacy from the Motion of a Single Sound Object.

    Science.gov (United States)

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-02-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.

  1. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    Science.gov (United States)

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can also induce transient improvements in gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds representing different spatio-temporal parameters of gait, such as step duration and step length. The second involves a novel method of sonifying, in real time, the swing phase of gait, using motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported that show positive results for both techniques in reducing step-length variability. Potential future directions for how these sound-based approaches can be used to manage gait disturbances in Parkinson's are also discussed.

  2. Local field potential correlates of auditory working memory in primate dorsal temporal pole.

    Science.gov (United States)

    Bigelow, James; Ng, Chi-Wing; Poremba, Amy

    2016-06-01

    Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest that this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components was observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed between the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility that some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special

  3. Inhibition of histone deacetylase 3 via RGFP966 facilitates cortical plasticity underlying unusually accurate auditory associative cue memory for excitatory and inhibitory cue-reward associations.

    Science.gov (United States)

    Shang, Andrea; Bylipudi, Sooraz; Bieszczad, Kasia M

    2018-05-31

    Epigenetic mechanisms are key to regulating long-term memory (LTM) and are known to exert control over memory formation in multiple systems of the adult brain, including the sensory cortex. One epigenetic mechanism is chromatin modification by histone acetylation. Blocking the action of histone deacetylases (HDACs), which normally negatively regulate LTM by repressing transcription, has been shown to enable memory formation. Indeed, HDAC inhibition appears to facilitate memory by altering the dynamics of gene expression events important for memory consolidation. However, less understood are the ways in which molecular-level consolidation processes alter subsequent memory to enhance storage or facilitate retrieval. Here we used a sensory perspective to investigate whether the characteristics of memory formed with HDAC inhibitors differ from those of naturally formed memory. One possibility is that HDAC inhibition enables memory to form with greater sensory detail than normal. Because the auditory system undergoes learning-induced remodeling that provides substrates for sound-specific LTM, we aimed to identify behavioral effects of HDAC inhibition on memory for specific sound features using a standard model of auditory associative cue-reward learning, memory, and cortical plasticity. We found that three systemic post-training treatments of an HDAC3 inhibitor (RGFP966, Abcam Inc.) given to rats in the early phase of training facilitated auditory discriminative learning, changed auditory cortical tuning, and increased the specificity for acoustic frequency formed in memory of both excitatory (S+) and inhibitory (S-) associations for at least 2 weeks. These findings support the idea that epigenetic mechanisms act on neural and behavioral sensory acuity to increase the precision of associative cue memory, which can be revealed by studying the sensory characteristics of long-term associative memory formation with HDAC inhibitors. Published by Elsevier B.V.

  4. Localização sonora em usuários de aparelhos de amplificação sonora individual Sound localization by hearing aid users

    Directory of Open Access Journals (Sweden)

    Paula Cristina Rodrigues

    2010-06-01

    Full Text Available PURPOSE: to compare the performance of users of behind-the-ear and in-the-canal hearing aids on a sound-source localization test with that of normal-hearing listeners, in the horizontal and median sagittal planes, for frequencies of 500, 2,000 and 4,500 Hz; and to correlate correct responses on the localization test with duration of hearing aid use. METHODS: eight normal-hearing listeners and 20 hearing aid users were tested, the latter divided into two groups: 10 users of in-the-canal hearing aids and 10 users of behind-the-ear hearing aids. All were given a sound-source localization test in which three types of square waves, with fundamental frequencies of 0.5 kHz, 2 kHz and 4.5 kHz, were presented randomly at 70 dBA. RESULTS: mean rates of correct localization were 78.4%, 72.2% and 72.9% for normal-hearing listeners at 0.5 kHz, 2 kHz and 4.5 kHz, respectively, and 40.1%, 39.4% and 41.7% for hearing aid users. By device type, in-the-canal users correctly identified the origin of the sound source 47.2% of the time and behind-the-ear users 37.4% of the time. No correlation was observed between the percentage of correct responses on the localization test and duration of hearing aid use. CONCLUSION: normal-hearing listeners localize sound sources more efficiently than hearing aid users and, among users, those with in-the-canal devices performed better. In addition, duration of use did not affect the ability to localize the origin of sound sources.

  5. Effects of self-relevant cues and cue valence on autobiographical memory specificity in dysphoria.

    Science.gov (United States)

    Matsumoto, Noboru; Mochizuki, Satoshi

    2017-04-01

    Reduced autobiographical memory specificity (rAMS) is a characteristic memory bias observed in depression. To corroborate the capture hypothesis in the CaRFAX (capture and rumination, functional avoidance, executive capacity and control) model, we investigated the effects of self-relevant cues and cue valence on rAMS using an adapted Autobiographical Memory Test conducted with a nonclinical population. Hierarchical linear modelling indicated that the main effects of depression and self-relevant cues elicited rAMS. Moreover, the three-way interaction among valence, self-relevance, and depression scores was significant. A simple slope test revealed that dysphoric participants experienced rAMS in response to highly self-relevant positive cues and low self-relevant negative cues. These results partially supported the capture hypothesis in nonclinical dysphoria. It is important to consider cue valence in future studies examining the capture hypothesis.

  6. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536
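
    The reported gain effects can be visualized with a toy rate-level function: a sigmoid mapping sound level to firing rate, plus a high-threshold multiplicative boost standing in for CaL. The parameter values and the threshold rule are invented for illustration and are not fitted to the recordings.

```python
import math

def rate_level(level_db, rmax=100.0, midpoint=50.0, slope=0.15):
    """Toy monotonic rate-level function: firing rate (spikes/s)
    as a sigmoid function of sound level (dB SPL)."""
    return rmax / (1.0 + math.exp(-slope * (level_db - midpoint)))

def with_cal_boost(level_db, boost=0.3, threshold_db=60.0, **kw):
    """Sketch of a CaL-like effect: multiply the firing rate by
    (1 + boost) only at levels that already evoke high firing."""
    base = rate_level(level_db, **kw)
    return base * (1.0 + boost) if level_db >= threshold_db else base
```

    The boost leaves low-level responses untouched but raises the maximum rate achieved at high levels, which is the qualitative signature described for monotonic RLFs.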

  7. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    AFRL-AFOSR-VA-TR-2016-0298: Binaural Processing of Multiple Sound Sources. William Yost, Arizona State University, 660 S Mill Ave Ste 312, Tempe, AZ 85281. Final performance report; dates covered: 15 Jul 2012 to 14 Jul 2016. Abstract fragment: "...three topics cited above are entirely within the scope of the AFOSR grant." Subject terms: binaural hearing, sound localization, interaural signal

  8. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

    Full Text Available Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants ages 10-14 diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor

  9. Multiple reward-cue contingencies favor expectancy over uncertainty in shaping the reward-cue attentional salience.

    Science.gov (United States)

    De Tommaso, Matteo; Mastropasqua, Tommaso; Turatto, Massimo

    2018-01-25

    Reward-predicting cues attract attention because of their motivational value. A debated question concerns the conditions under which the cue's attentional salience is governed more by reward expectancy than by reward uncertainty. To help shed light on this issue, we manipulated expectancy and uncertainty here using three levels of reward-cue contingency, so that, for example, a high level of reward expectancy (p = .8) was compared with the highest level of reward uncertainty (p = .5). In Experiment 1, the best reward-cue during conditioning was preferentially attended in a subsequent visual search task. This result was replicated in Experiment 2, in which the cues were matched in terms of response history. In Experiment 3, we implemented a hybrid procedure consisting of two phases: an omission contingency procedure during conditioning, followed by a visual search task as in the previous experiments. Crucially, during both phases, the reward-cues were never task-relevant. Results confirmed that, when multiple reward-cue contingencies are explored by a human observer, expectancy is the major factor controlling both the attentional and the oculomotor salience of the reward-cue.

  10. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. In several audio and audio-video tests we compared Foley and real sounds originating from an identical action. The main purpose was to evaluate if sound effects...

  11. The detection of 'virtual' objects using echoes by humans: Spectral cues.

    Science.gov (United States)

    Rowan, Daniel; Papadopoulos, Timos; Archer, Lauren; Goodhew, Amanda; Cozens, Hayley; Lopez, Ricardo Guzman; Edwards, David; Holmes, Hannah; Allen, Robert

    2017-07-01

    Some blind people use echoes to detect discrete, silent objects to support their spatial orientation/navigation, independence, safety and wellbeing. The acoustical features that people use for this are not well understood. Listening to changes in spectral shape due to the presence of an object could be important for object detection and avoidance, especially at short range, although it is currently not known whether it is possible with echolocation-related sounds. Bands of noise were convolved with recordings of binaural impulse responses of objects in an anechoic chamber to create 'virtual objects', which were analysed and played to sighted and blind listeners inexperienced in echolocation. The sounds were also manipulated to remove cues unrelated to spectral shape. Most listeners could accurately detect hard flat objects using changes in spectral shape. The useful spectral changes for object detection occurred above approximately 3 kHz, as with object localisation. However, energy in the sounds below 3 kHz was required to exploit changes in spectral shape for object detection, whereas energy below 3 kHz impaired object localisation. Further recordings showed that the spectral changes were diminished by room reverberation. While good high-frequency hearing is generally important for echolocation, the optimal echo-generating stimulus will probably depend on the task. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
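
    The 'virtual object' stimuli described above were created by convolving bands of noise with binaural impulse responses. The sketch below illustrates that rendering step; the sample rate, filter band, and the synthetic impulse response are placeholders standing in for the study's anechoic-chamber measurements:

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt, fftconvolve

    fs = 44100                       # sample rate in Hz (assumed)
    rng = np.random.default_rng(0)

    # 500-ms noise burst band-limited to 3-16 kHz, the region where the
    # useful spectral changes for object detection were reported to occur.
    noise = rng.standard_normal(fs // 2)
    sos = butter(4, [3000, 16000], btype="bandpass", fs=fs, output="sos")
    band_noise = sosfilt(sos, noise)

    # Placeholder for a measured binaural (L/R) impulse response of an
    # object; here a decaying random sequence, not real data.
    bir = rng.standard_normal((2, 256)) * np.exp(-np.arange(256) / 32.0)

    def virtual_object(sig, ir):
        """Render a 'virtual object' by convolving the signal with each ear's IR."""
        return np.stack([fftconvolve(sig, ir[ch]) for ch in range(2)])

    stimulus = virtual_object(band_noise, bir)   # shape: (2, len(sig) + 255)
    ```

    Removing the cues unrelated to spectral shape (overall level and timing), as the study did, would require additional normalisation steps not shown here.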

  12. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  13. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. Because a diesel engine has many complex noise sources, during the noise and vibration test a lead covering was applied to the engine to isolate interference noise from cylinders 1-5; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were used to measure the radiated noise signals 1 m away from the engine. First, the binaural sound localization method was adopted to separate noise sources located in different places. Then, for noise sources in the same place, the blind source separation method was used to further separate and identify them. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further verify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine; the combustion noise and the piston slap noise are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
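
    As an illustration of the coherence-function check used in the identification step, the sketch below computes magnitude-squared coherence between a hypothetical separated component and a reference signal; the sampling rate, noise levels, and the 1988-Hz piston-slap line are assumptions for illustration, not the study's measurements:

    ```python
    import numpy as np
    from scipy.signal import coherence

    fs = 10000                      # sampling rate in Hz (assumed)
    t = np.arange(fs) / fs          # 1 s of signal
    rng = np.random.default_rng(1)

    # Hypothetical separated noise component and a vibration reference,
    # both dominated by a 1988-Hz piston-slap line plus independent noise.
    ref = np.sin(2 * np.pi * 1988 * t) + 0.3 * rng.standard_normal(fs)
    sep = np.sin(2 * np.pi * 1988 * t + 0.5) + 0.3 * rng.standard_normal(fs)

    # Magnitude-squared coherence; a value near 1 around 1988 Hz supports
    # attributing the separated component to the piston-slap source.
    f, Cxy = coherence(ref, sep, fs=fs, nperseg=1024)
    peak_freq = f[np.argmax(Cxy)]
    ```

    In practice the reference would be a cylinder-head vibration or pressure signal, so that high coherence in a frequency band ties a separated component to a physical source.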

  14. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques, we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  15. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual......, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine...

  16. Expectations in Culturally Unfamiliar Music: Influences of Proximal and Distal Cues and Timbral Characteristics

    Directory of Open Access Journals (Sweden)

    Catherine J Stevens

    2013-11-01

    Full Text Available Listeners’ musical perception is influenced by cues that can be stored in short-term memory (e.g. within the same musical piece) or long-term memory (e.g. based on one’s own musical culture). The present study tested how these cues (referred to as, respectively, proximal and distal cues) influence the perception of music from an unfamiliar culture. Western listeners who were naïve to Gamelan music judged completeness and coherence for newly constructed melodies in the Balinese gamelan tradition. In these melodies, we manipulated the final tone with three possibilities: the original gong tone, an in-scale tone replacement or an out-of-scale tone replacement. We also manipulated the musical timbre employed in Gamelan pieces. We hypothesized that novice listeners are sensitive to out-of-scale changes, but not in-scale changes, and that this might be influenced by the more unfamiliar timbre created by Gamelan sister instruments whose harmonics beat with the harmonics of the other instrument, creating a timbrally shimmering sound. The results showed: (1) out-of-scale endings were judged less complete than original gong and in-scale endings; (2) for melodies played with sister instruments, in-scale endings were judged as less complete than original endings. Furthermore, melodies using the original scale tones were judged more coherent than melodies containing few or multiple tone replacements; melodies played on single instruments were judged more coherent than the same melodies played on sister instruments. Additionally, there is some indication of within-session statistical learning, with expectations for the initially-novel materials developing during the course of the experiment. The data suggest the influence of both distal cues (e.g. previously unfamiliar timbres) and proximal cues (within the same sequence and over the experimental session) on the perception of melodies from other cultural systems based on unfamiliar tunings and scale systems.

  17. Expectations in culturally unfamiliar music: influences of proximal and distal cues and timbral characteristics.

    Science.gov (United States)

    Stevens, Catherine J; Tardieu, Julien; Dunbar-Hall, Peter; Best, Catherine T; Tillmann, Barbara

    2013-01-01

    Listeners' musical perception is influenced by cues that can be stored in short-term memory (e.g., within the same musical piece) or long-term memory (e.g., based on one's own musical culture). The present study tested how these cues (referred to as, respectively, proximal and distal cues) influence the perception of music from an unfamiliar culture. Western listeners who were naïve to Gamelan music judged completeness and coherence for newly constructed melodies in the Balinese gamelan tradition. In these melodies, we manipulated the final tone with three possibilities: the original gong tone, an in-scale tone replacement or an out-of-scale tone replacement. We also manipulated the musical timbre employed in Gamelan pieces. We hypothesized that novice listeners are sensitive to out-of-scale changes, but not in-scale changes, and that this might be influenced by the more unfamiliar timbre created by Gamelan "sister" instruments whose harmonics beat with the harmonics of the other instrument, creating a timbrally "shimmering" sound. The results showed: (1) out-of-scale endings were judged less complete than original gong and in-scale endings; (2) for melodies played with "sister" instruments, in-scale endings were judged as less complete than original endings. Furthermore, melodies using the original scale tones were judged more coherent than melodies containing few or multiple tone replacements; melodies played on single instruments were judged more coherent than the same melodies played on sister instruments. Additionally, there is some indication of within-session statistical learning, with expectations for the initially-novel materials developing during the course of the experiment. The data suggest the influence of both distal cues (e.g., previously unfamiliar timbres) and proximal cues (within the same sequence and over the experimental session) on the perception of melodies from other cultural systems based on unfamiliar tunings and scale systems.

  18. Effect of pictorial depth cues, binocular disparity cues and motion parallax depth cues on lightness perception in three-dimensional virtual scenes.

    Directory of Open Access Journals (Sweden)

    Michiteru Kitazaki

    2008-09-01

    Full Text Available Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions is different from perceived lightness in actual scenes, but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth. Observers viewed a virtual room (4 m width x 5 m height x 17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular disparity (BD) depth cues and with or without motion parallax (MP) depth cues. In all conditions, observers were asked to adjust the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5-17.5 m) in the scene. We estimated lightness versus depth profiles in all four depth cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting, and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive. BD had greater influence at near distances than far. These results suggest that surface lightness perception is modulated by three-dimensional perception/interpretation using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.

  19. Preconditioning of Spatial and Auditory Cues: Roles of the Hippocampus, Frontal Cortex, and Cue-Directed Attention

    Directory of Open Access Journals (Sweden)

    Andrew C. Talk

    2016-12-01

    Full Text Available Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity.

  20. Preconditioning of Spatial and Auditory Cues: Roles of the Hippocampus, Frontal Cortex, and Cue-Directed Attention

    Science.gov (United States)

    Talk, Andrew C.; Grasby, Katrina L.; Rawson, Tim; Ebejer, Jane L.

    2016-01-01

    Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity. PMID:27999366

  1. Grasp cueing and joint attention.

    Science.gov (United States)

    Tschentscher, Nadja; Fischer, Martin H

    2008-10-01

    We studied how two different hand posture cues affect joint attention in normal observers. Visual targets appeared over lateralized objects, with different delays after centrally presented hand postures. Attention was cued by either hand direction or the congruency between hand aperture and object size. Participants pressed a button when they detected a target. Direction cues alone facilitated target detection following short delays but aperture cues alone were ineffective. In contrast, when hand postures combined direction and aperture cues, aperture congruency effects without directional congruency effects emerged and persisted, but only for power grips. These results suggest that parallel parameter specification makes joint attention mechanisms exquisitely sensitive to the timing and content of contextual cues.

  2. Impact of DCS-facilitated cue exposure therapy on brain activation to cocaine cues in cocaine dependence.

    Science.gov (United States)

    Prisciandaro, James J; Myrick, Hugh; Henderson, Scott; McRae-Clark, Aimee L; Santa Ana, Elizabeth J; Saladin, Michael E; Brady, Kathleen T

    2013-09-01

    The development of addiction is marked by a pathological associative learning process that imbues incentive salience to stimuli associated with drug use. Recent efforts to treat addiction have targeted this learning process using cue exposure therapy augmented with d-cycloserine (DCS), a glutamatergic agent hypothesized to enhance extinction learning. To better understand the impact of DCS-facilitated extinction on neural reactivity to drug cues, the present study reports fMRI findings from a randomized, double-blind, placebo-controlled trial of DCS-facilitated cue exposure for cocaine dependence. Twenty-five participants completed two MRI sessions (before and after the intervention), each with a cocaine-cue reactivity fMRI task. The intervention consisted of 50 mg of DCS or placebo, combined with two sessions of cocaine cue exposure and skills training. Participants demonstrated cocaine cue activation in a variety of brain regions at baseline. From the pre- to post-study scan, participants experienced decreased activation to cues in a number of regions (e.g., accumbens, caudate, frontal poles). Unexpectedly, placebo participants experienced decreases in activation to cues in the left angular and middle temporal gyri and the lateral occipital cortex, while DCS participants did not. Three trials of DCS-facilitated cue exposure therapy for cocaine dependence have found that DCS either increases or does not significantly impact response to cocaine cues. The present study adds to this literature by demonstrating that DCS may prevent extinction to cocaine cues in temporal and occipital brain regions. Although consistent with past research, results from the present study should be considered preliminary until replicated in larger samples. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Assessment of rival males through the use of multiple sensory cues in the fruitfly Drosophila pseudoobscura.

    Directory of Open Access Journals (Sweden)

    Chris P Maguire

    Full Text Available Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal's success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster, males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, where auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use between two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species.

  4. Cue integration vs. exemplar-based reasoning in multi-attribute decisions from memory: A matter of cue representation

    OpenAIRE

    Arndt Broeder; Ben R. Newell; Christine Platzer

    2010-01-01

    Inferences about target variables can be achieved by deliberate integration of probabilistic cues or by retrieving similar cue-patterns (exemplars) from memory. In tasks with cue information presented in on-screen displays, rule-based strategies tend to dominate unless the abstraction of cue-target relations is infeasible. This dominance has also been demonstrated, surprisingly, in experiments that demanded the retrieval of cue values from memory (M. Persson & J. Rieskamp, 2009). In th...

  5. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias

    2016-01-01

    In echoic conditions, sound sources are not perceived as point sources but appear to be expanded. The expansion in the horizontal dimension is referred to as apparent source width (ASW). To elicit this perception, the auditory system has access to fluctuations of binaural cues, the interaural time...... a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model’s performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeaker-based experiments. A robust model prediction of ASW was achieved using a cross...
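
    The percentile-based statistical representation mentioned above can be sketched for the ILD cue alone; here the fluctuations are summarised over time only (the model also integrates over frequency, which is omitted), and the binaural input is synthetic rather than a real recording:

    ```python
    import numpy as np

    fs = 44100
    rng = np.random.default_rng(2)
    left = rng.standard_normal(fs)                       # 1 s of a synthetic
    right = 0.8 * left + 0.2 * rng.standard_normal(fs)   # binaural input (L/R)

    def ild_percentiles(left, right, frame=1024, q=(10, 50, 90)):
        """Short-time ILD in dB, summarised by percentiles over time."""
        n = min(len(left), len(right)) // frame
        l = left[: n * frame].reshape(n, frame)
        r = right[: n * frame].reshape(n, frame)
        eps = 1e-12                  # guard against log of zero-energy frames
        ild = 10 * np.log10((np.sum(l ** 2, axis=1) + eps)
                            / (np.sum(r ** 2, axis=1) + eps))
        return np.percentile(ild, q)

    p10, p50, p90 = ild_percentiles(left, right)
    # A wider p90 - p10 spread means stronger ILD fluctuations, which such
    # a model would map onto a larger predicted ASW.
    ```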

  6. An Eye Tracking Comparison of External Pointing Cues and Internal Continuous Cues in Learning with Complex Animations

    Science.gov (United States)

    Boucheix, Jean-Michel; Lowe, Richard K.

    2010-01-01

    Two experiments used eye tracking to investigate a novel cueing approach for directing learner attention to low salience, high relevance aspects of a complex animation. In the first experiment, comprehension of a piano mechanism animation containing spreading-colour cues was compared with comprehension obtained with arrow cues or no cues. Eye…

  7. Spatial aspects of sound quality - subjective assessment of sound reproduced by stereo and multichannel systems

    DEFF Research Database (Denmark)

    Choisel, Sylvain

    the fidelity with which sound reproduction systems can re-create the desired stereo image, a laser pointing technique was developed to accurately collect subjects' responses in a localization task. This method is subsequently applied in an investigation of the effects of loudspeaker directivity...... on the perceived direction of panned sources. The second part of the thesis addresses the identification of auditory attributes which play a role in the perception of sound reproduced by multichannel systems. Short musical excerpts were presented in mono, stereo and several multichannel formats to evoke various...

  8. Introspective responses to cues and motivation to reduce cigarette smoking influence state and behavioral responses to cue exposure.

    Science.gov (United States)

    Veilleux, Jennifer C; Skinner, Kayla D

    2016-09-01

    In the current study, we aimed to extend smoking cue-reactivity research by evaluating delay discounting as an outcome of cigarette cue exposure. We also separated introspection in response to cues (e.g., self-reporting craving and affect) from cue exposure alone, to determine whether introspection changes behavioral responses to cigarette cues. Finally, we included measures of quit motivation and resistance to smoking to assess motivational influences on cue exposure. Smokers were invited to participate in an online cue-reactivity study. Participants were randomly assigned to view smoking images or neutral images, and were randomized to respond to cues with either craving and affect questions (i.e., introspection) or filler questions. Following cue exposure, participants completed a delay discounting task and then reported state affect, craving, and resistance to smoking, as well as an assessment of quit motivation. We found that after controlling for trait impulsivity, participants who introspected on craving and affect showed higher delay discounting, irrespective of cue type, but we found no effect of response condition on subsequent craving (i.e., craving reactivity). We also found that motivation to quit interacted with experimental conditions to predict state craving and state resistance to smoking. Although asking about craving during cue exposure did not increase later craving, it resulted in greater discounting of delayed rewards. Overall, our findings suggest the need to further assess the implications of introspection and motivation on behavioral outcomes of cue exposure. Copyright © 2016 Elsevier Ltd. All rights reserved.
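
    The abstract does not state how delay discounting was quantified; a common choice for such tasks is Mazur's hyperbolic model, V = A / (1 + kD), sketched here purely for illustration (the amounts, delays, and k values are made up):

    ```python
    def hyperbolic_value(amount, delay, k):
        """Subjective value of a delayed reward under hyperbolic discounting."""
        return amount / (1.0 + k * delay)

    # A larger k means steeper discounting: the same 100 units at a 30-day
    # delay are worth less to a steeper discounter.
    shallow = hyperbolic_value(100, 30, 0.01)   # ~76.9
    steep = hyperbolic_value(100, 30, 0.05)     # ~40.0
    ```

    Fitting k to a participant's choices between immediate and delayed rewards yields the "higher delay discounting" measure the study refers to.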

  9. Effects of cue-exposure treatment on neural cue reactivity in alcohol dependence: a randomized trial.

    Science.gov (United States)

    Vollstädt-Klein, Sabine; Loeber, Sabine; Kirsch, Martina; Bach, Patrick; Richter, Anne; Bühler, Mira; von der Goltz, Christoph; Hermann, Derik; Mann, Karl; Kiefer, Falk

    2011-06-01

    In alcohol-dependent patients, alcohol-associated cues elicit brain activation in mesocorticolimbic networks involved in relapse mechanisms. Cue-exposure based extinction training (CET) has been shown to be efficacious in the treatment of alcoholism; however, it has remained unexplored whether CET mediates its therapeutic effects via changes of activity in mesolimbic networks in response to alcohol cues. In this study, we assessed CET treatment effects on cue-induced responses using functional magnetic resonance imaging (fMRI). In a randomized controlled trial, abstinent alcohol-dependent patients were randomly assigned to a CET group (n = 15) or a control group (n = 15). All patients underwent an extended detoxification treatment comprising medically supervised detoxification, health education, and supportive therapy. The CET patients additionally received nine CET sessions over 3 weeks, exposing the patient to his/her preferred alcoholic beverage. Cue-induced fMRI activation to alcohol cues was measured at pretreatment and posttreatment. Compared with pretreatment, fMRI cue-reactivity reduction was greater in the CET relative to the control group, especially in the anterior cingulate gyrus and the insula, as well as limbic and frontal regions. Before treatment, increased cue-induced fMRI activation was found in limbic and reward-related brain regions and in visual areas. After treatment, the CET group showed less activation than the control group in the left ventral striatum. The study provides first evidence that an exposure-based psychotherapeutic intervention in the treatment of alcoholism impacts on brain areas relevant for addiction memory and attentional focus to alcohol-associated cues and affects mesocorticolimbic reward pathways suggested to be pathophysiologically involved in addiction. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  10. Global Repetition Influences Contextual Cueing

    Science.gov (United States)

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with the same number of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between the repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1–4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio) in Experiments 1, 2, and 4, given that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics could be a crucial factor mediating the contextual cueing effect. PMID:29636716

  11. Can listening to sound sequences facilitate movement? The potential for motor rehabilitation

    DEFF Research Database (Denmark)

    Bodak, Rebeka; Stewart, Lauren; Stephan, Marianne

    examining the impact of auditory exposure on the formation of new motor memories in healthy nonmusicians. Following an audiomotor mapping session, participants will be asked to listen to and memorise sequence A or sequence B in a sound-only task. Employing a congruent/incongruent crossover design......, participants’ motor performance will be tested using visuospatial stimuli to cue key presses, either to the congruent sequence they heard, or to the incongruent unfamiliar sequence. It is predicted that the congruent group will perform faster than the incongruent group. The findings of this study have...

  12. Songbirds and humans apply different strategies in a sound sequence discrimination task

    Directory of Open Access Journals (Sweden)

    Yoshimasa eSeki

    2013-07-01

    Full Text Available The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, reconfirming that the human ability to extract abstract rules from sound sequences is distinct from that of non-human animals.

  13. Songbirds and humans apply different strategies in a sound sequence discrimination task.

    Science.gov (United States)

    Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo

    2013-01-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that the human ability to extract abstract rules from sound sequences is distinct from that of non-human animals.
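The two candidate strategies can be made concrete with a small sketch (our illustration, not the authors' analysis code): an abstract-rule classifier only checks which adjacent pair repeats, while a memorization strategy responds GO only to exact trained patterns and therefore fails on novel probes such as FFM.

```python
# Illustrative sketch (not the authors' code) of the two discrimination
# strategies described above for AAB- vs. ABB-structured triplets.

def abstract_rule(seq):
    """Abstract-rule strategy: classify by which adjacent pair repeats."""
    a, b, c = seq
    if a == b and b != c:
        return "AAB"
    if a != b and b == c:
        return "ABB"
    return None  # sequence fits neither rule

def memorized_patterns(seq, go_patterns):
    """Memorization strategy: respond GO only to exact trained patterns."""
    return "GO" if tuple(seq) in go_patterns else "NOGO"

# Training used male (M) and female (F) call elements: MMF vs. MFF.
print(abstract_rule(["M", "M", "F"]))   # -> AAB
print(abstract_rule(["F", "F", "M"]))   # -> AAB (novel probe, same rule)

# A bird trained GO on MMF does not generalize to the FFM probe:
go = {("M", "M", "F")}
print(memorized_patterns(["F", "F", "M"], go))  # -> NOGO
```

Under the abstract rule, FFM and MMF land in the same category; under pure pattern memorization they do not, which is the dissociation the experiment exploits.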

  14. Guidelines for the integration of audio cues into computer user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sumikawa, D.A.

    1985-06-01

    Throughout the history of computers, vision has been the main channel through which information is conveyed to the computer user. As the complexities of man-machine interactions increase, more and more information must be transferred from the computer to the user and then successfully interpreted by the user. A logical next step in the evolution of the computer-user interface is the incorporation of sound, thereby using the sense of "hearing" in the computer experience. This allows our visual and auditory capabilities to work naturally together in unison, leading to more effective and efficient interpretation of all information received by the user from the computer. This thesis presents an initial set of guidelines to assist interface developers in designing an effective sight-and-sound user interface. This study is a synthesis of various aspects of sound, human communication, computer-user interfaces, and psychoacoustics. We introduce the notion of an earcon. Earcons are audio cues used in the computer-user interface to provide information and feedback to the user about some computer object, operation, or interaction. A possible construction technique for earcons, the use of earcons in the interface, how earcons are learned and remembered, and the effects of earcons on their users are investigated. This study takes the point of view that earcons are a language and human/computer communication issue and are therefore analyzed according to the three dimensions of linguistics: syntactics, semantics, and pragmatics.

  15. Enhanced Excitatory Connectivity and Disturbed Sound Processing in the Auditory Brainstem of Fragile X Mice.

    Science.gov (United States)

    Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula

    2017-08-02

    […] interactions, contributing to their isolation. Here, a mouse model of FXS was used to investigate the auditory brainstem, where basic sound information is first processed. Loss of the Fragile X mental retardation protein leads to excessive excitatory compared with inhibitory inputs in neurons extracting information about sound levels. Functionally, this elevated excitation results in increased firing rates and abnormal coding of frequency and binaural sound localization cues. Imbalanced early-stage sound level processing could partially explain the auditory processing deficits in FXS.

  16. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing, etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL), and furthermore applies post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement.

  17. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    Science.gov (United States)

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low-frequency tones generated from seven speakers concavely arranged with 30 degrees of separation. Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl's gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  18. The cues have it; nest-based, cue-mediated recruitment to carbohydrate resources in a swarm-founding social wasp

    Science.gov (United States)

    Schueller, Teresa I.; Nordheim, Erik V.; Taylor, Benjamin J.; Jeanne, Robert L.

    2010-11-01

    This study explores whether or not foragers of the Neotropical swarm-founding wasp Polybia occidentalis use nest-based recruitment to direct colony mates to carbohydrate resources. Recruitment allows social insect colonies to rapidly exploit ephemeral resources, an ability especially advantageous to species such as P. occidentalis, which store nectar and prey in their nests. Although recruitment is often defined as being strictly signal mediated, it can also occur via cue-mediated information transfer. Previous studies indicated that P. occidentalis employs local enhancement, a type of cue-mediated recruitment in which the presence of conspecifics at a site attracts foragers. This recruitment is resource-based, and as such, is a blunt recruitment tool, which does not exclude non-colony mates. We therefore investigated whether P. occidentalis also employs a form of nest-based recruitment. A scented sucrose solution was applied directly to the nest. This mimicked a scented carbohydrate resource brought back by employed foragers, but, as foragers were not allowed to return to the nest with the resource, there was no possibility for on-nest recruitment behavior. Foragers were offered two dishes—one containing the test scent and the other an alternate scent. Foragers chose the test scent more often, signifying that its presence in the nest induces naïve foragers to search for it off-nest. P. occidentalis, therefore, employs a form of nest-based recruitment to carbohydrate resources that is mediated by a cue, the presence of a scented resource in the nest.

  19. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults (science teachers, parents wanting to help with homework, home-schoolers) seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  20. The case for infrasound as the long-range map cue in avian navigation

    Science.gov (United States)

    Hagstrum, J.T.

    2007-01-01

    Of the various 'map' and 'compass' components of Kramer's avian navigational model, the long-range map component is the least well understood. In this paper atmospheric infrasounds are proposed as the elusive long-range cues constituting the avian navigational map. Although infrasounds were considered a viable candidate for the avian map in the 1970s, and pigeons in the laboratory were found to detect sounds at surprisingly low frequencies (0.05 Hz), other tests appeared to support either of the currently favored olfactory or magnetic maps. Neither of these hypotheses, however, is able to explain the full set of observations, and the field has been at an impasse for several decades. To begin, brief descriptions of infrasonic waves and their passage through the atmosphere are given, followed by accounts of previously unexplained release results. These examples include 'release-site biases', which are deviations of departing pigeons from the homeward bearing; an annual variation in homing performance observed only in Europe; difficulties orienting over lakes and above temperature inversions; and the mysterious disruption of several pigeon races. All of these irregularities can be consistently explained by the deflection or masking of infrasonic cues by atmospheric conditions or by other infrasonic sources (microbaroms, sonic booms), respectively. A source of continuous geographic infrasound generated by atmosphere-coupled microseisms is also proposed. In conclusion, several suggestions are made toward resolving some of the conflicting experimental data with the pigeons' possible use of infrasonic cues.

  1. Locating and classification of structure-borne sound occurrence using wavelet transformation

    International Nuclear Information System (INIS)

    Winterstein, Martin; Thurnreiter, Martina

    2011-01-01

    For the surveillance of nuclear facilities with respect to detached or loose parts within the pressure boundary, structure-borne sound detector systems are used. The impact of a loose part on the wall transfers energy to the wall, which is measured as a so-called singular sound event. The run-time differences of the sound signals allow a rough localization of the loose part. The authors performed a finite-element-based simulation of structure-borne sound measurements using real geometries. New knowledge on sound wave propagation, signal analysis and processing, neural networks, and hidden Markov models was considered. Using the wavelet transformation it is possible to improve the localization of structure-borne sound events.
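The run-time-difference idea can be illustrated with a simplified sketch: estimate the arrival-time lag of an impact signal between two sensors by cross-correlating their recordings. (The study itself uses wavelet analysis to sharpen this localization; the sample rate and synthetic impact signal below are assumptions for illustration only.)

```python
import numpy as np

# Simplified illustration of locating via run-time differences:
# a synthetic impact "burst" arrives at sensor 2 later than at sensor 1,
# and the peak of the cross-correlation recovers that lag.

fs = 100_000                        # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
burst = np.exp(-t * 2000) * np.sin(2 * np.pi * 5000 * t)  # decaying impact

true_delay = 25                     # samples between sensor arrivals
s1 = np.concatenate([burst, np.zeros(true_delay)])
s2 = np.concatenate([np.zeros(true_delay), burst])

# Peak of the cross-correlation gives the run-time difference.
xcorr = np.correlate(s2, s1, mode="full")
lag = np.argmax(xcorr) - (len(s1) - 1)
print(lag, lag / fs)                # -> 25 samples, i.e. 0.00025 s
```

With sensors at known positions and a known wave speed, such lags constrain the impact location; wavelet decomposition helps when dispersion smears the arrival.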

  2. Compound cueing in free recall

    Science.gov (United States)

    Lohnas, Lynn J.; Kahana, Michael J.

    2013-01-01

    According to the retrieved context theory of episodic memory, the cue for recall of an item is a weighted sum of recently activated cognitive states, including previously recalled and studied items as well as their associations. We show that this theory predicts there should be compound cueing in free recall. Specifically, the temporal contiguity effect should be greater when the two most recently recalled items were studied in contiguous list positions. A meta-analysis of published free recall experiments demonstrates evidence for compound cueing in both conditional response probabilities and inter-response times. To help rule out a rehearsal-based account of these compound cueing effects, we conducted an experiment with immediate, delayed and continual-distractor free recall conditions. Consistent with retrieved context theory but not with a rehearsal-based account, compound cueing was present in all conditions, and was not significantly influenced by the presence of interitem distractors. PMID:23957364
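The weighted-sum cue at the heart of this prediction can be sketched numerically (a toy illustration under assumed item representations, not the authors' retrieved context model):

```python
import numpy as np

# Toy sketch of a compound cue: the retrieval cue is a weighted sum of the
# representations of the most recently recalled items, so an item studied
# near *both* recent recalls receives the strongest support.

n_items = 8
# Assumed item representations: nearby study positions share features.
items = np.array([[np.exp(-abs(i - j)) for j in range(n_items)]
                  for i in range(n_items)])

def compound_cue(recalled, weights):
    """Weighted sum of the representations of just-recalled items."""
    return sum(w * items[r] for r, w in zip(recalled, weights))

# The two most recent recalls came from contiguous positions 3 and 4,
# with the more recent recall weighted more heavily:
cue = compound_cue([3, 4], [0.5, 1.0])
support = items @ cue            # similarity of every item to the cue
support[[3, 4]] = -np.inf        # already-recalled items are excluded
print(int(np.argmax(support)))   # -> 5, the neighbor of the last recall
```

Because the compound cue overlaps the neighborhoods of both recent recalls, contiguous-pair recalls produce an especially strong temporal contiguity effect, which is the signature the meta-analysis looks for.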

  3. Visible propagation from invisible exogenous cueing.

    Science.gov (United States)

    Lin, Zhicheng; Murray, Scott O

    2013-09-20

    Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8°. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms; rather, they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.

  4. Objective function analysis for electric soundings (VES), transient electromagnetic soundings (TEM) and joint inversion VES/TEM

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert

    2017-11-01

    Ambiguities in geophysical inversion results are always present. How these ambiguities appear is in most cases open to interpretation. It is interesting to investigate ambiguities with regard to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and TEM sounding inversion results. Through topographic analysis of the objective function we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very interesting tool for understanding the joint VES/TEM inversion method. The applicability of the RFDM analysis to real data is also explored in this paper, to demonstrate not only how the objective function of real data behaves but also the applicability of the RFDM approach in real cases. With the analysis of the results, it is possible to understand how joint inversion can reduce the ambiguity of the methods.

  5. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    Science.gov (United States)

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (validly or invalidly) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze cue stimuli.

  6. The (un)clear effects of invalid retro-cues.

    Directory of Open Access Journals (Sweden)

    Marcel Gressmann

    2016-03-01

    Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another suggested that the inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations may be taken with some caution at present.

  7. Otite média recorrente e habilidade de localização sonora em pré-escolares / Otitis media and sound localization ability in preschool children

    Directory of Open Access Journals (Sweden)

    Aveliny Mantovan Lima-Gregio

    2010-12-01

    PURPOSE: to compare the performance of 40 preschool children on a sound localization test with their parents' answers to a questionnaire investigating the occurrence of otitis media (OM) episodes and symptoms indicative of audiological and auditory processing disorders. METHODS: after applying and analyzing the questionnaire's answers, two groups were formed: OG, with a history of OM, and CG, a control group without this history. Each group, with 20 preschool children of both genders, was submitted to the sound localization test in five directions (Pereira, 1993). RESULTS: the comparison between OG and CG did not reveal a statistically significant difference (p=1.0000). CONCLUSION: recurrent OM episodes during early childhood did not influence the sound localization ability of the preschool children in this study. Although both evaluation instruments (questionnaire and sound localization test) are cheap and easy to apply, they were not sufficient to differentiate the two tested groups.

  8. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    Science.gov (United States)

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening pictures, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  9. Audibility of individual reflections in a complete sound field, III

    DEFF Research Database (Denmark)

    Bech, Søren

    1996-01-01

    This paper reports on the influence of individual reflections on the auditory localization of a loudspeaker in a small room. The sound field produced by a single loudspeaker positioned in a normal listening room has been simulated using an electroacoustic setup. The setup models the direct sound […]-independent absorption coefficients of the room surfaces, and (2) a loudspeaker with directivity according to a standard two-way system and absorption coefficients according to real materials. The results have shown that subjects can distinguish reliably between timbre and localization, that the spectrum level above 2 k[…]

  10. Restrictions of frequent frames as cues to categories: the case of Dutch

    NARCIS (Netherlands)

    Erkelens, M.A.; Chan, H.; Kapia, E.; Jacob, H.

    2008-01-01

    Why Dutch 12-month-old infants do not use frequent frames in early categorization. Mintz (2003) proposes that very local distributional contexts of words in the input (so-called 'frequent frames') function as reliable cues for categories corresponding to the adult verb and noun. He shows that […]

  11. Segregating Top-Down Selective Attention from Response Inhibition in a Spatial Cueing Go/NoGo Task: An ERP and Source Localization Study.

    Science.gov (United States)

    Hong, Xiangfei; Wang, Yao; Sun, Junfeng; Li, Chunbo; Tong, Shanbao

    2017-08-29

    Successfully inhibiting a prepotent response tendency requires the attentional detection of signals which cue response cancellation. Although neuroimaging studies have identified important roles of stimulus-driven processing in the attentional detection, the effects of top-down control were scarcely investigated. In this study, scalp EEG was recorded from thirty-two participants during a modified Go/NoGo task, in which a spatial-cueing approach was implemented to manipulate top-down selective attention. We observed classical event-related potential components, including N2 and P3, in the attended condition of response inhibition. While in the ignored condition of response inhibition, a smaller P3 was observed and N2 was absent. The correlation between P3 and CNV during the foreperiod suggested an inhibitory role of P3 in both conditions. Furthermore, source analysis suggested that P3 generation was mainly localized to the midcingulate cortex, and the attended condition showed increased activation relative to the ignored condition in several regions, including inferior frontal gyrus, middle frontal gyrus, precentral gyrus, insula and uncus, suggesting that these regions were involved in top-down attentional control rather than inhibitory processing. Taken together, by segregating electrophysiological correlates of top-down selective attention from those of response inhibition, our findings provide new insights in understanding the neural mechanisms of response inhibition.

  12. Counterbalancing in smoking cue research: a critical analysis.

    Science.gov (United States)

    Sayette, Michael A; Griffin, Kasey M; Sayers, W Michael

    2010-11-01

    Cue exposure research has been used to examine key issues in smoking research, such as predicting relapse, testing new medications, investigating the neurobiology of nicotine dependence, and examining reactivity among smokers with comorbid psychopathologies. Determining the order that cues are presented is one of the most critical steps in the design of these investigations. It is widely assumed that cue exposure studies should counterbalance the order in which smoking and control (neutral) cues are presented. This article examines the premises underlying the use of counterbalancing in experimental research, and it evaluates the degree to which counterbalancing is appropriate in smoking cue exposure studies. We reviewed the available literature on the use of counterbalancing techniques in human smoking cue exposure research. Many studies counterbalancing order of cues have not provided critical analyses to determine whether this approach was appropriate. Studies that have reported relevant data, however, suggest that order of cue presentation interacts with type of cue (smoking vs. control), which raises concerns about the utility of counterbalancing. Primarily, this concern arises from potential carryover effects, in which exposure to smoking cues affects subsequent responding to neutral cues. Cue type by order of cue interactions may compromise the utility of counterbalancing. Unfortunately, there is no obvious alternative that is optimal across studies. Strengths and limitations of several alternative designs are considered, and key questions are identified to advance understanding of the optimal conditions for conducting smoking cue exposure studies.

  13. Counterbalancing in Smoking Cue Research: A Critical Analysis

    Science.gov (United States)

    Griffin, Kasey M.; Sayers, W. Michael

    2010-01-01

    Introduction: Cue exposure research has been used to examine key issues in smoking research, such as predicting relapse, testing new medications, investigating the neurobiology of nicotine dependence, and examining reactivity among smokers with comorbid psychopathologies. Determining the order that cues are presented is one of the most critical steps in the design of these investigations. It is widely assumed that cue exposure studies should counterbalance the order in which smoking and control (neutral) cues are presented. This article examines the premises underlying the use of counterbalancing in experimental research, and it evaluates the degree to which counterbalancing is appropriate in smoking cue exposure studies. Methods: We reviewed the available literature on the use of counterbalancing techniques in human smoking cue exposure research. Results: Many studies counterbalancing order of cues have not provided critical analyses to determine whether this approach was appropriate. Studies that have reported relevant data, however, suggest that order of cue presentation interacts with type of cue (smoking vs. control), which raises concerns about the utility of counterbalancing. Primarily, this concern arises from potential carryover effects, in which exposure to smoking cues affects subsequent responding to neutral cues. Conclusions: Cue type by order of cue interactions may compromise the utility of counterbalancing. Unfortunately, there is no obvious alternative that is optimal across studies. Strengths and limitations of several alternative designs are considered, and key questions are identified to advance understanding of the optimal conditions for conducting smoking cue exposure studies. PMID:20884695

  14. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410
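Dose-meter readings of the kind used in such studies are typically combined into an equivalent continuous level; a minimal sketch of that energy average follows (the levels, durations, and any limit compared against are made-up examples, not the study's data or the actual Norwegian/WHO figures):

```python
import numpy as np

def l_aeq(levels_db, durations_s):
    """Energy-average a list of A-weighted levels over their durations
    to obtain the equivalent continuous level, L_Aeq."""
    levels = np.asarray(levels_db, dtype=float)
    durations = np.asarray(durations_s, dtype=float)
    # Convert each level to relative intensity, time-weight, then average.
    energy = np.sum(durations * 10.0 ** (levels / 10.0))
    return 10.0 * np.log10(energy / np.sum(durations))

# One festival hour at 95 dB(A) followed by one hour at 105 dB(A):
print(round(l_aeq([95, 105], [3600, 3600]), 1))  # -> 102.4
```

Note that the louder hour dominates the average: energy averaging in decibels is far from arithmetic averaging, which is why short loud concerts can push a whole festival day over a limit.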

  15. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

    Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival’s duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concert and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization’s recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization’s recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  16. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
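    The core manipulation can be sketched compactly: the product of the received field at two in-band frequencies behaves like a field at their (much lower) difference frequency, and that product is then fed to an ordinary Bartlett processor. The following is a minimal free-space illustration of the idea, not the KAM11 processing chain; the array geometry, frequencies, and candidate ranges are invented for the demo:

    ```python
    import numpy as np

    def frequency_difference_autoproduct(p_f1, p_f2):
        """Element-wise product of the field at f2 with the conjugate field at f1;
        its phase varies like that of a field at the difference frequency f2 - f1."""
        return p_f2 * np.conj(p_f1)

    def bartlett(replicas, data):
        """Normalized Bartlett ambiguity value for each candidate replica vector."""
        d = data / np.linalg.norm(data)
        return np.array([np.abs(np.vdot(w / np.linalg.norm(w), d)) ** 2
                         for w in replicas])

    # Demo: 16-element vertical array, phase-only spherical-wave "Green's functions".
    c = 1500.0                        # sound speed, m/s
    f1, f2 = 11200.0, 12200.0         # two in-band frequencies, Hz
    z = np.arange(16) * 7.0           # receiver depths, m

    def steer(f, rng):
        r = np.sqrt(rng ** 2 + z ** 2)          # slant range to each element
        return np.exp(-2j * np.pi * f / c * r)  # free-space phase model

    ranges = np.array([1000.0, 1200.0, 1400.0])  # candidate source ranges, m
    true_idx = 1
    ap = frequency_difference_autoproduct(steer(f1, ranges[true_idx]),
                                          steer(f2, ranges[true_idx]))
    # Replicas are modeled at the difference frequency, far below the signal band.
    surface = bartlett([steer(f2 - f1, r) for r in ranges], ap)
    ```

    In this noiseless toy the ambiguity surface peaks at the true range; the practical appeal reported in the paper is that replicas computed at ~1 kHz are far less sensitive to environmental mismatch than replicas at 11-33 kHz would be.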

  17. Local anaphor licensing in an SOV language: implications for retrieval strategies

    Science.gov (United States)

    Kush, Dave; Phillips, Colin

    2014-01-01

    Because morphological and syntactic constraints govern the distribution of potential antecedents for local anaphors, local antecedent retrieval might be expected to make equal use of both syntactic and morphological cues. However, previous research (e.g., Dillon et al., 2013) has shown that local antecedent retrieval is not susceptible to the same morphological interference effects observed during the resolution of morphologically-driven grammatical dependencies, such as subject-verb agreement checking (e.g., Pearlmutter et al., 1999). Although this lack of interference has been taken as evidence that syntactic cues are given priority over morphological cues in local antecedent retrieval, the absence of interference could also be the result of a confound in the materials used: the post-verbal position of local anaphors in prior studies may obscure morphological interference that would otherwise be visible if the critical anaphor were in a different position. We investigated the licensing of local anaphors (reciprocals) in Hindi, an SOV language, in order to determine whether pre-verbal anaphors are subject to morphological interference from feature-matching distractors in a way that post-verbal anaphors are not. Computational simulations using a version of the ACT-R parser (Lewis and Vasishth, 2005) predicted that a feature-matching distractor should facilitate the processing of an unlicensed reciprocal if morphological cues are used in antecedent retrieval. In a self-paced reading study we found no evidence that distractors eased processing of an unlicensed reciprocal. However, the presence of a distractor increased difficulty of processing following the reciprocal. We discuss the significance of these results for theories of cue selection in retrieval. PMID:25414680
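    The cue-combination question being tested can be made concrete with a toy scoring rule in the spirit of ACT-R's cue-based retrieval (a deliberately simplified sketch with invented parameter values, not the Lewis and Vasishth model):

    ```python
    def retrieval_score(base_level, cue_matches, match_bonus=1.0, mismatch_penalty=1.0):
        """Score a memory candidate against the retrieval probe: each cue it
        matches adds activation, each cue it mismatches subtracts. The
        parameter values here are illustrative, not fitted."""
        score = base_level
        for matched in cue_matches:
            score += match_bonus if matched else -mismatch_penalty
        return score

    # Probe cues at the reciprocal: [syntactic: local subject?, morphological: plural?]
    candidates = {
        "local_subject": retrieval_score(0.0, [True, True]),
        "plural_distractor": retrieval_score(0.0, [False, True]),
        "singular_noun": retrieval_score(0.0, [False, False]),
    }
    winner = max(candidates, key=candidates.get)
    ```

    If morphological cues enter the probe, the feature-matching distractor scores closer to the target than other non-subjects do; with retrieval noise added, that proximity is what would produce the facilitation the simulations predicted, whereas a syntax-only probe leaves both distractors equally penalized.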

  18. Local Anaphor Licensing in an SOV Language: Implications for Retrieval Strategies

    Directory of Open Access Journals (Sweden)

    Dave eKush

    2014-11-01

    Full Text Available Because morphological and syntactic constraints govern the distribution of potential antecedents for local anaphors, local antecedent retrieval might be expected to make equal use of both syntactic and morphological cues. However, previous research (e.g., Dillon et al., 2013) has shown that local antecedent retrieval is not susceptible to the same morphological interference effects observed during the resolution of morphologically-driven grammatical dependencies, such as subject-verb agreement checking (e.g., Pearlmutter, Garnsey and Bock, 1999). Although this lack of interference has been taken as evidence that syntactic cues are given priority over morphological cues in local antecedent retrieval, the absence of interference could also be the result of a confound in the materials used: the post-verbal position of local anaphors in prior studies may obscure morphological interference that would otherwise be visible if the critical anaphor were in a different position. We investigated the licensing of local anaphors (reciprocals) in Hindi, an SOV language, in order to determine whether pre-verbal anaphors are subject to morphological interference from feature-matching distractors in a way that post-verbal anaphors are not. Computational simulations using a version of the ACT-R parser (Lewis and Vasishth, 2005) predicted that a feature-matching distractor should facilitate the processing of an unlicensed reciprocal if morphological cues are used in antecedent retrieval. In a self-paced reading study we found no evidence that distractors eased processing of an unlicensed reciprocal. However, the presence of a distractor increased difficulty of processing following the reciprocal. We discuss the significance of these results for theories of cue selection in retrieval.

  19. Extinction and renewal of cue-elicited reward-seeking.

    Science.gov (United States)

    Bezzina, Louise; Lee, Jessica C; Lovibond, Peter F; Colagiuri, Ben

    2016-12-01

    Reward cues can contribute to overconsumption of food and drugs and can trigger relapse. The failure of exposure therapies to reduce overconsumption and relapse is generally attributed to the context-specificity of extinction. However, no previous study has examined whether cue-elicited reward-seeking (as opposed to cue-reactivity) is sensitive to context renewal. We tested this possibility in 160 healthy volunteers using a Pavlovian-instrumental transfer (PIT) design involving voluntary responding for a high value natural reward (chocolate). One reward cue underwent Pavlovian extinction in the same (Group AAA) or different context (Group ABA) to all other phases. This cue was compared with a second non-extinguished reward cue and an unpaired control cue. There was a significant overall PIT effect with both reward cues eliciting reward-seeking on test relative to the unpaired cue. Pavlovian extinction substantially reduced this effect, with the extinguished reward cue eliciting less reward-seeking than the non-extinguished reward cue. Most interestingly, extinction of cue-elicited reward-seeking was sensitive to renewal, with extinction less effective for reducing PIT when conducted in a different context. These findings have important implications for extinction-based interventions for reducing maladaptive reward-seeking in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.

    Directory of Open Access Journals (Sweden)

    Arash Aryani

    Full Text Available Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and

  1. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making.

    Science.gov (United States)

    Aryani, Arash; Conrad, Markus; Schmidtke, David; Jacobs, Arthur

    2018-01-01

    Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (study1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (study2a), and through acoustic models that we implemented based on pseudoword materials (study2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they

  2. Interaction between scene-based and array-based contextual cueing.

    Science.gov (United States)

    Rosenbaum, Gail M; Jiang, Yuhong V

    2013-07-01

    Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

  3. Three-year experience with the Sophono in children with congenital conductive unilateral hearing loss: tolerability, audiometry, and sound localization compared to a bone-anchored hearing aid.

    Science.gov (United States)

    Nelissen, Rik C; Agterberg, Martijn J H; Hol, Myrthe K S; Snik, Ad F M

    2016-10-01

    Bone conduction devices (BCDs) are advocated as an amplification option for patients with congenital conductive unilateral hearing loss (UHL), while other treatment options could also be considered. The current study compared a transcutaneous BCD (Sophono) with a percutaneous BCD (bone-anchored hearing aid, BAHA) in 12 children with congenital conductive UHL. Tolerability, audiometry, and sound localization abilities with both types of BCD were studied retrospectively. The mean follow-up was 3.6 years for the Sophono users (n = 6) and 4.7 years for the BAHA users (n = 6). In each group, two patients had stopped using their BCD. Tolerability was favorable for the Sophono. Aided thresholds with the Sophono were unsatisfactory, as they did not fall below a mean pure tone average of 30 dB HL. Sound localization generally improved with both the Sophono and the BAHA, although localization abilities did not reach the level of normal hearing children. These findings, together with previously reported outcomes, are important to take into account when counseling patients and their caretakers. The selection of a suitable amplification option should always be made deliberately and on an individual basis for each patient in this diverse group of children with congenital conductive UHL.

  4. Retro-dimension-cue benefit in visual working memory

    OpenAIRE

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-01-01

    In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased prob...

  5. Self-construal differences in neural responses to negative social cues.

    Science.gov (United States)

    Liddell, Belinda J; Felmingham, Kim L; Das, Pritha; Whitford, Thomas J; Malhi, Gin S; Battaglini, Eva; Bryant, Richard A

    2017-10-01

    Cultures differ substantially in representations of the self. Whereas individualistic cultural groups emphasize an independent self, reflected in processing biases towards centralized salient objects, collectivistic cultures are oriented towards an interdependent self, attending to contextual associations between visual cues. It is unknown how these perceptual biases may affect brain activity in response to negative social cues. Moreover, while some studies have shown that individual differences in self-construal moderate cultural group comparisons, few have examined self-construal differences separate to culture. To investigate these issues, a final sample of healthy participants high in trait levels of collectivistic self-construal (n=16) and individualistic self-construal (n=19), regardless of cultural background, completed a negative social cue evaluation task designed to engage face/object vs context-specific neural processes whilst undergoing fMRI scanning. Between-group analyses revealed that the collectivistic group exclusively engaged the parahippocampal gyrus (parahippocampal place area) - a region critical to contextual integration - during negative face processing - suggesting compensatory activations when contextual information was missing. The collectivist group also displayed enhanced negative context dependent brain activity involving the left superior occipital gyrus/cuneus and right anterior insula. By contrast, the individualistic group did not engage object or localized face processing regions as predicted, but rather demonstrated heightened appraisal and self-referential activations in medial prefrontal and temporoparietal regions to negative contexts - again suggesting compensatory processes when focal cues were absent. While individualists also appeared more sensitive to negative faces in the scenes, activating the right middle cingulate gyrus and dorsal prefrontal and parietal regions, this activity was observed relative to the

  6. Effects of the timing and identity of retrieval cues in individual recall: an attempt to mimic cross-cueing in collaborative recall.

    Science.gov (United States)

    Andersson, Jan; Hitch, Graham; Meudell, Peter

    2006-01-01

    Inhibitory effects in collaborative recall have been attributed to cross-cueing among partners, in the same way that part-set cues are known to impair recall in individuals. However, studies of part-set cueing in individuals typically involve presenting cues visually at the start of recall, whereas cross-cueing in collaboration is likely to be spoken and distributed over time. In an attempt to bridge this gap, three experiments investigated effects of presenting spoken part-set or extra-list cues at different times during individual recall. Cues had an inhibitory effect on recollection in the early part of the recall period, especially when presented in immediate succession at the start of recall. There was no difference between the effects of part-set and extra-list cues under these presentation conditions. However, more inhibition was generated by part-set than extra-list cues when cue presentation was distributed throughout recall. These results are interpreted as suggesting that cues presented during recall disrupt memory in two ways, corresponding to either blocking or modifying retrieval processes. Implications for explaining and possibly ameliorating inhibitory effects in collaborative recall are discussed.

  7. Dominant Glint Based Prey Localization in Horseshoe Bats: A Possible Strategy for Noise Rejection

    OpenAIRE

    Vanderelst, Dieter; Reijniers, Jonas; Firzlaff, Uwe; Peremans, Herbert

    2011-01-01

    Rhinolophidae or Horseshoe bats emit long and narrowband calls. Fluttering insect prey generates echoes in which amplitude and frequency shifts are present, i.e. glints. These glints are reliable cues about the presence of prey and also encode certain properties of the prey. In this paper, we propose that these glints, i.e. the dominant glints, are also reliable signals upon which to base prey localization. In contrast to the spectral cues used by many other bats, the localization cues in Rhi...

  8. Eliciting nicotine craving with virtual smoking cues.

    Science.gov (United States)

    Gamito, Pedro; Oliveira, Jorge; Baptista, André; Morais, Diogo; Lopes, Paulo; Rosa, Pedro; Santos, Nuno; Brito, Rodrigo

    2014-08-01

    Craving is a strong desire to consume that emerges in every case of substance addiction. Previous studies have shown that eliciting craving with an exposure cues protocol can be a useful option for the treatment of nicotine dependence. Thus, the main goal of this study was to develop a virtual platform in order to induce craving in smokers. Fifty-five undergraduate students were randomly assigned to two different virtual environments: high arousal contextual cues and low arousal contextual cues scenarios (17 smokers with low nicotine dependency were excluded). An eye-tracker system was used to evaluate attention toward these cues. Eye fixation on smoking-related cues differed between smokers and nonsmokers, indicating that smokers focused more often on smoking-related cues than nonsmokers. Self-reports of craving are in agreement with these results and suggest a significant increase in craving after exposure to smoking cues. In sum, these data support the use of virtual environments for eliciting craving.

  9. Strategy selection in cue-based decision making.

    Science.gov (United States)

    Bryant, David J

    2014-06-01

    People can make use of a range of heuristic and rational, compensatory strategies to perform a multiple-cue judgment task. It has been proposed that people are sensitive to the amount of cognitive effort required to employ decision strategies. Experiment 1 employed a dual-task methodology to investigate whether participants' preference for heuristic versus compensatory decision strategies can be altered by increasing the cognitive demands of the task. As indicated by participants' decision times, a secondary task interfered more with the performance of a heuristic than compensatory decision strategy but did not affect the proportions of participants using either type of strategy. A stimulus set effect suggested that the conjunction of cue salience and cue validity might play a determining role in strategy selection. The results of Experiment 2 indicated that when a perceptually salient cue was also the most valid, the majority of participants preferred a single-cue heuristic strategy. Overall, the results contradict the view that heuristics are more likely to be adopted when a task is made more cognitively demanding. It is argued that people employ 2 learning processes during training, one an associative learning process in which cue-outcome associations are developed by sampling multiple cues, and another that involves the sequential examination of single cues to serve as a basis for a single-cue heuristic.
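    The two strategy families contrasted in the experiments differ in how many cues they consult. As a hedged sketch of that contrast (cue names, values, and validities are invented for illustration; this is not the study's task):

    ```python
    def take_the_best(a, b, cues_by_validity):
        """Single-cue heuristic: decide on the first (most valid) cue that
        discriminates between the two options; ignore all remaining cues."""
        for cue in cues_by_validity:
            if a[cue] != b[cue]:
                return "A" if a[cue] > b[cue] else "B"
        return None  # no cue discriminates

    def weighted_additive(a, b, validities):
        """Compensatory strategy: sum validity-weighted evidence over all cues,
        so several weaker cues can overturn a single strong one."""
        score_a = sum(v * a[c] for c, v in validities.items())
        score_b = sum(v * b[c] for c, v in validities.items())
        return "A" if score_a > score_b else "B" if score_b > score_a else None

    # A salient, highly valid cue favors A; two weaker cues jointly favor B.
    opt_a = {"salient": 1, "minor1": 0, "minor2": 0}
    opt_b = {"salient": 0, "minor1": 1, "minor2": 1}
    validities = {"salient": 0.9, "minor1": 0.6, "minor2": 0.6}
    ```

    Here `take_the_best` answers "A" from the salient cue alone, while `weighted_additive` answers "B" (0.9 vs. 1.2); cases like this are where the two strategy types come apart behaviorally.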

  10. Automatic Retrieval of Newly Instructed Cue-Task Associations Seen in Task-Conflict Effects in the First Trial after Cue-Task Instructions.

    Science.gov (United States)

    Meiran, Nachshon; Pereg, Maayan

    2017-01-01

    Novel stimulus-response associations are retrieved automatically even without prior practice. Is this true for novel cue-task associations? The experiment involved miniblocks comprising three phases and task switching. In the INSTRUCTION phase, two new stimuli (or familiar cues) were arbitrarily assigned as cues for up-down/right-left tasks performed on placeholder locations. In the UNIVALENT phase, there was no task cue since the placeholder's location afforded one task, but the placeholders were the stimuli that we assigned as task cues for the following BIVALENT phase (involving target locations affording both tasks). Thus, participants held the novel cue-task associations in memory while executing the UNIVALENT phase. Results show poorer performance in the first univalent trial when the placeholder was associated with the opposite task (incompatible) than when it was compatible, an effect that was numerically larger with newly instructed cues than with familiar cues. These results indicate automatic retrieval of newly instructed cue-task associations.

  11. Kin-informative recognition cues in ants

    DEFF Research Database (Denmark)

    Nehring, Volker; Evison, Sophie E F; Santorelli, Lorenzo A

    2011-01-01

    behaviour is thought to be rare in one of the classic examples of cooperation--social insect colonies--because the colony-level costs of individual selfishness select against cues that would allow workers to recognize their closest relatives. In accord with this, previous studies of wasps and ants have found little or no kin information in recognition cues. Here, we test the hypothesis that social insects do not have kin-informative recognition cues by investigating the recognition cues and relatedness of workers from four colonies of the ant Acromyrmex octospinosus. Contrary to the theoretical prediction, we show that the cuticular hydrocarbons of ant workers in all four colonies are informative enough to allow full-sisters to be distinguished from half-sisters with a high accuracy. These results contradict the hypothesis of non-heritable recognition cues and suggest that there is more potential...

  12. Multiscale Cues Drive Collective Cell Migration

    Science.gov (United States)

    Nam, Ki-Hwan; Kim, Peter; Wood, David K.; Kwon, Sunghoon; Provenzano, Paolo P.; Kim, Deok-Ho

    2016-07-01

    To investigate complex biophysical relationships driving directed cell migration, we developed a biomimetic platform that allows perturbation of microscale geometric constraints with concomitant nanoscale contact guidance architectures. This permits us to elucidate the influence, and parse out the relative contribution, of multiscale features, and define how these physical inputs are jointly processed with oncogenic signaling. We demonstrate that collective cell migration is profoundly enhanced by the addition of contact guidance cues when not otherwise constrained. However, while nanoscale cues promoted migration in all cases, microscale directed migration cues are dominant as the geometric constraint narrows, a behavior that is well explained by stochastic diffusion anisotropy modeling. Further, oncogene activation (i.e. mutant PIK3CA) resulted in profoundly increased migration where extracellular multiscale directed migration cues and intrinsic signaling synergistically conspire to greatly outperform normal cells or any extracellular guidance cues in isolation.
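    The stochastic diffusion anisotropy account can be illustrated with a toy random walk whose step variance is larger along the guidance axis than across it. This is a sketch of the concept only; the step sizes and counts are arbitrary, and it is not the authors' model code:

    ```python
    import random

    def anisotropic_walk(steps, sigma_parallel, sigma_perp, seed=0):
        """2-D Gaussian random walk with a larger step s.d. along the
        contact-guidance axis (x) than across it (y)."""
        rng = random.Random(seed)
        x = y = 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, sigma_parallel)
            y += rng.gauss(0.0, sigma_perp)
        return x, y

    def mean_squared_displacement(n_walks, steps, sigma_parallel, sigma_perp):
        """Average squared displacement per axis over many independent walks."""
        sx = sy = 0.0
        for i in range(n_walks):
            x, y = anisotropic_walk(steps, sigma_parallel, sigma_perp, seed=i)
            sx += x * x
            sy += y * y
        return sx / n_walks, sy / n_walks

    msd_x, msd_y = mean_squared_displacement(400, 100, 2.0, 0.5)
    ```

    The expected mean-squared displacement per axis is steps × sigma²: roughly 400 along the guided axis versus 25 across it in this run, i.e. migration is effectively channeled along the cue even though every individual step is random.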

  13. Multi-modal homing in sea turtles: modeling dual use of geomagnetic and chemical cues in island-finding

    Directory of Open Access Journals (Sweden)

    Courtney S Endres

    2016-02-01

    Full Text Available Sea turtles are capable of navigating across large expanses of ocean to arrive at remote islands for nesting, but how they do so has remained enigmatic. An interesting example involves green turtles (Chelonia mydas) that nest on Ascension Island, a tiny land mass located approximately 2000 km from the turtles' foraging grounds along the coast of Brazil. Sensory cues that turtles are known to detect, and which might hypothetically be used to help locate Ascension Island, include the geomagnetic field, airborne odorants, and waterborne odorants. One possibility is that turtles use magnetic cues to arrive in the vicinity of the island, then use chemical cues to pinpoint its location. As a first step toward investigating this hypothesis, we used oceanic, atmospheric, and geomagnetic models to assess whether magnetic and chemical cues might plausibly be used by turtles to locate Ascension Island. Results suggest that waterborne and airborne odorants alone are insufficient to guide turtles from Brazil to Ascension, but might permit localization of the island once turtles arrive in its vicinity. By contrast, magnetic cues might lead turtles into the vicinity of the island, but would not typically permit its localization because the field shifts gradually over time. Simulations reveal, however, that the sequential use of magnetic and chemical cues can potentially provide a robust navigational strategy for locating Ascension Island. Specifically, one strategy that appears viable is following a magnetic isoline into the vicinity of Ascension Island until an odor plume emanating from the island is encountered, after which turtles might either: (1) initiate a search strategy; or (2) follow the plume to its island source. These findings are consistent with the hypothesis that sea turtles, and perhaps other marine animals, use a multi-modal navigational strategy for locating remote islands.

  14. Multi-Modal Homing in Sea Turtles: Modeling Dual Use of Geomagnetic and Chemical Cues in Island-Finding.

    Science.gov (United States)

    Endres, Courtney S; Putman, Nathan F; Ernst, David A; Kurth, Jessica A; Lohmann, Catherine M F; Lohmann, Kenneth J

    2016-01-01

    Sea turtles are capable of navigating across large expanses of ocean to arrive at remote islands for nesting, but how they do so has remained enigmatic. An interesting example involves green turtles (Chelonia mydas) that nest on Ascension Island, a tiny land mass located approximately 2000 km from the turtles' foraging grounds along the coast of Brazil. Sensory cues that turtles are known to detect, and which might hypothetically be used to help locate Ascension Island, include the geomagnetic field, airborne odorants, and waterborne odorants. One possibility is that turtles use magnetic cues to arrive in the vicinity of the island, then use chemical cues to pinpoint its location. As a first step toward investigating this hypothesis, we used oceanic, atmospheric, and geomagnetic models to assess whether magnetic and chemical cues might plausibly be used by turtles to locate Ascension Island. Results suggest that waterborne and airborne odorants alone are insufficient to guide turtles from Brazil to Ascension, but might permit localization of the island once turtles arrive in its vicinity. By contrast, magnetic cues might lead turtles into the vicinity of the island, but would not typically permit its localization because the field shifts gradually over time. Simulations reveal, however, that the sequential use of magnetic and chemical cues can potentially provide a robust navigational strategy for locating Ascension Island. Specifically, one strategy that appears viable is following a magnetic isoline into the vicinity of Ascension Island until an odor plume emanating from the island is encountered, after which turtles might either: (1) initiate a search strategy; or (2) follow the plume to its island source. These findings are consistent with the hypothesis that sea turtles, and perhaps other marine animals, use a multi-modal navigational strategy for locating remote islands.
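    The sequential strategy the simulations favor reduces to a small state machine: hold a magnetic isoline until the plume is sensed, then switch to chemotaxis. A toy sketch of that logic follows (the geometry, distances, and plume model are all invented, and plume-following is idealized as climbing the gradient straight toward the source; this is not the authors' oceanographic simulation):

    ```python
    import math

    def navigate(start, island, plume_radius, step=1.0, max_steps=5000):
        """Two-phase homing: (1) 'magnetic' mode tracks an isoline, modeled here
        as holding the starting y while moving toward the island's longitude;
        (2) 'chemical' mode engages inside the plume and climbs the gradient."""
        x, y = start
        mode = "magnetic"
        for _ in range(max_steps):
            dist = math.hypot(island[0] - x, island[1] - y)
            if dist <= step:
                return "arrived", mode
            if mode == "magnetic" and dist <= plume_radius:
                mode = "chemical"  # odor detected: switch cue systems
            if mode == "magnetic":
                x += step if island[0] > x else -step
            else:
                x += step * (island[0] - x) / dist
                y += step * (island[1] - y) / dist
        return "lost", mode
    ```

    With a sufficiently wide plume the walker reaches the island in chemical mode; shrink the plume below the isoline's closest-approach distance and it shuttles past indefinitely, mirroring the paper's conclusion that neither cue suffices on its own.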

  15. Assessment of the health effects of low-frequency sounds and infra-sounds from wind farms. ANSES Opinion. Collective expertise report

    International Nuclear Information System (INIS)

    Lepoutre, Philippe; Avan, Paul; Cheveigne, Alain de; Ecotiere, David; Evrard, Anne-Sophie; Hours, Martine; Lelong, Joel; Moati, Frederique; Michaud, David; Toppila, Esko; Beugnet, Laurent; Bounouh, Alexandre; Feltin, Nicolas; Campo, Pierre; Dore, Jean-Francois; Ducimetiere, Pierre; Douki, Thierry; Flahaut, Emmanuel; Gaffet, Eric; Lafaye, Murielle; Martinsons, Christophe; Mouneyrac, Catherine; Ndagijimana, Fabien; Soyez, Alain; Yardin, Catherine; Cadene, Anthony; Merckel, Olivier; Niaudet, Aurelie; Cadene, Anthony; Saddoki, Sophia; Debuire, Brigitte; Genet, Roger

    2017-03-01

    a health effect has not been documented. In this context, ANSES recommends: Concerning studies and research: - verifying whether or not there is a possible mechanism modulating the perception of audible sound at intensities of infra-sound similar to those measured from local residents; - studying the effects of the amplitude modulation of the acoustic signal on the noise-related disturbance felt; - studying the assumption that cochlea-vestibular effects may be responsible for pathophysiological effects; - undertaking a survey of residents living near wind farms enabling the identification of an objective signature of a physiological effect. Concerning information for local residents and the monitoring of noise levels: - enhancing information for local residents during the construction of wind farms and participation in public inquiries undertaken in rural areas; - systematically measuring the noise emissions of wind turbines before and after they are brought into service; - setting up, especially in the event of controversy, continuous noise measurement systems around wind farms (based on experience at airports, for example). Lastly, the Agency reiterates that the current regulations state that the distance between a wind turbine and the first home should be evaluated on a case-by-case basis, taking the conditions of wind farms into account. This distance, of at least 500 metres, may be increased further to the results of an impact assessment, in order to comply with the limit values for noise exposure. Current knowledge of the potential health effects of exposure to infra-sounds and low-frequency noise provides no justification for changing the current limit values or for extending the spectrum of noise currently taken into consideration

  16. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  17. Retro-dimension-cue benefit in visual working memory.

    Science.gov (United States)

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-10-24

In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting a non-target, and in an increased precision with which the target is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis.

  18. Gaze Cueing by Pareidolia Faces

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2013-12-01

Full Text Available Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  19. Gaze cueing by pareidolia faces.

    Science.gov (United States)

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  20. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

46 CFR 7.20 (Shipping, edition of 2010-10-01): Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. Subpart: Atlantic Coast, § 7.20.

  1. Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

    Science.gov (United States)

    Winn, Matthew B; Won, Jong Ho; Moon, Il Joon

    This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of
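The quantification step described above, using logistic regression to assess perceptual sensitivity to an acoustic phonetic cue, can be sketched as fitting a psychometric function whose slope indexes cue sensitivity and whose midpoint gives the category boundary. The voice-onset-time steps and response proportions below are invented for illustration, and the plain gradient-ascent fit stands in for whatever fitting procedure the authors actually used.

```python
import math

def fit_logistic(x, y, lr=0.1, epochs=20000):
    """Fit p = 1 / (1 + exp(-(a + b*x))) by gradient ascent on the
    logistic log-likelihood. a is the intercept, b the slope; -a/b is
    the category boundary and b indexes sensitivity to the cue."""
    a = b = 0.0
    n = len(x)
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            grad_a += yi - p
            grad_b += (yi - p) * xi
        a += lr * grad_a / n
        b += lr * grad_b / n
    return a, b

# Hypothetical /b/-/p/ continuum: voice onset time (in tens of ms) and
# the proportion of /p/ responses at each step.
vot = [0, 1, 2, 3, 4, 5, 6]
p_resp = [0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98]
a, b = fit_logistic(vot, p_resp)
print("boundary:", -a / b, "slope:", b)
```

A shallower fitted slope for a cochlear implant listener than for a normal-hearing listener would correspond to the reduced success at utilizing the cue that the study reports.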

  2. SOUND-SPEED INVERSION OF THE SUN USING A NONLOCAL STATISTICAL CONVECTION THEORY

    International Nuclear Information System (INIS)

    Zhang Chunguang; Deng Licai; Xiong Darun; Christensen-Dalsgaard, Jørgen

    2012-01-01

    Helioseismic inversions reveal a major discrepancy in sound speed between the Sun and the standard solar model just below the base of the solar convection zone. We demonstrate that this discrepancy is caused by the inherent shortcomings of the local mixing-length theory adopted in the standard solar model. Using a self-consistent nonlocal convection theory, we construct an envelope model of the Sun for sound-speed inversion. Our solar model has a very smooth transition from the convective envelope to the radiative interior, and the convective energy flux changes sign crossing the boundaries of the convection zone. It shows evident improvement over the standard solar model, with a significant reduction in the discrepancy in sound speed between the Sun and local convection models.

  3. Effects of rodent species, seed species, and predator cues on seed fate

    Science.gov (United States)

    Sivy, Kelly J.; Ostoja, Steven M.; Schupp, Eugene W.; Durham, Susan

    2011-07-01

Seed selection, removal and subsequent management by granivorous animals is thought to be a complex interaction of factors including qualities of the seeds themselves (e.g., seed size, nutritional quality) and features of the local habitat (e.g., perceived predator risk). At the same time, differential seed selection and dispersal is thought to have profound effects on seed fate and potentially vegetation dynamics. In a feeding arena, we tested whether rodent species, seed species, and indirect and direct predation cues influence seed selection and handling behaviors (e.g., scatter hoarding versus larder hoarding) of two heteromyid rodents, Ord's kangaroo rat (Dipodomys ordii) and the Great Basin pocket mouse (Perognathus parvus). The indirect cue was shrub cover, a feature of the environment. Direct cues, presented individually, were (1) control, (2) coyote (Canis latrans) vocalization, (3) coyote scent, (4) red fox (Vulpes vulpes) scent, or (5) short-eared owl (Asio flammeus) vocalization. We offered seeds of three sizes: two native grasses, Indian ricegrass (Achnatherum hymenoides) and bluebunch wheatgrass (Pseudoroegneria spicata), and the non-native cereal rye (Secale cereale), each in separate trays. Kangaroo rats preferentially harvested Indian ricegrass while pocket mice predominantly harvested Indian ricegrass and cereal rye. Pocket mice were more likely to scatter hoard preferred seeds, whereas kangaroo rats mostly consumed and/or larder hoarded preferred seeds. No predator cue significantly affected seed preferences. However, both species altered seed handling behavior in response to direct predation cues by leaving more seeds available in the seed pool, though they responded to different predator cues. If these results translate to natural dynamics on the landscape, the two rodents are expected to have different impacts on seed survival and plant recruitment via their different seed selection and seed handling behaviors.

  4. Cue generation: How learners flexibly support future retrieval.

    Science.gov (United States)

    Tullis, Jonathan G; Benjamin, Aaron S

    2015-08-01

    The successful use of memory requires us to be sensitive to the cues that will be present during retrieval. In many situations, we have some control over the external cues that we will encounter. For instance, learners create shopping lists at home to help remember what items to later buy at the grocery store, and they generate computer file names to help remember the contents of those files. Generating cues in the service of later cognitive goals is a complex task that lies at the intersection of metacognition, communication, and memory. In this series of experiments, we investigated how and how well learners generate external mnemonic cues. Across 5 experiments, learners generated a cue for each target word in a to-be-remembered list and received these cues during a later cued recall test. Learners flexibly generated cues in response to different instructional demands and study list compositions. When generating mnemonic cues, as compared to descriptions of target items, learners produced cues that were more distinct than mere descriptions and consequently elicited greater cued recall performance than those descriptions. When learners were aware of competing targets in the study list, they generated mnemonic cues with smaller cue-to-target associative strength but that were even more distinct. These adaptations led to fewer confusions among competing targets and enhanced cued recall performance. These results provide another example of the metacognitively sophisticated tactics that learners use to effectively support future retrieval.

  5. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  6. Retrieval-induced forgetting and interference between cues: training a cue-outcome association attenuates retrieval by alternative cues.

    Science.gov (United States)

    Ortega-Castro, Nerea; Vadillo, Miguel A

    2013-03-01

    Some researchers have attempted to determine whether situations in which a single cue is paired with several outcomes (A-B, A-C interference or interference between outcomes) involve the same learning and retrieval mechanisms as situations in which several cues are paired with a single outcome (A-B, C-B interference or interference between cues). Interestingly, current research on a related effect, which is known as retrieval-induced forgetting, can illuminate this debate. Most retrieval-induced forgetting experiments are based on an experimental design that closely resembles the A-B, A-C interference paradigm. In the present experiment, we found that a similar effect may be observed when items are rearranged such that the general structure of the task more closely resembles the A-B, C-B interference paradigm. This result suggests that, as claimed by other researchers in the area of contingency learning, the two types of interference, namely A-B, A-C and A-B, C-B interference, may share some basic mechanisms. Moreover, the type of inhibitory processes assumed to underlie retrieval-induced forgetting may also play a role in these phenomena. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Effects of user training with electronically-modulated sound transmission hearing protectors and the open ear on horizontal localization ability.

    Science.gov (United States)

    Casali, John G; Robinette, Martin B

    2015-02-01

    To determine if training with electronically-modulated hearing protection (EMHP) and the open ear results in auditory learning on a horizontal localization task. Baseline localization testing was conducted in three listening conditions (open-ear, in-the-ear (ITE) EMHP, and over-the-ear (OTE) EMHP). Participants then wore either an ITE or OTE EMHP for 12, almost daily, one-hour training sessions. After training was complete, participants again underwent localization testing in all three listening conditions. A computer with a custom software and hardware interface presented localization sounds and collected participant responses. Twelve participants were recruited from the student population at Virginia Tech. Audiometric requirements were 35 dBHL at 500, 1000, and 2000 Hz bilaterally, and 55 dBHL at 4000 Hz in at least one ear. Pre-training localization performance with an ITE or OTE EMHP was worse than open-ear performance. After training with any given listening condition, including open-ear, performance in that listening condition improved, in part from a practice effect. However, post-training localization performance showed near equal performance between the open-ear and training EMHP. Auditory learning occurred for the training EMHP, but not for the non-training EMHP; that is, there was no significant training crossover effect between the ITE and the OTE devices. It is evident from this study that auditory learning (improved horizontal localization performance) occurred with the EMHP for which training was performed. However, performance improvements found with the training EMHP were not realized in the non-training EMHP. Furthermore, localization performance in the open-ear condition also benefitted from training on the task.

  8. Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory.

    Science.gov (United States)

    van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G

    2015-11-01

Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that those items that survived a first invalid cue automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion. (c) 2015 APA, all rights reserved.
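Capacity figures from change-detection tasks, such as the "increased VSTM capacity by 2 items" above, are conventionally derived from hit and false-alarm rates via Cowan's K. Whether this exact estimator was used in the study is an assumption, and the numbers below are illustrative, not the study's.

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K estimate of visual short-term memory capacity from
    single-probe change-detection data: K = N * (H - FA), where N is
    the set size, H the hit rate and FA the false-alarm rate."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers: with 8 items, a hit rate of .75 and a
# false-alarm rate of .25 give an estimated capacity of 4 items.
print(cowans_k(8, 0.75, 0.25))  # → 4.0
```

A valid retro-cue that raises the hit rate (or lowers the false-alarm rate) for the probed item therefore shows up directly as a higher K.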

  9. A comparison of ambient casino sound and music: effects on dissociation and on perceptions of elapsed time while playing slot machines.

    Science.gov (United States)

    Noseworthy, Theodore J; Finlay, Karen

    2009-09-01

    This research examined the effects of a casino's auditory character on estimates of elapsed time while gambling. More specifically, this study varied whether the sound heard while gambling was ambient casino sound alone or ambient casino sound accompanied by music. The tempo and volume of both the music and ambient sound were varied to manipulate temporal engagement and introspection. One hundred and sixty (males = 91) individuals played slot machines in groups of 5-8, after which they provided estimates of elapsed time. The findings showed that the typical ambient casino auditive environment, which characterizes the majority of gaming venues, promotes understated estimates of elapsed duration of play. In contrast, when music is introduced into the ambient casino environment, it appears to provide a cue of interval from which players can more accurately reconstruct elapsed duration of play. This is particularly the case when the tempo of the music is slow and the volume is high. Moreover, the confidence with which time estimates are held (as reflected by latency of response) is higher in an auditive environment with music than in an environment that is comprised of ambient casino sounds alone. Implications for casino management are discussed.

  10. Hearing visuo-tactile synchrony - Sound-induced proprioceptive drift in the invisible hand illusion.

    Science.gov (United States)

    Darnai, Gergely; Szolcsányi, Tibor; Hegedüs, Gábor; Kincses, Péter; Kállai, János; Kovács, Márton; Simon, Eszter; Nagy, Zsófia; Janszky, József

    2017-02-01

The rubber hand illusion (RHI) and its variant the invisible hand illusion (IHI) are useful for investigating multisensory aspects of bodily self-consciousness. Here, we explored whether auditory conditioning during an RHI could enhance the trisensory visuo-tactile-proprioceptive interaction underlying the IHI. Our paradigm consisted of an IHI session that was followed by an RHI session and another IHI session. The IHI sessions had two parts presented in counterbalanced order. One part was conducted in silence, whereas the other part was conducted on the backdrop of metronome beats that occurred in synchrony with the brush movements used for the induction of the illusion. In a first experiment, the RHI session also involved metronome beats and was aimed at creating an associative memory between the brush stroking of a rubber hand and the sounds. An analysis of IHI sessions showed that the participants' perceived hand position drifted more towards the body-midline in the metronome relative to the silent condition without any sound-related session differences. Thus, the sounds, but not the auditory RHI conditioning, influenced the IHI. In a second experiment, the RHI session was conducted without metronome beats. This confirmed the conditioning-independent presence of sound-induced proprioceptive drift in the IHI. Together, these findings show that the influence of visuo-tactile integration on proprioceptive updating is modifiable by irrelevant auditory cues merely through the temporal correspondence between the visuo-tactile and auditory events. © 2016 The British Psychological Society.

  11. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    Science.gov (United States)

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances even with small-numbered cells.
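The Bragg portion of the band-stop behaviour described above can be anchored with a back-of-envelope estimate: the first Bragg gap for normal incidence is centred where half a wavelength fits one lattice period. The 5 cm period and air sound speed below are illustrative values, not taken from the paper.

```python
def bragg_center_frequency(lattice_constant_m, sound_speed_m_s=343.0):
    """First Bragg band-gap center frequency for normal incidence:
    reflections from successive unit cells add in phase when half a
    wavelength matches the lattice period, i.e. f = c / (2a)."""
    return sound_speed_m_s / (2.0 * lattice_constant_m)

# A 5 cm lattice period in air puts the first Bragg gap near 3.4 kHz.
print(bragg_center_frequency(0.05))
```

Varying the inclusion size shifts and widens the gaps around such Bragg frequencies, which is why arraying dissimilar unit cells, as in the paper, broadens the overall blocked band.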

  12. Initial uncertainty in Pavlovian reward prediction persistently elevates incentive salience and extends sign-tracking to normally unattractive cues.

    Science.gov (United States)

    Robinson, Mike J F; Anselme, Patrick; Fischer, Adam M; Berridge, Kent C

    2014-06-01

    Uncertainty is a component of many gambling games and may play a role in incentive motivation and cue attraction. Uncertainty can increase the attractiveness for predictors of reward in the Pavlovian procedure of autoshaping, visible as enhanced sign-tracking (or approach and nibbles) by rats of a metal lever whose sudden appearance acts as a conditioned stimulus (CS+) to predict sucrose pellets as an unconditioned stimulus (UCS). Here we examined how reward uncertainty might enhance incentive salience as sign-tracking both in intensity and by broadening the range of attractive CS+s. We also examined whether initially induced uncertainty enhancements of CS+ attraction can endure beyond uncertainty itself, and persist even when Pavlovian prediction becomes 100% certain. Our results show that uncertainty can broaden incentive salience attribution to make CS cues attractive that would otherwise not be (either because they are too distal from reward or too risky to normally attract sign-tracking). In addition, uncertainty enhancement of CS+ incentive salience, once induced by initial exposure, persisted even when Pavlovian CS-UCS correlations later rose toward 100% certainty in prediction. Persistence suggests an enduring incentive motivation enhancement potentially relevant to gambling, which in some ways resembles incentive-sensitization. Higher motivation to uncertain CS+s leads to more potent attraction to these cues when they predict the delivery of uncertain rewards. In humans, those cues might possibly include the sights and sounds associated with gambling, which contribute a major component of the play immersion experienced by problematic gamblers. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Science.gov (United States)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.
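The remote-reference idea the study relies on can be sketched for a scalar impedance: cross-spectra with a magnetic channel recorded at a distant reference site suppress local noise, because that noise is uncorrelated with the remote field. The real analysis estimates a full 2x2 impedance tensor; the synthetic single-channel example below is only meant to show why the uncorrelated-noise assumption does the work.

```python
import cmath
import random

def remote_reference_estimate(E, H_local, H_remote):
    """Scalar caricature of remote-reference impedance estimation:
        Z_rr = <E Hr*> / <H Hr*>.
    Cross-spectra with the remote magnetic channel Hr suppress local
    magnetic noise, which is uncorrelated with the distant site."""
    num = sum(e * hr.conjugate() for e, hr in zip(E, H_remote))
    den = sum(h * hr.conjugate() for h, hr in zip(H_local, H_remote))
    return num / den

random.seed(1)
Z_true = complex(2.0, 1.0)
# Source field, coherent between the local and the remote site:
H_remote = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(5000)]
E = [Z_true * h for h in H_remote]
# Local magnetic recording contaminated by uncorrelated noise:
H_noisy = [h + 0.3 * complex(random.gauss(0, 1), random.gauss(0, 1))
           for h in H_remote]
print(remote_reference_estimate(E, H_noisy, H_remote))  # close to 2+1j
```

Using the noisy local channel as its own reference would bias the impedance magnitude downward, since the noise correlates perfectly with itself; the remote channel removes that bias in expectation, which is why quasi-globally distant references remain usable in the pulsation range.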

  14. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Directory of Open Access Journals (Sweden)

    A. Neska

    2018-03-01

Full Text Available This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that and explained why in spite of this, the application of remote reference stations in quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  15. The cue is key : design for real-life remembering

    NARCIS (Netherlands)

    Hoven, van den E.A.W.H.; Eggen, J.H.

    2014-01-01

    This paper aims to put the memory cue in the spotlight. We will show how memory cues are incorporated in the area of interaction design. The focus will be on external memory cues: cues that exist outside the human mind but have an internal effect on memory reconstruction. Examples of external cues

  16. Cue reactivity in virtual reality: the role of context.

    Science.gov (United States)

    Paris, Megan M; Carter, Brian L; Traylor, Amy C; Bordnick, Patrick S; Day, Susan X; Armsworth, Mary W; Cinciripini, Paul M

    2011-07-01

    Cigarette smokers in laboratory experiments readily respond to smoking stimuli with increased craving. An alternative to traditional cue-reactivity methods (e.g., exposure to cigarette photos), virtual reality (VR) has been shown to be a viable cue presentation method to elicit and assess cigarette craving within complex virtual environments. However, it remains poorly understood whether contextual cues from the environment contribute to craving increases in addition to specific cues, like cigarettes. This study examined the role of contextual cues in a VR environment to evoke craving. Smokers were exposed to a virtual convenience store devoid of any specific cigarette cues followed by exposure to the same convenience store with specific cigarette cues added. Smokers reported increased craving following exposure to the virtual convenience store without specific cues, and significantly greater craving following the convenience store with cigarette cues added. However, increased craving recorded after the second convenience store may have been due to the pre-exposure to the first convenience store. This study offers evidence that an environmental context where cigarette cues are normally present (but are not), elicits significant craving in the absence of specific cigarette cues. This finding suggests that VR may have stronger ecological validity over traditional cue reactivity exposure methods by exposing smokers to the full range of cigarette-related environmental stimuli, in addition to specific cigarette cues, that smokers typically experience in their daily lives. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Common cues to emotion in the dynamic facial expressions of speech and song.

    Science.gov (United States)

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotion judgements for voice-only singing were poorly identified, yet were accurate for all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.

  18. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech

    Directory of Open Access Journals (Sweden)

    Matthew ePoon

    2015-11-01

Full Text Available Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound happier than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here we describe a novel, score-based exploration of the use of pitch height and timing in a set of balanced major- and minor-key compositions. Our corpus contained all 24 Preludes and 24 Fugues from Bach's Well-Tempered Clavier (book 1), as well as all 24 of Chopin's Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma (A, B, C, etc.). Consistent with predictions derived from speech, we found major-key (nominally happy) pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally sad) pieces. This demonstrates that our balanced corpus of major- and minor-key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post-hoc analyses illustrate interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting the least pitch distinction, and vice versa.

  19. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
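The cross-spectral intensity measurement mentioned above has a standard closed form in the two-microphone (p-p) technique: the active intensity follows from the imaginary part of the cross-spectral density between the microphone pair. A minimal sketch, assuming scipy's `csd` convention (`G12 = conj(P1)*P2`); the function name, `nperseg`, and default air density are illustrative choices, not details from the paper:

```python
import numpy as np
from scipy.signal import csd

def intensity_spectrum(p1, p2, fs, spacing, rho0=1.21):
    """Active sound intensity spectrum from a two-microphone (p-p) probe:
    I(f) = -Im{G12(f)} / (rho0 * 2*pi*f * spacing),
    positive when energy flows from microphone 1 toward microphone 2
    (with scipy's csd convention G12 = conj(P1) * P2)."""
    f, G12 = csd(p1, p2, fs=fs, nperseg=2048)
    f, G12 = f[1:], G12[1:]  # drop the DC bin to avoid division by zero
    return f, -np.imag(G12) / (rho0 * 2 * np.pi * f * spacing)
```

For a plane wave simulated as a pure delay between the two channels, the estimated intensity at the tone frequency comes out positive in the propagation direction and flips sign when the channels are swapped.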

  20. Cues of maternal condition influence offspring selfishness.

    Science.gov (United States)

    Wong, Janine W Y; Lucas, Christophe; Kölliker, Mathias

    2014-01-01

    The evolution of parent-offspring communication was mostly studied from the perspective of parents responding to begging signals conveying information about offspring condition. Parents should respond to begging because of the differential fitness returns obtained from their investment in offspring that differ in condition. For analogous reasons, offspring should adjust their behavior to cues/signals of parental condition: parents that differ in condition pay differential costs of care and, hence, should provide different amounts of food. In this study, we experimentally tested in the European earwig (Forficula auricularia) if cues of maternal condition affect offspring behavior in terms of sibling cannibalism. We experimentally manipulated female condition by providing them with different amounts of food, kept nymph condition constant, allowed for nymph exposure to chemical maternal cues over extended time, quantified nymph survival (deaths being due to cannibalism) and extracted and analyzed the females' cuticular hydrocarbons (CHC). Nymph survival was significantly affected by chemical cues of maternal condition, and this effect depended on the timing of breeding. Cues of poor maternal condition enhanced nymph survival in early broods, but reduced nymph survival in late broods, and vice versa for cues of good condition. Furthermore, female condition affected the quantitative composition of their CHC profile which in turn predicted nymph survival patterns. Thus, earwig offspring are sensitive to chemical cues of maternal condition and nymphs from early and late broods show opposite reactions to the same chemical cues. Together with former evidence on maternal sensitivities to condition-dependent nymph chemical cues, our study shows context-dependent reciprocal information exchange about condition between earwig mothers and their offspring, potentially mediated by cuticular hydrocarbons.

  1. Cues of maternal condition influence offspring selfishness.

    Directory of Open Access Journals (Sweden)

    Janine W Y Wong

Full Text Available The evolution of parent-offspring communication was mostly studied from the perspective of parents responding to begging signals conveying information about offspring condition. Parents should respond to begging because of the differential fitness returns obtained from their investment in offspring that differ in condition. For analogous reasons, offspring should adjust their behavior to cues/signals of parental condition: parents that differ in condition pay differential costs of care and, hence, should provide different amounts of food. In this study, we experimentally tested in the European earwig (Forficula auricularia) if cues of maternal condition affect offspring behavior in terms of sibling cannibalism. We experimentally manipulated female condition by providing them with different amounts of food, kept nymph condition constant, allowed for nymph exposure to chemical maternal cues over extended time, quantified nymph survival (deaths being due to cannibalism) and extracted and analyzed the females' cuticular hydrocarbons (CHC). Nymph survival was significantly affected by chemical cues of maternal condition, and this effect depended on the timing of breeding. Cues of poor maternal condition enhanced nymph survival in early broods, but reduced nymph survival in late broods, and vice versa for cues of good condition. Furthermore, female condition affected the quantitative composition of their CHC profile which in turn predicted nymph survival patterns. Thus, earwig offspring are sensitive to chemical cues of maternal condition and nymphs from early and late broods show opposite reactions to the same chemical cues. Together with former evidence on maternal sensitivities to condition-dependent nymph chemical cues, our study shows context-dependent reciprocal information exchange about condition between earwig mothers and their offspring, potentially mediated by cuticular hydrocarbons.

  2. Sound speeds, cracking and the stability of self-gravitating anisotropic compact objects

    International Nuclear Information System (INIS)

    Abreu, H; Hernandez, H; Nunez, L A

    2007-01-01

Using the concept of cracking, we explore the influence that density fluctuations and local anisotropy have on the stability of local and non-local anisotropic matter configurations in general relativity. This concept, conceived to describe the behavior of a fluid distribution just after its departure from equilibrium, provides an alternative approach to considering the stability of self-gravitating compact objects. We show that potentially unstable regions within a configuration can be identified as a function of the difference between the propagation speeds of sound in the tangential and radial directions. In fact, it is found that these regions could occur when, at a particular point within the distribution, the tangential speed of sound is greater than the radial one.
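The criterion just described (potential instability where the tangential sound speed exceeds the radial one) is easy to evaluate on tabulated radial profiles. A hedged sketch: the profile names and the finite-difference estimate of v² = dP/dρ are assumptions of this illustration, not the authors' formalism:

```python
import numpy as np

def potentially_unstable(rho, p_radial, p_tangential):
    """Flag zones where v_t^2 - v_r^2 > 0, with each sound speed estimated
    as v^2 = dP/drho via finite differences on the sampled profiles."""
    v2_r = np.gradient(p_radial, rho)
    v2_t = np.gradient(p_tangential, rho)
    return (v2_t - v2_r) > 0
```

For linear toy equations of state the flags are uniform across the configuration; realistic profiles would flag only part of the radial grid.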

  3. Action experience changes attention to kinematic cues

    Directory of Open Access Journals (Sweden)

    Courtney eFilippi

    2016-02-01

Full Text Available The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos where the actor's hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor's hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) x 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues—suggesting that short-term experience changes attention to kinematics.

  4. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

Full Text Available Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, we designed an auditory memory-transfer study combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured, in seven experiments, the memory transfer from a random white noise sound to its locally temporally reversed version at various temporal scales. We demonstrate a U-shaped memory-transfer pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured as discrete temporal chunks in long-term auditory memory representation.

  5. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, we designed an auditory memory-transfer study combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured, in seven experiments, the memory transfer from a random white noise sound to its locally temporally reversed version at various temporal scales. We demonstrate a U-shaped memory-transfer pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured as discrete temporal chunks in long-term auditory memory representation.
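The "locally temporally reversed" manipulation described above can be sketched in a few lines: the signal is flipped inside consecutive fixed-length chunks while the order of the chunks is preserved. The handling of a leftover tail (left unreversed) is an assumption of this sketch, not a detail from the paper:

```python
import numpy as np

def local_reverse(signal, fs, chunk_ms):
    """Reverse the samples inside consecutive chunks of `chunk_ms` milliseconds,
    keeping the order of the chunks themselves; any leftover tail shorter than
    one chunk is left untouched."""
    n = int(round(fs * chunk_ms / 1000.0))
    out = np.asarray(signal).copy()
    for start in range(0, len(out) - n + 1, n):
        out[start:start + n] = out[start:start + n][::-1]
    return out
```

Applying the manipulation twice at the same scale restores the original signal, and a chunk as long as the whole sound reduces to global time reversal.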

  6. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time domain solution of the KZK equation.

  7. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated in writing the papers by the jet noise emitted by the newly commercialized jet-engined airplanes at that time. The technology of aerodynamic sound is ultimately directed at environmental problems. Therefore the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise.
The paper by T Sueki et al also reports new technology for the

  8. Phase of Spontaneous Slow Oscillations during Sleep Influences Memory-Related Processing of Auditory Cues.

    Science.gov (United States)

    Batterink, Laura J; Creery, Jessica D; Paller, Ken A

    2016-01-27

Slow oscillations during slow-wave sleep (SWS) may facilitate memory consolidation by regulating interactions between hippocampal and cortical networks. Slow oscillations appear as high-amplitude, synchronized EEG activity, corresponding to upstates of neuronal depolarization and downstates of hyperpolarization. Memory reactivations occur spontaneously during SWS, and can also be induced by presenting learning-related cues associated with a prior learning episode during sleep. This technique, targeted memory reactivation (TMR), selectively enhances memory consolidation. Given that memory reactivation is thought to occur preferentially during the slow-oscillation upstate, we hypothesized that TMR stimulation effects would depend on the phase of the slow oscillation. Participants learned arbitrary spatial locations for objects that were each paired with a characteristic sound (e.g., cat-meow). Then, during SWS periods of an afternoon nap, one-half of the sounds were presented at low intensity. When object location memory was subsequently tested, recall accuracy was significantly better for those objects cued during sleep. We report here for the first time that this memory benefit was predicted by slow-wave phase at the time of stimulation. For cued objects, location memories were categorized according to amount of forgetting from pre- to post-nap. Conditions of high versus low forgetting corresponded to stimulation timing at different slow-oscillation phases, suggesting that learning-related stimuli were more likely to be processed and trigger memory reactivation when they occurred at the optimal phase of a slow oscillation. These findings provide insight into mechanisms of memory reactivation during sleep, supporting the idea that reactivation is most likely during cortical upstates. Slow-wave sleep (SWS) is characterized by synchronized neural activity alternating between active upstates and quiet downstates. The slow-oscillation upstates are thought to provide a
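Relating stimulation timing to slow-oscillation phase, as described above, is typically done by band-pass filtering the EEG in the slow-oscillation band and taking the Hilbert analytic phase. A generic sketch of that kind of analysis; the band edges, filter order, and function name are assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def so_phase_at(eeg, fs, event_samples, band=(0.5, 1.25)):
    """Slow-oscillation phase (radians) at given sample indices: zero-phase
    band-pass filter in the SO band, then the Hilbert analytic phase.
    With this convention, 0 rad falls at the positive peak of the
    filtered signal."""
    sos = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)],
                 btype="band", output="sos")
    phase = np.angle(hilbert(sosfiltfilt(sos, eeg)))
    return phase[np.asarray(event_samples)]
```

On a synthetic 1 Hz oscillation, a sample taken at a positive peak returns a phase near zero, which is the sanity check usually run before applying such a function to real EEG.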

  9. Hunger, taste, and normative cues in predictions about food intake.

    Science.gov (United States)

    Vartanian, Lenny R; Reily, Natalie M; Spanos, Samantha; McGuirk, Lucy C; Herman, C Peter; Polivy, Janet

    2017-09-01

    Normative eating cues (portion size, social factors) have a powerful impact on people's food intake, but people often fail to acknowledge the influence of these cues, instead explaining their food intake in terms of internal (hunger) or sensory (taste) cues. This study examined whether the same biases apply when making predictions about how much food a person would eat. Participants (n = 364) read a series of vignettes describing an eating scenario and predicted how much food the target person would eat in each situation. Some scenarios consisted of a single eating cue (hunger, taste, or a normative cue) that would be expected to increase intake (e.g., high hunger) or decrease intake (e.g., a companion who eats very little). Other scenarios combined two cues that were in conflict with one another (e.g., high hunger + a companion who eats very little). In the cue-conflict scenarios involving an inhibitory internal/sensory cue (e.g., low hunger) with an augmenting normative cue (e.g., a companion who eats a lot), participants predicted a low level of food intake, suggesting a bias toward the internal/sensory cue. For scenarios involving an augmenting internal/sensory cue (e.g., high hunger) and an inhibitory normative cue (e.g., a companion who eats very little), participants predicted an intermediate level of food intake, suggesting that they were influenced by both the internal/sensory and normative cue. Overall, predictions about food intake tend to reflect a general bias toward internal/sensory cues, but also include normative cues when those cues are inhibitory. If people are systematically biased toward internal, sensory, and inhibitory cues, then they may underestimate how much food they or other people will eat in many situations, particularly when normative cues promoting eating are present. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Context cue focality influences strategic prospective memory monitoring.

    Science.gov (United States)

    Hunter Ball, B; Bugg, Julie M

    2018-02-12

    Monitoring the environment for the occurrence of prospective memory (PM) targets is a resource-demanding process that produces cost (e.g., slower responding) to ongoing activities. However, research suggests that individuals are able to monitor strategically by using contextual cues to reduce monitoring in contexts in which PM targets are not expected to occur. In the current study, we investigated the processes supporting context identification (i.e., determining whether or not the context is appropriate for monitoring) by testing the context cue focality hypothesis. This hypothesis predicts that the ability to monitor strategically depends on whether the ongoing task orients attention to the contextual cues that are available to guide monitoring. In Experiment 1, participants performed an ongoing lexical decision task and were told that PM targets (TOR syllable) would only occur in word trials (focal context cue condition) or in items starting with consonants (nonfocal context cue condition). In Experiment 2, participants performed an ongoing first letter judgment (consonant/vowel) task and were told that PM targets would only occur in items starting with consonants (focal context cue condition) or in word trials (nonfocal context cue condition). Consistent with the context cue focality hypothesis, strategic monitoring was only observed during focal context cue conditions in which the type of ongoing task processing automatically oriented attention to the relevant features of the contextual cue. These findings suggest that strategic monitoring is dependent on limited-capacity processing resources and may be relatively limited when the attentional demands of context identification are sufficiently high.

  11. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.

  12. Spontaneous Hedonic Reactions to Social Media Cues.

    Science.gov (United States)

    van Koningsbruggen, Guido M; Hartmann, Tilo; Eden, Allison; Veling, Harm

    2017-05-01

    Why is it so difficult to resist the desire to use social media? One possibility is that frequent social media users possess strong and spontaneous hedonic reactions to social media cues, which, in turn, makes it difficult to resist social media temptations. In two studies (total N = 200), we investigated less-frequent and frequent social media users' spontaneous hedonic reactions to social media cues using the Affect Misattribution Procedure-an implicit measure of affective reactions. Results demonstrated that frequent social media users showed more favorable affective reactions in response to social media (vs. control) cues, whereas less-frequent social media users' affective reactions did not differ between social media and control cues (Studies 1 and 2). Moreover, the spontaneous hedonic reactions to social media (vs. control) cues were related to self-reported cravings to use social media and partially accounted for the link between social media use and social media cravings (Study 2). These findings suggest that frequent social media users' spontaneous hedonic reactions in response to social media cues might contribute to their difficulties in resisting desires to use social media.

  13. 76 FR 39292 - Special Local Regulations & Safety Zones; Marine Events in Captain of the Port Long Island Sound...

    Science.gov (United States)

    2011-07-06

    ... Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast... and fireworks displays within the Captain of the Port (COTP) Long Island Sound Zone. This action is... Island Sound. DATES: This rule is effective in the CFR on July 6, 2011 through 6 p.m. on October 2, 2011...

  14. What Does a Cue Do? Comparing Phonological and Semantic Cues for Picture Naming in Aphasia

    Science.gov (United States)

    Meteyard, Lotte; Bose, Arpita

    2018-01-01

    Purpose: Impaired naming is one of the most common symptoms in aphasia, often treated with cued picture naming paradigms. It has been argued that semantic cues facilitate the reliable categorization of the picture, and phonological cues facilitate the retrieval of target phonology. To test these hypotheses, we compared the effectiveness of…

  15. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    Science.gov (United States)

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3%, respectively. The median difference between annotated and detected events was 33.9 ms.
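A much-simplified envelope-based sketch of the S1/S2 candidate detection task described above (the paper uses wavelet analysis plus heart-cycle timing knowledge; the band edges, smoothing window, and threshold here are invented for illustration):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, find_peaks

def detect_heart_sounds(pcg, fs, min_gap_s=0.15):
    """Candidate S1/S2 detector: band-pass the phonocardiogram to the main
    heart-sound band, smooth the Hilbert envelope (~20 ms window), and pick
    prominent peaks separated by at least `min_gap_s` seconds.
    Returns candidate sample indices."""
    sos = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype="band", output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, pcg)))
    w = max(1, int(0.02 * fs))              # ~20 ms moving average
    env = np.convolve(env, np.ones(w) / w, mode="same")
    peaks, _ = find_peaks(env, height=0.3 * env.max(),
                          distance=int(min_gap_s * fs))
    return peaks
```

On a synthetic recording with four tone bursts over background noise, the detector recovers one candidate per burst; classifying candidates as S1 versus S2 would additionally require the systolic/diastolic timing model the abstract mentions.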

  16. Variation in habitat soundscape characteristics influences settlement of a reef-building coral.

    Science.gov (United States)

    Lillis, Ashlee; Bohnenstiehl, DelWayne; Peters, Jason W; Eggleston, David

    2016-01-01

Coral populations, and the productive reef ecosystems they support, rely on successful recruitment of reef-building species, beginning with settlement of dispersing larvae into habitat favourable to survival. Many substrate cues have been identified as contributors to coral larval habitat selection; however, the potential for ambient acoustic cues to influence coral settlement responses is unknown. Using in situ settlement chambers that excluded other habitat cues, larval settlement of a dominant Caribbean reef-building coral, Orbicella faveolata, was compared in response to three local soundscapes, with differing acoustic and habitat properties. Differences between reef sites in the number of larvae settled in chambers isolating acoustic cues corresponded to differences in sound levels and reef characteristics, with sounds at the loudest reef generating significantly higher settlement during trials compared to the quietest site (a 29.5% increase). These results suggest that soundscapes could be an important influence on coral settlement patterns and that acoustic cues associated with reef habitat may be related to larval settlement. This study reports an effect of soundscape variation on larval settlement for a key coral species, and adds to the growing evidence that soundscapes affect marine ecosystems by influencing early life history processes of foundational species.

  17. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech.

    Science.gov (United States)

    Poon, Matthew; Schutz, Michael

    2015-01-01

    Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound "happier" than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here, we describe a novel, score-based exploration of the use of pitch height and timing in a set of "balanced" major and minor key compositions. Our analysis included all 24 Preludes and 24 Fugues from Bach's Well-Tempered Clavier (book 1), as well as all 24 of Chopin's Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma ("A," "B," "C," etc.). Consistent with predictions derived from speech, we found major-key (nominally "happy") pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally "sad") pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post hoc analyses illustrate interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting the least pitch distinction, and vice-versa. We discuss these findings in the broader context of speech-music research, as well as recent scholarship exploring the historical evolution of cue use in Western music.
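The corpus comparison described above (pitch-height difference in semitones, percentage tempo difference between modalities) reduces to simple summary statistics over per-piece measurements. A toy sketch with invented numbers, not the Bach/Chopin data; the field names are assumptions of this illustration:

```python
import numpy as np

# Invented example rows: each piece contributes a nominal mode, a mean pitch
# height (MIDI note number, i.e. semitones), and an event rate (attacks/s).
pieces = [
    {"mode": "major", "mean_midi": 67.0, "attacks_per_s": 5.2},
    {"mode": "major", "mean_midi": 66.0, "attacks_per_s": 4.8},
    {"mode": "minor", "mean_midi": 64.5, "attacks_per_s": 3.9},
    {"mode": "minor", "mean_midi": 64.0, "attacks_per_s": 3.7},
]

def cue_differences(pieces):
    """Return (major-minus-minor pitch difference in semitones,
    percent by which major-key pieces are faster than minor-key pieces)."""
    maj = [p for p in pieces if p["mode"] == "major"]
    mnr = [p for p in pieces if p["mode"] == "minor"]
    d_pitch = (np.mean([p["mean_midi"] for p in maj])
               - np.mean([p["mean_midi"] for p in mnr]))
    r_maj = np.mean([p["attacks_per_s"] for p in maj])
    r_mnr = np.mean([p["attacks_per_s"] for p in mnr])
    return d_pitch, 100.0 * (r_maj - r_mnr) / r_mnr
```

With these made-up rows the major-key pieces come out 2.25 semitones higher and about 32% faster; the paper's reported values (two semitones, 29%) are the analogous statistics computed from score data.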

  18. On the motivational properties of reward cues: Individual differences.

    Science.gov (United States)

    Robinson, Terry E; Yager, Lindsay M; Cogan, Elizabeth S; Saunders, Benjamin T

    2014-01-01

    Cues associated with rewards, such as food or drugs of abuse, can themselves acquire motivational properties. Acting as incentive stimuli, such cues can exert powerful control over motivated behavior, and in the case of cues associated with drugs, they can goad continued drug-seeking behavior and relapse. However, recent studies reviewed here suggest that there are large individual differences in the extent to which food and drug cues are attributed with incentive salience. Rats prone to approach reward cues (sign-trackers) attribute greater motivational value to discrete localizable cues and interoceptive cues than do rats less prone to approach reward cues (goal-trackers). In contrast, contextual cues appear to exert greater control over motivated behavior in goal-trackers than sign-trackers. It is possible to predict, therefore, before any experience with drugs, in which animals specific classes of drug cues will most likely reinstate drug-seeking behavior. The finding that different individuals may be sensitive to different triggers capable of motivating behavior and producing relapse suggests there may be different pathways to addiction, and has implications for thinking about individualized treatment. This article is part of a Special Issue entitled 'NIDA 40th Anniversary Issue'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Default mode network deactivation to smoking cue relative to food cue predicts treatment outcome in nicotine use disorder.

    Science.gov (United States)

    Wilcox, Claire E; Claus, Eric D; Calhoun, Vince D; Rachakonda, Srinivas; Littlewood, Rae A; Mickey, Jessica; Arenella, Pamela B; Goodreau, Natalie; Hutchison, Kent E

    2018-01-01

    Identifying predictors of treatment outcome for nicotine use disorders (NUDs) may help improve efficacy of established treatments, like varenicline. Brain reactivity to drug stimuli predicts relapse risk in nicotine and other substance use disorders in some studies. Activity in the default mode network (DMN) is affected by drug cues and other palatable cues, but its clinical significance is unclear. In this study, 143 individuals with NUD (male n = 91, ages 18-55 years) received a functional magnetic resonance imaging scan during a visual cue task in which they were presented with a series of smoking-related or food-related video clips prior to randomization to treatment with varenicline (n = 80) or placebo. Group independent components analysis was utilized to isolate the DMN, and temporal sorting was used to calculate the difference between the DMN blood-oxygen-level dependent signal during smoke cues and that during food cues for each individual. Food cues were associated with greater deactivation compared with smoke cues in the DMN. After correcting for baseline smoking and other clinical variables, which have been shown to be related to treatment outcome in previous work, a less positive Smoke - Food difference score predicted greater smoking at 6 and 12 weeks when both treatment groups were combined (P = 0.005, β = -0.766). An exploratory analysis of executive control and salience networks demonstrated that a more positive Smoke - Food difference score for the executive control network predicted a more robust response to varenicline relative to placebo. These findings provide further support to theories that brain reactivity to palatable cues, and in particular in DMN, may have a direct clinical relevance in NUD. © 2017 Society for the Study of Addiction.
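The study's predictor, the Smoke - Food difference score, is a per-subject subtraction of cue-evoked DMN signal, which is then related to the smoking outcome. A minimal sketch with simulated data (subject counts, betas, and effect sizes are invented, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject data: mean DMN BOLD beta during smoke cues
# and during food cues. Negative values mean deactivation; food cues
# are simulated as producing deeper deactivation, as reported.
n = 40
smoke_beta = rng.normal(-0.2, 0.5, n)
food_beta = rng.normal(-0.6, 0.5, n)

# The predictor: how much less the DMN deactivates to smoke cues
# than to food cues.
diff_score = smoke_beta - food_beta

# Simulate an outcome (e.g., cigarettes/day at follow-up) with a
# built-in negative association, then recover it by least squares.
outcome = 10 - 3 * diff_score + rng.normal(0, 1, n)
slope, intercept = np.polyfit(diff_score, outcome, 1)
```

The recovered slope is negative, echoing the direction of the reported effect (a less positive difference score predicting worse outcome is the same association with the sign conventions flipped).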

  20. Facilitation of voluntary goal-directed action by reward cues.

    Science.gov (United States)

    Lovibond, Peter F; Colagiuri, Ben

    2013-10-01

    Reward-associated cues are known to influence motivation to approach both natural and man-made rewards, such as food and drugs. However, the mechanisms underlying these effects are not well understood. To model these processes in the laboratory with humans, we developed an appetitive Pavlovian-instrumental transfer procedure with a chocolate reward. We used a single unconstrained response that led to an actual rather than symbolic reward to assess the strength of reward motivation. Presentation of a chocolate-paired cue, but not an unpaired cue, markedly enhanced instrumental responding over a 30-s period. The same pattern was observed with 10-s and 30-s cues, showing that close cue-reward contiguity is not necessary for facilitation of reward-directed action. The results confirm that reward-related cues can instigate voluntary action to obtain that reward. The effectiveness of long-duration cues suggests that in clinical settings, attention should be directed to both proximal and distal cues for reward.

  1. Peak provoked craving: an alternative to smoking cue-reactivity.

    Science.gov (United States)

    Sayette, Michael A; Tiffany, Stephen T

    2013-06-01

    Smoking cue-exposure research has provided a powerful tool for examining cravings in the laboratory. A key attraction of this method is that tightly controlled experimental procedures can model craving experiences that are presumed to relate to addiction. Despite its appeal, key assumptions underlying the clinical relevance of smoking cue-reactivity studies have been questioned recently. For both conceptual and methodological reasons it may be difficult to tease apart cue-based and abstinence-based cravings. Moreover, conventional cue-reactivity procedures typically generate levels of craving with only minimal clinical relevance. We argue here that it is sometimes unfeasible, and in some instances conceptually misguided, to disentangle abstinence-based and cued components of cigarette cravings. In light of the challenges associated with cue-reactivity research, we offer an alternative approach to smoking cue-exposure experimental research focusing on peak provoked craving (PPC) states. The PPC approach uses nicotine-deprived smokers and focuses on urges during smoking cue-exposure without subtracting out urge ratings during control cue or baseline assessments. This design relies on two factors found in many cue-exposure studies, nicotine deprivation and exposure to explicit smoking cues, which, when combined, can create powerful craving states. The PPC approach retains key aspects of the cue-exposure method, and in many circumstances may be a viable design for studies examining robust laboratory-induced cravings. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.

  2. Plant acoustics: in the search of a sound mechanism for sound signaling in plants.

    Science.gov (United States)

    Mishra, Ratnesh Chandra; Ghosh, Ritesh; Bae, Hanhong

    2016-08-01

    Being sessile, plants continuously deal with their dynamic and complex surroundings, identifying important cues and reacting with appropriate responses. Consequently, the sensitivity of plants has evolved to perceive a myriad of external stimuli, which ultimately ensures their successful survival. Research over past centuries has established that plants respond to environmental factors such as light, temperature, moisture, and mechanical perturbations (e.g. wind, rain, touch, etc.) by suitably modulating their growth and development. However, sound vibrations (SVs) as a stimulus have only started receiving attention relatively recently. SVs have been shown to increase the yields of several crops and strengthen plant immunity against pathogens. These vibrations can also prime the plants so as to make them more tolerant to impending drought. Plants can recognize the chewing sounds of insect larvae and the buzz of a pollinating bee, and respond accordingly. It is thus plausible that SVs may serve as a long-range stimulus that evokes ecologically relevant signaling mechanisms in plants. Studies have suggested that SVs increase the transcription of certain genes, soluble protein content, and support enhanced growth and development in plants. At the cellular level, SVs can change the secondary structure of plasma membrane proteins, affect microfilament rearrangements, produce Ca(2+) signatures, cause increases in protein kinases, protective enzymes, peroxidases, antioxidant enzymes, amylase, H(+)-ATPase / K(+) channel activities, and enhance levels of polyamines, soluble sugars and auxin. In this paper, we propose a signaling model to account for the molecular episodes that SVs induce within the cell, and in so doing we uncover a number of interesting questions that need to be addressed by future research in plant acoustics. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions

  3. Cue-reactivity in behavioral addictions: A meta-analysis and methodological considerations.

    Science.gov (United States)

    Starcke, Katrin; Antons, Stephanie; Trotzke, Patrick; Brand, Matthias

    2018-05-23

    Background and aims: Recent research has applied cue-reactivity paradigms to behavioral addictions. The aim of the current meta-analysis is to systematically analyze the effects of learning-based cue-reactivity in behavioral addictions. Methods: The current meta-analysis includes 18 studies (29 data sets, 510 participants) that have used a cue-reactivity paradigm in persons with gambling (eight studies), gaming (nine studies), or buying (one study) disorders. We compared subjective, peripheral physiological, electroencephalographic, and neural responses toward addiction-relevant cues in patients versus control participants and toward addiction-relevant cues versus control cues in patients. Results: Persons with behavioral addictions showed higher cue-reactivity toward addiction-relevant cues compared with control participants: subjective cue-reactivity (d = 0.84, p = .01) and peripheral physiological and electroencephalographic measures of cue-reactivity (d = 0.61). Persons with gambling, gaming, and buying disorders also showed higher cue-reactivity toward addiction-relevant cues compared with control cues: subjective cue-reactivity (d = 0.39, p = .11) and peripheral physiological and electroencephalographic measures of cue-reactivity (d = 0.47, p = .05). Increased neural activation was found in the caudate nucleus, inferior frontal gyrus, angular gyrus, inferior network, and precuneus. Discussion and conclusions: Cue-reactivity not only exists in substance-use disorders but also in gambling, gaming, and buying disorders. Future research should differentiate between cue-reactivity in addictive behaviors and cue-reactivity in functional excessive behaviors such as passions, hobbies, or professions.
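A meta-analysis of this kind pools per-study effect sizes into one summary estimate. A minimal fixed-effect inverse-variance sketch, using the d values quoted above but with invented sampling variances (the paper's actual weighting model and study-level variances are not given here):

```python
import math

# Hypothetical study-level effects: Cohen's d with its sampling
# variance. The d values echo those in the abstract; the variances
# are made up for illustration.
studies = [
    {"d": 0.84, "var": 0.09},
    {"d": 0.61, "var": 0.06},
    {"d": 0.39, "var": 0.05},
    {"d": 0.47, "var": 0.07},
]

# Fixed-effect pooling: each study is weighted by the inverse of its
# sampling variance, so more precise studies count more.
weights = [1 / s["var"] for s in studies]
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)

# Standard error and 95% confidence interval of the pooled estimate.
se = math.sqrt(1 / sum(weights))
ci = (pooled_d - 1.96 * se, pooled_d + 1.96 * se)
```

With these toy inputs the pooled d lands between the smallest and largest study effects (around 0.55), which is the expected behavior of an inverse-variance average.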

  4. The effect of age on involuntary capture of attention by irrelevant sounds: a test of the frontal hypothesis of aging.

    Science.gov (United States)

    Andrés, Pilar; Parmentier, Fabrice B R; Escera, Carles

    2006-01-01

    The aim of this study was to examine the effects of aging on the involuntary capture of attention by irrelevant sounds (distraction) and the use of these sounds as warning cues (alertness) in an oddball paradigm. We compared the performance of older and younger participants on a well-characterized auditory-visual distraction task. Based on the dissociations observed in aging between attentional processes sustained by the anterior and posterior attentional networks, our prediction was that distraction by irrelevant novel sounds would be stronger in older adults than in young adults while both groups would be equally able to use sound as an alert to prepare for upcoming stimuli. The results confirmed both predictions: there was a larger distraction effect in the older participants, but the alert effect was equivalent in both groups. These results give support to the frontal hypothesis of aging [Raz, N. (2000). Aging of the brain and its impact on cognitive performance: integration of structural and functional finding. In F.I.M. Craik & T.A. Salthouse (Eds.) Handbook of aging and cognition (pp. 1-90). Mahwah, NJ: Erlbaum; West, R. (1996). An application of prefrontal cortex function theory to cognitive aging. Psychological Bulletin, 120, 272-292].

  5. Role of Speaker Cues in Attention Inference

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2017-10-01

    Full Text Available Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old) storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.

  6. Brain response to prosodic boundary cues depends on boundary position

    Directory of Open Access Journals (Sweden)

    Julia eHolzgrefe

    2013-07-01

    Full Text Available Prosodic information is crucial for spoken language comprehension and especially for syntactic parsing, because prosodic cues guide the hearer’s syntactic analysis. The time course and mechanisms of this interplay of prosody and syntax are not yet well understood. In particular, there is an ongoing debate whether local prosodic cues are taken into account automatically or whether they are processed in relation to the global prosodic context in which they appear. The present study explores whether the perception of a prosodic boundary is affected by its position within an utterance. In an event-related potential (ERP) study we tested if the brain response evoked by the prosodic boundary differs when the boundary occurs early in a list of three names connected by conjunctions (i.e., after the first name) as compared to later in the utterance (i.e., after the second name). A closure positive shift (CPS), marking the processing of a prosodic phrase boundary, was elicited only for stimuli with a late boundary, but not for stimuli with an early boundary. This result is further evidence for an immediate integration of prosodic information into the parsing of an utterance. In addition, it shows that the processing of prosodic boundary cues depends on the previously processed information from the preceding prosodic context.

  7. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  8. The effects of foreknowledge and task-set shifting as mirrored in cue- and target-locked event-related potentials.

    Directory of Open Access Journals (Sweden)

    Mareike Finke

    Full Text Available The present study examined the use of foreknowledge in a task-cueing protocol while manipulating sensory updating and executive control in both informatively and non-informatively pre-cued trials. Foreknowledge, sensory updating (cue switch effects) and task-switching were orthogonally manipulated in order to address the question of whether, and to what extent, the sensory processing of cue changes can partly or totally explain the final task switch costs. Participants responded faster when they could prepare for the upcoming task and if no task-set updating was necessary. Sensory cue switches influenced cue-locked ERPs only when they contained conceptual information about the upcoming task: frontal P2 amplitudes were modulated by task-relevant cue changes, mid-parietal P3 amplitudes by the anticipatory updating of stimulus-response mappings, and P3 peak latencies were modulated by task switching. Task preparation was advantageous for efficient stimulus-response re-mapping at target-onset as mirrored in target N2 amplitudes. However, N2 peak latencies indicate that this process is faster for all repeat trials. The results provide evidence to support a very fast detection of task-relevance in sensory (cue) changes and argue against the view of task repetition benefits as secondary to purely perceptual repetition priming.

  9. Sound localization in noise in hearing-impaired listeners.

    Science.gov (United States)

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  10. Resonant modal group theory of membrane-type acoustical metamaterials for low-frequency sound attenuation

    Science.gov (United States)

    Ma, Fuyin; Wu, Jiu Hui; Huang, Meng

    2015-09-01

    In order to overcome the influence of structural resonance on continuous structures and obtain a lightweight thin-layer structure that can effectively isolate low-frequency noise, an elastic membrane structure was proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is substantially higher than that of EVA (ethylene-vinyl acetate copolymer), the current sound insulation material used in vehicles, so it is possible to replace EVA with the membrane-type metamaterial structure in practical engineering. Based on the band structure, modal shapes, and sound transmission simulation, the sound insulation mechanism of the designed membrane-type acoustic metamaterial was analyzed from a new perspective and validated experimentally. It is suggested that in the frequency range above 200 Hz for this membrane-mass structure, the sound insulation effect is due principally not to the low-level locally resonant mode of the mass block, but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is initially proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and the thin plate structure are combined by the membrane/plate resonant theory.
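For context, the conventional baseline such a membrane panel is compared against is the normal-incidence mass law, STL ≈ 20·log10(f·m) − 47 dB (f in Hz, surface density m in kg/m²). The sketch below, with an illustrative surface density not taken from the paper, shows why a thin limp layer performs poorly at low frequencies and why resonance-based designs are attractive there:

```python
import math

def mass_law_stl(f_hz, surface_density_kg_m2):
    """Normal-incidence mass-law transmission loss in dB.

    Classical approximation STL ~= 20*log10(f*m) - 47; the inputs
    below are illustrative, not values from the paper.
    """
    return 20 * math.log10(f_hz * surface_density_kg_m2) - 47

# A thin 2 kg/m^2 layer: at 100 Hz the mass law predicts essentially
# no attenuation, and only modest attenuation at 500 Hz.
stl_100 = mass_law_stl(100, 2.0)   # roughly -1 dB (acoustically transparent)
stl_500 = mass_law_stl(500, 2.0)   # 13 dB
```

A membrane-type metamaterial aims to beat this mass-law curve in the sub-500 Hz band without adding weight, which is the comparison the abstract draws against EVA.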

  11. Part-set cueing impairment & facilitation in semantic memory.

    Science.gov (United States)

    Kelley, Matthew R; Parihar, Sushmeena A

    2018-01-19

    The present study explored the influence of part-set cues in semantic memory using tests of "free" recall, reconstruction of order, and serial recall. Nine distinct categories of information were used (e.g., Zodiac signs, Harry Potter books, Star Wars films, planets). The results showed part-set cueing impairment for all three "free" recall sets, whereas part-set cueing facilitation was evident for five of the six ordered sets. Generally, the present results parallel those often observed across episodic tasks, which could indicate that similar mechanisms contribute to part-set cueing effects in both episodic and semantic memory. A novel anchoring explanation of part-set cueing facilitation in order and spatial tasks is provided.

  12. Use of explicit memory cues following parietal lobe lesions.

    Science.gov (United States)

    Dobbins, Ian G; Jaeger, Antonio; Studer, Bettina; Simons, Jon S

    2012-11-01

    The putative role of the lateral parietal lobe in episodic memory has recently become a topic of considerable debate, owing primarily to its consistent activation for studied materials during functional magnetic resonance imaging studies of recognition. Here we examined the performance of patients with parietal lobe lesions using an explicit memory cueing task in which probabilistic cues ("Likely Old" or "Likely New"; 75% validity) preceded the majority of verbal recognition memory probes. Without cues, patients and control participants did not differ in accuracy. However, group differences emerged during the "Likely New" cue condition with controls responding more accurately than parietal patients when these cues were valid (preceding new materials) and trending towards less accuracy when these cues were invalid (preceding old materials). Both effects suggest insufficient integration of external cues into memory judgments on the part of the parietal patients whose cued performance largely resembled performance in the complete absence of cues. Comparison of the parietal patients to a patient group with frontal lobe lesions suggested the pattern was specific to parietal and adjacent area lesions. Overall, the data indicate that parietal lobe patients fail to appropriately incorporate external cues of novelty into recognition attributions. This finding supports a role for the lateral parietal lobe in the adaptive biasing of memory judgments through the integration of external cues and internal memory evidence. We outline the importance of such adaptive biasing through consideration of basic signal detection predictions regarding maximum possible accuracy with and without informative environmental cues. Copyright © 2012 Elsevier Ltd. All rights reserved.
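The signal detection argument at the end of the abstract can be made concrete: for an equal-variance Gaussian observer, a 75%-valid cue shifts the prior, which shifts the optimal decision criterion and raises the maximum achievable accuracy. A minimal sketch (d′ = 1 is an arbitrary illustrative sensitivity, not a value from the study):

```python
from math import log
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def max_accuracy(d_prime, p_old):
    """Best achievable accuracy for an equal-variance Gaussian observer
    who fully integrates the prior p_old (e.g., from a 75%-valid cue).

    The likelihood-ratio test gives the optimal criterion
    c = d'/2 + ln((1 - p_old)/p_old) / d'.
    """
    c = d_prime / 2 + log((1 - p_old) / p_old) / d_prime
    hit = 1 - Phi(c - d_prime)       # P(respond "old" | old), mean d'
    correct_rejection = Phi(c)       # P(respond "new" | new), mean 0
    return p_old * hit + (1 - p_old) * correct_rejection

acc_no_cue = max_accuracy(1.0, 0.5)    # uninformative prior
acc_cued = max_accuracy(1.0, 0.75)     # 75%-valid "Likely Old" cue
```

On this toy model, a healthy observer who incorporates the cue gains several percentage points of accuracy over one who ignores it, which is exactly the benefit the parietal patients appeared to forgo.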

  13. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally, it is only possible to determine the distance or the sound velocity if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity simultaneously in media with moving scattering particles. Since the focal position also depends on the sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which is correlated with the maximum of the averaged echo signal amplitude. To move the focal position along the acoustic axis, an annular array is used. This allows the sound velocity to be measured with local resolution, without any prior knowledge of the acoustic medium and without a reference reflector. Previous publications demonstrated the feasibility of this method for media with constant velocities. In this work the accuracy of these measurements is improved, and first measurements and simulations for non-homogeneous media are introduced. For this purpose, an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient in sound velocity.
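The idea of using the focus as a second observable can be sketched as follows: the focal depth z_f of the annular array depends on the medium's velocity c, so the measured round-trip time to the focus, t_f = 2·z_f(c)/c, determines c without a reference reflector. The linear calibration curve z_f(c) = A + B·c below is invented for illustration, not taken from the paper:

```python
# Hypothetical calibration of the annular array, e.g. obtained by
# simulating its delay law over a range of velocities (constants are
# invented for illustration): focal depth z_f(c) = A + B*c, in meters.
A, B = 0.012, 1.2e-5

def velocity_from_focus_tof(t_f):
    """Invert t_f = 2*z_f(c)/c = 2*A/c + 2*B for the sound velocity c."""
    return 2 * A / (t_f - 2 * B)

# Forward check with a water-like medium, c = 1480 m/s: compute the
# focal depth and round-trip time, then recover c from t_f alone.
c_true = 1480.0
z_f = A + B * c_true            # focal depth, ~29.8 mm here
t_f = 2 * z_f / c_true          # round-trip time of flight to the focus
c_est = velocity_from_focus_tof(t_f)
```

In the actual method, t_f is not assumed but estimated from data, as the time at which the averaged echo amplitude peaks; the inversion step is the same.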

  14. Dominance dynamics of competition between intrinsic and extrinsic grouping cues.

    Science.gov (United States)

    Luna, Dolores; Villalba-García, Cristina; Montoro, Pedro R; Hinojosa, José A

    2016-10-01

    In the present study we examined the dominance dynamics of perceptual grouping cues. We used a paradigm in which participants selectively attended to perceptual groups based on several grouping cues in different blocks of trials. In each block, single and competing grouping cues were presented under different exposure durations (50, 150 or 350 ms). Using this procedure, intrinsic vs. intrinsic cues (i.e. proximity and shape similarity) were compared in Experiment 1; extrinsic vs. extrinsic cues (i.e. common region and connectedness) in Experiment 2; and intrinsic vs. extrinsic cues (i.e. common region and shape similarity) in Experiment 3. The results showed that in Experiment 1, no dominance of any grouping cue was found: shape similarity and proximity grouping cues showed similar reaction times (RTs) and interference effects. In contrast, in Experiments 2 and 3, common region dominated processing: (i) RTs to common region were shorter than those to connectedness (Exp. 2) or shape similarity (Exp. 3); and (ii) when the grouping cues competed, common region interfered with connectedness (Exp. 2) and shape similarity (Exp. 3) more than vice versa. The results also showed that the exposure duration of stimuli only affected the connectedness grouping cue. An important result of our experiments indicates that when two grouping cues compete, both the non-attended intrinsic cue in Experiment 1, and the non-dominant extrinsic cue in Experiments 2 and 3, are still perceived and they are not completely lost. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Son et lumière: Sound and light effects on spatial distribution and swimming behavior in captive zebrafish.

    Science.gov (United States)

    Shafiei Sabet, Saeed; Van Dooren, Dirk; Slabbekoorn, Hans

    2016-05-01

    Aquatic and terrestrial habitats are heterogeneous by nature with respect to sound and light conditions. Fish may extract signals and exploit cues from both ambient modalities and they may also select their sound and light level of preference in free-ranging conditions. In recent decades, human activities in or near water have altered natural soundscapes and caused nocturnal light pollution to become more widespread. Artificial sound and light may cause anxiety, deterrence, disturbance or masking, but few studies have addressed in any detail how fishes respond to spatial variation in these two modalities. Here we investigated whether sound and light affected spatial distribution and swimming behavior of individual zebrafish that had a choice between two fish tanks: a treatment tank and a quiet and light escape tank. The treatments concerned a 2 × 2 design with noisy or quiet conditions and dim or bright light. Sound and light treatments did not induce spatial preferences for the treatment or escape tank, but caused various behavioral changes in both spatial distribution and swimming behavior within the treatment tank. Sound exposure led to more freezing and less time spent near the active speaker. Dim light conditions led to a lower number of crossings, more time spent in the upper layer and less time spent close to the tube for crossing. No interactions were found between sound and light conditions. This study highlights the potential relevance for studying multiple modalities when investigating fish behavior and further studies are needed to investigate whether similar patterns can be found for fish behavior in free-ranging conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. The influence of social and symbolic cues on observers' gaze behaviour.

    Science.gov (United States)

    Hermens, Frouke; Walker, Robin

    2016-08-01

    Research has shown that social and symbolic cues presented in isolation and at fixation have strong effects on observers, but it is unclear how cues compare when they are presented away from fixation and embedded in natural scenes. We here compare the effects of two types of social cue (gaze and pointing gestures) and one type of symbolic cue (arrow signs) on eye movements of observers under two viewing conditions (free viewing vs. a memory task). The results suggest that social cues are looked at more quickly, for longer and more frequently than the symbolic arrow cues. An analysis of saccades initiated from the cue suggests that the pointing cue leads to stronger cueing than the gaze and the arrow cue. While the task had only a weak influence on gaze orienting to the cues, stronger cue following was found for free viewing compared to the memory task. © 2015 The British Psychological Society.

  17. Cues for haptic perception of compliance

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2009-01-01

    For the perception of the hardness of compliant materials, several cues are available. In this paper, the relative roles of force/displacement and surface deformation cues are investigated. We have measured discrimination thresholds with silicone rubber stimuli of differing thickness and compliance.

  18. Meninges-derived cues control axon guidance.

    Science.gov (United States)

    Suter, Tracey A C S; DeLoughery, Zachary J; Jaworski, Alexander

    2017-10-01

    The axons of developing neurons travel long distances along stereotyped pathways under the direction of extracellular cues sensed by the axonal growth cone. Guidance cues are either secreted proteins that diffuse freely or bind the extracellular matrix, or membrane-anchored proteins. Different populations of axons express distinct sets of receptors for guidance cues, which results in differential responses to specific ligands. The full repertoire of axon guidance cues and receptors and the identity of the tissues producing these cues remain to be elucidated. The meninges are connective tissue layers enveloping the vertebrate brain and spinal cord that serve to protect the central nervous system (CNS). The meninges also instruct nervous system development by regulating the generation and migration of neural progenitors, but it has not been determined whether they help guide axons to their targets. Here, we investigate a possible role for the meninges in neuronal wiring. Using mouse neural tissue explants, we show that developing spinal cord meninges produce secreted attractive and repulsive cues that can guide multiple types of axons in vitro. We find that motor and sensory neurons, which project axons across the CNS-peripheral nervous system (PNS) boundary, are attracted by meninges. Conversely, axons of both ipsi- and contralaterally projecting dorsal spinal cord interneurons are repelled by meninges. The responses of these axonal populations to the meninges are consistent with their trajectories relative to meninges in vivo, suggesting that meningeal guidance factors contribute to nervous system wiring and control which axons are able to traverse the CNS-PNS boundary. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Diversity of fish sound types in the Pearl River Estuary, China

    Directory of Open Access Journals (Sweden)

    Zhi-Tao Wang

    2017-10-01

    Full Text Available Background: Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods: Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results: We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse-train structure. The pulses were characterized by an approximately 8 ms duration, a peak frequency from 500 to 2,600 Hz, and a majority of the energy below 4,000 Hz. The median inter-pulse-peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Discussion: Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator
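The pulse-level measurements described above (IPPI and peak frequency) are straightforward to compute once pulse peaks are detected. A sketch with synthetic pulse times and a synthetic 8 ms tone burst; all values are illustrative, not from the recordings:

```python
import numpy as np

# Hypothetical pulse-peak times (seconds) detected in one fish call;
# real peaks would come from envelope peak-picking of the recording.
peak_times = np.array([0.000, 0.010, 0.019, 0.029, 0.038, 0.048])

# Inter-pulse-peak intervals (IPPI) in ms; the median is the summary
# statistic used to characterise call types.
ippi_ms = np.diff(peak_times) * 1000
median_ippi = float(np.median(ippi_ms))

# Peak frequency of a single 8 ms pulse, here a synthetic 1.5 kHz
# tone burst sampled at 48 kHz with a Hann window.
fs = 48000
t = np.arange(int(0.008 * fs)) / fs
pulse = np.sin(2 * np.pi * 1500 * t) * np.hanning(t.size)
spectrum = np.abs(np.fft.rfft(pulse))
peak_freq = float(np.fft.rfftfreq(pulse.size, 1 / fs)[spectrum.argmax()])
```

On these synthetic inputs the median IPPI is 10 ms and the peak frequency 1,500 Hz, both inside the ranges reported for the estuary's fish calls.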

  20. Acupuncture inhibits cue-induced heroin craving and brain activation.

    Science.gov (United States)

    Cai, Xinghui; Song, Xiaoge; Li, Chuanfu; Xu, Chunsheng; Li, Xiliang; Lu, Qi

    2012-11-25

    Previous research using functional MRI has shown that specific brain regions associated with drug dependence and cue-elicited heroin craving are activated by environmental cues. Craving is an important trigger of heroin relapse, and acupuncture may inhibit craving. In this study, we performed functional MRI in heroin addicts and control subjects. We compared differences in brain activation between the two groups during heroin cue exposure, heroin cue exposure plus acupuncture at the Zusanli point (ST36) without twirling of the needle, and heroin cue exposure plus acupuncture at the Zusanli point with twirling of the needle. Heroin cue exposure elicited significant activation in craving-related brain regions mainly in the frontal lobes and callosal gyri. Acupuncture without twirling did not significantly affect the range of brain activation induced by heroin cue exposure, but significantly changed the extent of the activation in the heroin addicts group. Acupuncture at the Zusanli point with twirling of the needle significantly decreased both the range and extent of activation induced by heroin cue exposure compared with heroin cue exposure plus acupuncture without twirling of the needle. These experimental findings indicate that presentation of heroin cues can induce activation in craving-related brain regions, which are involved in reward, learning and memory, cognition and emotion. Acupuncture at the Zusanli point can rapidly suppress the activation of specific brain regions related to craving, supporting its potential as an intervention for drug craving.

  1. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm2 and sound frequencies in the lower MHz range, this force produces a tissue displacement in the micrometer range, which depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it (indirectly) measures this tissue displacement, encodes it as grey values, and presents it as a 2D image. From the grey values, the course of the sound beam in the tissue can be visualized, and sound obstacles (changes of the sound impedance) can additionally be detected. From the recorded MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. This thesis presents measurements that demonstrate the feasibility and future potential of this method, especially for breast cancer diagnostics. [de]
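The micrometer-scale displacement described above follows from the acoustic radiation force on an absorbing medium. As a hedged aside (this is the standard plane-wave textbook relation, not a formula quoted from the thesis), the body force density is

```latex
f = \frac{2\,\alpha\, I}{c}
```

where $\alpha$ is the amplitude absorption coefficient (Np/m), $I$ the acoustic intensity, and $c$ the speed of sound. For illustrative tissue-like values $\alpha = 5$ Np/m, $I = 3$ W/cm$^2$ $= 3\times 10^{4}$ W/m$^2$, and $c = 1540$ m/s, this gives $f \approx 195$ N/m$^3$, a small static push consistent with micrometer displacements in soft tissue. Since absorption rises with frequency, the force also grows with the sound frequency, in line with the dependence described above.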

  2. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features.

  3. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2006

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2006-08-01

    In the beginning of summer 2006, the Geological Survey of Finland carried out electromagnetic frequency soundings with the Gefinex 400S equipment (also called Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and re-measured in 2005. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles, 200 m apart, with 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of soundings was 48, but at 8 stations the measurement did not succeed because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. The most suitable sites for monitoring purposes, however, are those without strong 3-D effects. Comparison of the 2004-2006 results shows small differences at some sounding sites. (orig.)

  5. Gait parameter control timing with dynamic manual contact or visual cues.

    Science.gov (United States)

    Rabin, Ely; Shi, Peter; Werner, William

    2016-06-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms.

  6. Effectiveness of self-generated cues in early Alzheimer's disease.

    Science.gov (United States)

    Lipinska, B; Bäckman, L; Mäntylä, T; Viitanen, M

    1994-12-01

    The ability to utilize cognitive support in the form of self-generated cues in mild Alzheimer's disease (AD), and the factors promoting efficient cue utilization in this group of patients, were examined in two experiments on memory for words. Results from both experiments showed that normal old adults as well as AD patients performed better with self-generated cues than with experimenter-provided cues, although the latter type of cues resulted in gains relative to free recall. The findings indicate no qualitative differences in patterns of performance between the normal old and the AD patients. For both groups of subjects, cue effectiveness was optimized when (a) there was self-generation activity at encoding, and (b) encoding and retrieval conditions were compatible.

  7. Stimulus-driven attentional capture by subliminal onset cues

    NARCIS (Netherlands)

    Schoeberl, T.; Fuchs, I.; Theeuwes, J.; Ansorge, U.

    2015-01-01

    In two experiments, we tested whether subliminal abrupt onset cues capture attention in a stimulus-driven way. An onset cue was presented 16 ms prior to the stimulus display that consisted of clearly visible color targets. The onset cue was presented either at the same side as the target (the valid

  8. How Iconicity Helps People Learn New Words: Neural Correlates and Individual Differences in Sound-Symbolic Bootstrapping

    Directory of Open Access Journals (Sweden)

    Gwilym Lockwood

    2016-07-01

    Full Text Available Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word

  9. Sexual behavior and sex-associated environmental cues activate the mesolimbic system in male rats.

    Science.gov (United States)

    Balfour, Margaret E; Yu, Lei; Coolen, Lique M

    2004-04-01

    The mesolimbic system plays an important role in the regulation of both pathological behaviors such as drug addiction and normal motivated behaviors such as sexual behavior. The present study investigated the mechanism by which this system is endogenously activated during sexual behavior. Specifically, the effects of sexual experience and sex-related environmental cues on the activation of several components of the mesolimbic system were studied. The mesolimbic system consists of a dopaminergic projection from the ventral tegmental area (VTA) to the nucleus accumbens (NAc). Previous studies suggest that these neurons are under tonic inhibition by local GABA interneurons, which are in turn modulated by mu opioid receptor (MOR) ligands. To test the hypothesis that opioids are acting in the VTA during sexual behavior, visualization of MOR internalization in VTA was used as a marker for ligand-induced activation of the receptor. Significant increases in MOR internalization were observed following copulation or exposure to sex-related environmental cues. The next goal was to determine if sexual behavior activates dopamine neurons in the VTA, using tyrosine hydroxylase as a marker for dopaminergic neurons and Fos-immunoreactivity as a marker for neuronal activation. Significant increases in the percentage of activated dopaminergic neurons were observed following copulation or exposure to sex-related environmental cues. In addition, mating and sex-related cues activated a large population of nondopaminergic neurons in VTA as well as neurons in both the NAc Core and Shell. Taken together, our results provide functional neuroanatomical evidence that the mesolimbic system is activated by both sexual behavior and exposure to sex-related environmental cues.

  10. The development of prospective memory in young schoolchildren: the impact of ongoing task absorption, cue salience, and cue centrality.

    Science.gov (United States)

    Kliegel, Matthias; Mahy, Caitlin E V; Voigt, Babett; Henry, Julie D; Rendell, Peter G; Aberle, Ingo

    2013-12-01

    This study presents evidence that 9- and 10-year-old children outperform 6- and 7-year-old children on a measure of event-based prospective memory and that retrieval-based factors systematically influence performance and age differences. All experiments revealed significant age effects in prospective memory even after controlling for ongoing task performance. In addition, the provision of a less absorbing ongoing task (Experiment 1), higher cue salience (Experiment 2), and cues appearing in the center of attention (Experiment 3) were each associated with better performance. Of particular developmental importance was an age by cue centrality (in or outside of the center of attention) interaction that emerged in Experiment 3. Thus, age effects were restricted to prospective memory cues appearing outside of the center of attention, suggesting that the development of prospective memory across early school years may be modulated by whether a cue requires overt monitoring beyond the immediate attentional context. Because whether a cue is in or outside of the center of attention might determine the amount of executive control needed in a prospective memory task, findings suggest that developing executive control resources may drive prospective memory development across primary school age. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For a more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and handy ...

  12. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  13. The role of typography in differentiating look-alike/sound-alike drug names.

    Science.gov (United States)

    Gabriele, Sandra

    2006-01-01

    Until recently, when errors occurred in the course of caring for patients, blame was assigned to the healthcare professionals closest to the incident rather than examining the larger system and the actions that led up to the event. Now, the medical profession is embracing expertise and methodologies used in other fields to improve its own systems in relation to patient safety issues. This exploratory study, part of a Master of Design thesis project, was a response to the problem of errors that occur due to confusion between look-alike/sound-alike drug names (medication names that have orthographic and/or phonetic similarities). The study attempts to provide a visual means to help differentiate problematic names using formal typographic and graphic cues. The FDA's Name Differentiation Project recommendations and other typographic alternatives were considered to address issues of attention and cognition. Eleven acute care nurses participated in testing that consisted of word-recognition tasks and questions intended to elicit opinions regarding the visual treatment of look-alike/sound-alike names in the context of a label prototype. Though limited in sample size, testing provided insight into the kinds of typographic differentiation that might be effective in a high-risk situation.

  14. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    Science.gov (United States)

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Seismic and Biological Sources of Ambient Ocean Sound

    Science.gov (United States)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that more seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed.
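The azimuthal estimates mentioned above come from array beamforming. As a hedged illustration (not code from the dissertation; `estimate_bearing` and the array geometry are invented for the example), a minimal delay-and-sum beamformer for a uniform linear hydrophone array looks like this:

```python
import math

def delay_and_sum_power(signals, fs, spacing_m, angle_deg, c=1500.0):
    """Steered-response power of a uniform linear array.

    signals: one sample list per hydrophone, element 0 at the origin.
    Delays are applied as nearest-sample integer shifts (a sketch;
    real implementations use fractional-delay or frequency-domain steering).
    """
    n = len(signals[0])
    out = [0.0] * n
    for m, sig in enumerate(signals):
        # plane-wave delay of element m relative to element 0
        shift = int(round(m * spacing_m
                          * math.sin(math.radians(angle_deg)) / c * fs))
        for t in range(n):
            if 0 <= t + shift < n:
                out[t] += sig[t + shift]
    return sum(v * v for v in out) / n

def estimate_bearing(signals, fs, spacing_m, c=1500.0):
    """Scan candidate bearings; return the angle maximizing output power."""
    return max(range(-90, 91, 2),
               key=lambda a: delay_and_sum_power(signals, fs, spacing_m, a, c))

# Synthetic case: 200 Hz tone arriving from 30 degrees at a 4-element array,
# 3 m spacing, c = 1500 m/s, fs = 8 kHz (per-element delay = 8 samples).
fs, f0, n = 8000, 200.0, 800
signals = [[math.sin(2 * math.pi * f0 * (t - 8 * m) / fs) for t in range(n)]
           for m in range(4)]
print(estimate_bearing(signals, fs, 3.0))  # 30
```

The half-wavelength spacing rule matters here: at 200 Hz the 3 m spacing is 0.4 wavelengths, so the scan has a single unambiguous maximum; wider spacing would introduce grating lobes and ambiguous bearings.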

  16. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  17. Contextual Cueing Effects across the Lifespan

    Science.gov (United States)

    Merrill, Edward C.; Conners, Frances A.; Roskos, Beverly; Klinger, Mark R.; Klinger, Laura Grofer

    2013-01-01

    The authors evaluated age-related variations in contextual cueing, which reflects the extent to which visuospatial regularities can facilitate search for a target. Previous research produced inconsistent results regarding contextual cueing effects in young children and in older adults, and no study has investigated the phenomenon across the life…

  18. Reminder cues modulate the renewal effect in human predictive learning

    Directory of Open Access Journals (Sweden)

    Javier Bustamante

    2016-12-01

    Full Text Available Associative learning refers to our ability to learn about regularities in our environment. When a stimulus is repeatedly followed by a specific outcome, we learn to expect the outcome in the presence of the stimulus. We are also able to modify established expectations in the face of disconfirming information (the stimulus is no longer followed by the outcome. Both the change of environmental regularities and the related processes of adaptation are referred to as extinction. However, extinction does not erase the initially acquired expectations. For instance, following successful extinction, the initially learned expectations can recover when there is a context change – a phenomenon called the renewal effect, which is considered as a model for relapse after exposure therapy. Renewal was found to be modulated by reminder cues of acquisition and extinction. However, the mechanisms underlying the effectiveness of reminder cues are not well understood. The aim of the present study was to investigate the impact of reminder cues on renewal in the field of human predictive learning. Experiment I demonstrated that renewal in human predictive learning is modulated by cues related to acquisition or extinction. Initially, participants received pairings of a stimulus and an outcome in one context. These stimulus-outcome pairings were preceded by presentations of a reminder cue (acquisition cue. Then, participants received extinction in a different context in which presentations of the stimulus were no longer followed by the outcome. These extinction trials were preceded by a second reminder cue (extinction cue. During a final phase conducted in a third context, participants showed stronger expectations of the outcome in the presence of the stimulus when testing was accompanied by the acquisition cue compared to the extinction cue. Experiment II tested an explanation of the reminder cue effect in terms of simple cue-outcome associations. Therefore

  19. Blood cues induce antipredator behavior in Nile tilapia conspecifics.

    Directory of Open Access Journals (Sweden)

    Rodrigo Egydio Barreto

    Full Text Available In this study, we show that the fish Nile tilapia displays an antipredator response to chemical cues present in the blood of conspecifics. This is the first report of alarm response induced by blood-borne chemical cues in fish. There is a body of evidence showing that chemical cues from epidermal 'club' cells elicit an alarm reaction in fish. However, the chemical cues of these 'club' cells are restricted to certain species of fish. Thus, as a parsimonious explanation, we assume that an alarm response to blood cues is a generalized response among animals because it occurs in mammals, birds and protostomian animals. Moreover, our results suggest that researchers must use caution when studying chemically induced alarm reactions because it is difficult to separate club cell cues from traces of blood.

  20. Theta and beta oscillatory dynamics in the dentate gyrus reveal a shift in network processing state during cue encounters

    Directory of Open Access Journals (Sweden)

    Lara Maria Rangel

    2015-07-01

    Full Text Available The hippocampus is an important structure for learning and memory processes, and has strong rhythmic activity. Although a large amount of research has been dedicated to understanding the rhythmic activity of the hippocampus during exploratory behaviors, specifically in the theta (5-10 Hz) frequency range, few studies have examined the temporal interplay of theta and other frequencies during the presentation of meaningful cues. We obtained in vivo electrophysiological recordings of local field potentials (LFPs) in the dentate gyrus (DG) of the hippocampus as rats performed three different associative learning tasks. In each task, cue presentations elicited pronounced decrements in theta amplitude in conjunction with increases in beta (15-30 Hz) amplitude. These changes were often transient but were sustained from the onset of cue encounters until the occurrence of a reward outcome. This oscillatory profile shifted in time to precede cue encounters over the course of the session, and was not present during similar behavior in the absence of task-relevant stimuli. The observed decreases in theta amplitude and increases in beta amplitude in the dentate gyrus may thus reflect a shift in processing state that occurs when encountering meaningful cues.
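The reported shift (theta amplitude down, beta amplitude up at cue onset) is the kind of effect a simple band-power comparison can expose. A minimal sketch on synthetic data (illustrative only; `band_power` is a hypothetical helper using a naive DFT, not the authors' analysis pipeline):

```python
import cmath
import math

def band_power(samples, fs, f_lo, f_hi):
    """Mean squared DFT magnitude over bins falling in [f_lo, f_hi] Hz."""
    n = len(samples)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            total += abs(s) ** 2
            count += 1
    return total / count if count else 0.0

# Synthetic LFP: strong 8 Hz "theta" before the cue, strong 20 Hz "beta"
# (with attenuated theta) after the cue; 1 s segments at fs = 200 Hz.
fs = 200
pre  = [math.sin(2 * math.pi * 8 * t / fs) for t in range(fs)]
post = [0.3 * math.sin(2 * math.pi * 8 * t / fs)
        + math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]
theta_pre,  beta_pre  = band_power(pre,  fs, 5, 10), band_power(pre,  fs, 15, 30)
theta_post, beta_post = band_power(post, fs, 5, 10), band_power(post, fs, 15, 30)
print(theta_post < theta_pre, beta_post > beta_pre)  # True True
```

Real LFP analyses typically use windowed FFTs, wavelets, or Hilbert-transform envelopes to track amplitude over time, but the band-wise comparison of pre- versus post-cue power is the same idea.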

  1. Transfer of memory retrieval cues in rats.

    Science.gov (United States)

    Briggs, James F; Fitz, Kelly I; Riccio, David C

    2007-06-01

    Two experiments using rats were conducted to determine whether the retrieval of a memory could be brought under the control of new contextual cues that had not been present at the time of training. In Experiment 1, rats were trained in one context and then exposed to different contextual cues immediately, 60 min, or 120 min after training. When tested in the shifted context, rats that had been exposed shortly after training treated the shifted context as if it were the original context. The control that the previously neutral context had over retrieval disappeared with longer posttraining delays, suggesting the importance of an active memory representation during exposure. Experiment 2 replicated the basic finding and demonstrated that the transfer of retrieval cues was specific to the contextual cues present during exposure. These findings with rats are consistent with findings from infant research (see, e.g., Boller & Rovee-Collier, 1992) that have shown that a neutral context can come to serve as a retrieval cue for an episode experienced elsewhere.

  2. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  4. Digital sound de-localisation as a game mechanic for novel bodily play

    DEFF Research Database (Denmark)

    Tiab, John; Rantakari, Juho; Halse, Mads Laurberg

    2016-01-01

    This paper describes an exertion gameplay mechanic involving players' partial control of their opponent's sound localization abilities. We developed this concept through designing and testing "The Boy and The Wolf" game. In this game, we combined deprivation of sight with a positional disparity between player bodily movement and sound. This facilitated intense gameplay supporting player creativity and spectator engagement. We use our observations and analysis of our game to offer a set of lessons learnt for designing engaging bodily play using disparity between sound and movement. Moreover, we describe our intended future explorations of this area.

  5. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2007

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2007-09-01

    In the beginning of June 2007, the Geological Survey of Finland carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) at ONKALO, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004 and have been remeasured yearly since. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electric conductivity of the earth at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles, which have a 200 m mutual distance and 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of sounding stations is 48. In 2007, the transmitter and/or receiver sites were changed at 8 sounding stations, and line L11.400 was replaced by line L11.500. Some of these changes helped, but 6 stations still could not be measured because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. The sites without strong 3-D effects are, however, the most suitable for monitoring purposes. Comparison of the 2004-2007 results shows small differences at some sounding sites. (orig.)

  6. Facilitated orienting underlies fearful face-enhanced gaze cueing of spatial location

    Directory of Open Access Journals (Sweden)

    Joshua M. Carlson

    2016-12-01

    Full Text Available Faces provide a platform for non-verbal communication through emotional expression and eye gaze. Fearful facial expressions are salient indicators of potential threat within the environment, which automatically capture observers' attention. However, the degree to which fearful facial expressions facilitate attention to others' gaze is unresolved. Given that fearful gaze indicates the location of potential threat, it was hypothesized that fearful gaze facilitates location processing. To test this hypothesis, a gaze cueing study with fearful and neutral faces assessing target localization was conducted. The task consisted of leftward, rightward, and forward/straight gaze trials. The inclusion of forward gaze trials allowed for the isolation of the orienting and disengagement components of gaze-directed attention. The results suggest that both neutral and fearful gaze modulate attention through orienting and disengagement components. Fearful gaze, however, resulted in quicker orienting than neutral gaze. Thus, fearful faces enhance gaze cueing of spatial location through facilitated orienting.

  7. Individualization of music-based rhythmic auditory cueing in Parkinson's disease.

    Science.gov (United States)

    Bella, Simone Dalla; Dotov, Dobromir; Bardy, Benoît; de Cock, Valérie Cochen

    2018-06-04

    Gait dysfunctions in Parkinson's disease can be partly relieved by rhythmic auditory cueing. This consists in asking patients to walk with a rhythmic auditory stimulus such as a metronome or music. The effect on gait is visible immediately in terms of increased speed and stride length. Moreover, training programs based on rhythmic cueing can have long-term benefits. The effect of rhythmic cueing, however, varies from one patient to the other. Patients' response to the stimulation may depend on rhythmic abilities, often deteriorating with the disease. Relatively spared abilities to track the beat favor a positive response to rhythmic cueing. On the other hand, most patients with poor rhythmic abilities either do not respond to the cues or experience gait worsening when walking with cues. An individualized approach to rhythmic auditory cueing with music is proposed to cope with this variability in patients' response. This approach calls for using assistive mobile technologies capable of delivering cues that adapt in real time to patients' gait kinematics, thus affording step synchronization to the beat. Individualized rhythmic cueing can provide a safe and cost-effective alternative to standard cueing that patients may want to use in their everyday lives. © 2018 New York Academy of Sciences.

  8. Augmented Reality Cues and Elderly Driver Hazard Perception

    Science.gov (United States)

    Schall, Mark C.; Rusch, Michelle L.; Lee, John D.; Dawson, Jeffrey D.; Thomas, Geb; Aksan, Nazan; Rizzo, Matthew

    2013-01-01

    Objective: Evaluate the effectiveness of augmented reality (AR) cues in improving driving safety in elderly drivers who are at increased crash risk due to cognitive impairments. Background: Cognitively challenging driving environments pose a particular crash risk for elderly drivers. AR cueing is a promising technology to mitigate risk by directing driver attention to roadway hazards. This study investigates whether AR cues improve or interfere with hazard perception in elderly drivers with age-related cognitive decline. Methods: Twenty elderly (Mean = 73 years, SD = 5 years), licensed drivers with a range of cognitive abilities measured by a speed of processing (SOP) composite participated in a one-hour drive in an interactive, fixed-base driving simulator. Each participant drove through six straight, six-mile-long rural roadway scenarios following a lead vehicle. AR cues directed attention to potential roadside hazards in three of the scenarios, and the other three were uncued (baseline) drives. Effects of AR cueing were evaluated with respect to: 1) detection of hazardous target objects, 2) interference with detecting nonhazardous secondary objects, and 3) impairment in maintaining safe distance behind a lead vehicle. Results: AR cueing improved the detection of hazardous target objects of low visibility. AR cues did not interfere with detection of nonhazardous secondary objects and did not impair the ability to maintain a safe distance behind a lead vehicle. SOP capacity did not moderate those effects. Conclusion: AR cues show promise for improving elderly driver safety by increasing hazard detection likelihood without interfering with other driving tasks such as maintaining safe headway. PMID:23829037

  9. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

    It is unclear how well harbor porpoises can locate sound sources, and thus can locate acoustic alarms on gillnets. Therefore the ability of a porpoise to determine the location of a sound source was determined. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  10. Intelligence as the efficiency of cue-driven retrieval from secondary memory.

    Science.gov (United States)

    Liesefeld, Heinrich René; Hoffmann, Eugenia; Wentura, Dirk

    2016-01-01

    Complex-span (working-memory-capacity) tasks are among the most successful predictors of intelligence. One important contributor to this relationship is the ability to efficiently employ cues for the retrieval from secondary memory. Presumably, intelligent individuals can considerably restrict their memory search sets by using such cues and can thereby improve recall performance. We here test this assumption by experimentally manipulating the validity of retrieval cues. When memoranda are drawn from the same semantic category on two successive trials of a verbal complex-span task, the category is a very strong retrieval cue on its first occurrence (strong-cue trial) but loses some of its validity on its second occurrence (weak-cue trial). If intelligent individuals make better use of semantic categories as retrieval cues, their recall accuracy suffers more from this loss of cue validity. Accordingly, our results show that less variance in intelligence is explained by recall accuracy on weak-cue compared with strong-cue trials.

  11. Using Self-Generated Cues to Facilitate Recall: A Narrative Review

    Science.gov (United States)

    Wheeler, Rebecca L.; Gabbert, Fiona

    2017-01-01

    We draw upon the Associative Network model of memory, as well as the principles of encoding-retrieval specificity, and cue distinctiveness, to argue that self-generated cue mnemonics offer an intuitive means of facilitating reliable recall of personally experienced events. The use of a self-generated cue mnemonic allows for the spreading activation nature of memory, whilst also presenting an opportunity to capitalize upon cue distinctiveness. Here, we present the theoretical rationale behind the use of this technique, and highlight the distinction between a self-generated cue and a self-referent cue in autobiographical memory research. We contrast this mnemonic with a similar retrieval technique, Mental Reinstatement of Context, which is recognized as the most effective mnemonic component of the Cognitive Interview. Mental Reinstatement of Context is based upon the principle of encoding-retrieval specificity, whereby the overlap between encoded information and retrieval cue predicts the likelihood of accurate recall. However, it does not incorporate the potential additional benefit of self-generated retrieval cues. PMID:29163254

  12. Using Self-Generated Cues to Facilitate Recall: A Narrative Review

    Directory of Open Access Journals (Sweden)

    Rebecca L. Wheeler

    2017-10-01

    Full Text Available We draw upon the Associative Network model of memory, as well as the principles of encoding-retrieval specificity, and cue distinctiveness, to argue that self-generated cue mnemonics offer an intuitive means of facilitating reliable recall of personally experienced events. The use of a self-generated cue mnemonic allows for the spreading activation nature of memory, whilst also presenting an opportunity to capitalize upon cue distinctiveness. Here, we present the theoretical rationale behind the use of this technique, and highlight the distinction between a self-generated cue and a self-referent cue in autobiographical memory research. We contrast this mnemonic with a similar retrieval technique, Mental Reinstatement of Context, which is recognized as the most effective mnemonic component of the Cognitive Interview. Mental Reinstatement of Context is based upon the principle of encoding-retrieval specificity, whereby the overlap between encoded information and retrieval cue predicts the likelihood of accurate recall. However, it does not incorporate the potential additional benefit of self-generated retrieval cues.

  13. Limits on the role of retrieval cues in memory for actions: enactment effects in the absence of object cues in the environment.

    Science.gov (United States)

    Steffens, Melanie C; Buchner, Axel; Wender, Karl F; Decker, Claudia

    2007-12-01

    Verb-object phrases (open the umbrella, knock on the table) are usually remembered better if they have been enacted during study (also called subject-performed tasks) than if they have merely been learned verbally (verbal tasks). This enactment effect is particularly pronounced for phrases for which the objects (table) are present as cues in the study and test contexts. In previous studies with retrieval cues for some phrases, the enactment effect in free recall for the other phrases has been surprisingly small or even nonexistent. The present study tested whether the often replicated enactment effect in free recall can be found if none of the phrases contains context cues. In Experiment 1, we tested, and corroborated, the suppression hypothesis: The enactment effect for a given type of phrase (marker phrases) is modified by the presence or absence of cues for the other phrases in the list (experimental phrases). Experiments 2 and 3 replicated the enactment effect for phrases without cues. Experiment 2 also showed that the presence of cues either at study or at test is sufficient for obtaining a suppression effect, and Experiment 3 showed that the enactment effect may disappear altogether if retrieval cues are very salient.

  14. Oxytocin differentially modulates pavlovian cue and context fear acquisition.

    Science.gov (United States)

    Cavalli, Juliana; Ruttorf, Michaela; Pahi, Mario Rosero; Zidda, Francesca; Flor, Herta; Nees, Frauke

    2017-06-01

    Fear acquisition and extinction have been demonstrated as core mechanisms for the development and maintenance of mental disorders, with different contributions of processing cues vs contexts. The hypothalamic peptide oxytocin (OXT) may have a prominent role in this context, as it has been shown to affect fear learning. However, investigations have focused on cue conditioning, and fear extinction. Its differential role for cue and context fear acquisition is still not known. In a randomized, double-blind, placebo (PLC)-controlled design, we administered an intranasal dose of OXT or PLC before the acquisition of cue and context fear conditioning in healthy individuals (n = 52), and assessed brain responses, skin conductance responses and self-reports (valence/arousal/contingency). OXT compared with PLC significantly induced decreased responses in the nucleus accumbens during early cue and context acquisition, and decreased responses of the anterior cingulate cortex and insula during early as well as increased hippocampal response during late context, but not cue acquisition. The OXT group additionally showed significantly higher arousal in late cue and context acquisition. OXT modulates various aspects of cue and context conditioning, which is relevant from a mechanism-based perspective and might have implications for the treatment of fear and anxiety. © The Author (2017). Published by Oxford University Press.

  15. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audifying the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques, and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult, but there have also been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  16. The time course of attentional deployment in contextual cueing.

    Science.gov (United States)

    Jiang, Yuhong V; Sigstad, Heather M; Swallow, Khena M

    2013-04-01

    The time course of attention is a major characteristic on which different types of attention diverge. In addition to explicit goals and salient stimuli, spatial attention is influenced by past experience. In contextual cueing, behaviorally relevant stimuli are more quickly found when they appear in a spatial context that has previously been encountered than when they appear in a new context. In this study, we investigated the time that it takes for contextual cueing to develop following the onset of search layout cues. In three experiments, participants searched for a T target in an array of Ls. Each array was consistently associated with a single target location. In a testing phase, we manipulated the stimulus onset asynchrony (SOA) between the repeated spatial layout and the search display. Contextual cueing was equivalent for a wide range of SOAs between 0 and 1,000 ms. The lack of an increase in contextual cueing with increasing cue durations suggests that as an implicit learning mechanism, contextual cueing cannot be effectively used until search begins.

  17. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms. The noise emitted by machinery and plants must be reduced before it reaches a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, which are adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts, and screens to prevent sound immission from noise-intensive environments into the neighbourhood.

  18. Using multisensory cues to facilitate air traffic management.

    Science.gov (United States)

    Ngo, Mary K; Pierce, Russell S; Spence, Charles

    2012-12-01

    In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.

  19. Caloric restriction in the presence of attractive food cues: external cues, eating, and weight.

    Science.gov (United States)

    Polivy, Janet; Herman, C Peter; Coelho, Jennifer S

    2008-08-06

    A growing body of research on caloric restriction (CR) in many species of laboratory animals suggests that underfeeding leads to better health and longevity in the calorically-restricted animal (e.g., see Pinel, Assanand, & Lehman, 2000, Hunger, eating and ill health, Am Psychol, 55, 1105-1116, for a review). Although some objections have been raised by scientists concerned about negative psychological and behavioral sequelae of such restriction, advocates of CR continue to urge people to adopt sharply reduced eating regimes in order to increase their longevity. Yet very few people are even attempting to reap the benefits of such restriction. The present paper explores one factor that may deter many humans from drastically reducing their food consumption: the presence of abundant, attractive food cues in the environment. Research on the influence of food cues on food-related behaviors is reviewed to demonstrate that the presence of food cues makes restriction of intake more difficult.

  20. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    Science.gov (United States)

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
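    The reliability-weighted combination rule at the heart of optimal cue integration theory can be sketched in a few lines: each cue's estimate is weighted by its inverse variance, and the combined estimate is always at least as reliable as the better single cue. This is a generic illustration of the theory, not code or data from the study; the heading estimates and variances below are made-up numbers.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue
# integration. All numeric values are illustrative only.

def integrate_cues(est_visual, var_visual, est_vestibular, var_vestibular):
    """Combine two heading estimates, weighting each by its reliability
    (inverse variance). Returns the combined estimate and its variance."""
    w_vis = (1 / var_visual) / (1 / var_visual + 1 / var_vestibular)
    w_ves = 1 - w_vis
    combined = w_vis * est_visual + w_ves * est_vestibular
    combined_var = 1 / (1 / var_visual + 1 / var_vestibular)
    return combined, combined_var

# Example: visual cue says 10 deg (variance 4), vestibular says 4 deg
# (variance 1). The combined estimate lies closer to the more reliable
# vestibular cue, and its variance is lower than either cue's alone.
heading, var = integrate_cues(10.0, 4.0, 4.0, 1.0)
```

The key prediction tested behaviorally is exactly the last comment: under combined-cue conditions, discrimination thresholds should drop below the better single-cue threshold.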

  1. Tutorial on the Psychophysics and Technology of Virtual Acoustic Displays

    Science.gov (United States)

    Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1998-01-01

    Virtual acoustics, also known as 3-D sound and auralization, is the simulation of the complex acoustic field experienced by a listener within an environment. Going beyond the simple intensity panning of normal stereo techniques, the goal is to process sounds so that they appear to come from particular locations in three-dimensional space. Although loudspeaker systems are being developed, most of the recent work focuses on using headphones for playback and is the outgrowth of earlier analog techniques. For example, in binaural recording, the sound of an orchestra playing classical music is recorded through small mics in the two "ear canals" of an anthropomorphic artificial or "dummy" head placed in the audience of a concert hall. When the recorded piece is played back over headphones, the listener passively experiences the illusion of hearing the violins on the left and the cellos on the right, along with all the associated echoes, resonances, and ambience of the original environment. Current techniques use digital signal processing to synthesize the acoustical properties that people use to localize a sound source in space. Thus, they provide the flexibility of a kind of digital dummy head, allowing a more active experience in which a listener can both design and move around or interact with a simulated acoustic environment in real time. Such simulations are being developed for a variety of application areas including architectural acoustics, advanced human-computer interfaces, telepresence and virtual reality, navigation aids for the visually-impaired, and as a test bed for psychoacoustical investigations of complex spatial cues. The tutorial will review the basic psychoacoustical cues that determine human sound localization and the techniques used to measure these cues as Head-Related Transfer Functions (HRTFs) for the purpose of synthesizing virtual acoustic environments. 
The only conclusive test of the adequacy of such simulations is an operational one in which
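    The core DSP step behind such headphone-based simulations, filtering a mono source through a pair of head-related impulse responses, can be sketched as two convolutions. The two filters below are toy stand-ins that only mimic an interaural level and time difference; real systems use measured HRTFs.

```python
import numpy as np

# Toy sketch of binaural synthesis: a mono signal is convolved with a
# left- and a right-ear head-related impulse response (HRIR). These
# HRIRs are illustrative placeholders, not measured data.

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear impulse responses."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

mono = np.random.default_rng(0).standard_normal(1000)
# Source off to the left: the right ear receives the signal attenuated
# by half and delayed by two samples.
hrir_l = np.array([1.0])
hrir_r = np.array([0.0, 0.0, 0.5])
left, right = render_binaural(mono, hrir_l, hrir_r)
```

In a real-time virtual acoustic display the same operation runs block-by-block, with the HRIR pair swapped or interpolated as the listener's head moves.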

  2. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  3. Local Control of Audio Environment: A Review of Methods and Applications

    Directory of Open Access Journals (Sweden)

    Jussi Kuutti

    2014-02-01

    Full Text Available The concept of a local audio environment is to have sound playback locally restricted such that, ideally, adjacent regions of an indoor or outdoor space could exhibit their own individual audio content without interfering with each other. This would enable people to listen to their content of choice without disturbing others next to them, yet, without any headphones to block conversation. In practice, perfect sound containment in free air cannot be attained, but a local audio environment can still be satisfactorily approximated using directional speakers. Directional speakers may be based on regular audible frequencies or they may employ modulated ultrasound. Planar, parabolic, and array form factors are commonly used. The directivity of a speaker improves as its surface area and sound frequency increases, making these the main design factors for directional audio systems. Even directional speakers radiate some sound outside the main beam, and sound can also reflect from objects. Therefore, directional speaker systems perform best when there is enough ambient noise to mask the leaking sound. Possible areas of application for local audio include information and advertisement audio feed in commercial facilities, guiding and narration in museums and exhibitions, office space personalization, control room messaging, rehabilitation environments, and entertainment audio systems.
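    The design rule stated in the abstract (directivity improves with surface area and frequency) can be illustrated with the textbook circular-piston approximation, in which the first off-axis null sits near sin(theta) = 1.22 * lambda / D for a piston of diameter D. The speaker sizes and frequencies below are illustrative, not from the review.

```python
import math

# Rough sketch of why large apertures and high frequencies (including
# modulated ultrasound) give narrow beams, using the circular-piston
# first-null approximation sin(theta) = 1.22 * (c / f) / D.

def beam_halfwidth_deg(diameter_m, freq_hz, c=343.0):
    """Half-angle in degrees to the first null of a circular piston,
    or None when the wavelength is too long for a null to exist
    (i.e., the source radiates broadly)."""
    s = 1.22 * (c / freq_hz) / diameter_m
    if s >= 1.0:
        return None  # aperture small relative to wavelength: wide beam
    return math.degrees(math.asin(s))

# A 0.2 m speaker at 1 kHz has no null at all (broad radiation)...
wide = beam_halfwidth_deg(0.2, 1_000)
# ...while the same aperture driven at 40 kHz ultrasound is very narrow.
narrow = beam_halfwidth_deg(0.2, 40_000)
```

This is why ultrasound-carrier systems can achieve tight beams from modest panel sizes, at the cost of the demodulation artifacts and leakage the article discusses.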

  4. Evidence for a shared representation of sequential cues that engage sign-tracking.

    Science.gov (United States)

    Smedley, Elizabeth B; Smith, Kyle S

    2018-06-19

    Sign-tracking is a phenomenon whereby cues that predict rewards come to acquire their own motivational value (incentive salience) and attract appetitive behavior. Typically, sign-tracking paradigms have used single auditory, visual, or lever cues presented prior to a reward delivery. Yet, real world examples of events often can be predicted by a sequence of cues. We have shown that animals will sign-track to multiple cues presented in temporal sequence, and with time develop a bias in responding toward a reward distal cue over a reward proximal cue. Further, extinction of responding to the reward proximal cue directly decreases responding to the reward distal cue. One possible explanation of this result is that serial cues become representationally linked with one another. Here we provide further support of this by showing that extinction of responding to a reward distal cue directly reduces responding to a reward proximal cue. We suggest that the incentive salience of one cue can influence the incentive salience of the other cue. Copyright © 2018. Published by Elsevier B.V.

  5. Nonlocal nonlinear coupling of kinetic sound waves

    Directory of Open Access Journals (Sweden)

    O. Lyubchyk

    2014-11-01

    Full Text Available We study three-wave resonant interactions among kinetic-scale oblique sound waves in the low-frequency range below the ion cyclotron frequency. The nonlinear eigenmode equation is derived in the framework of a two-fluid plasma model. Because of dispersive modifications at small wavelengths perpendicular to the background magnetic field, these waves become a decay-type mode. We found two decay channels: one into co-propagating product waves (forward decay), and another into counter-propagating product waves (reverse decay). All wavenumbers in the forward decay are similar and hence this decay is local in wavenumber space. On the contrary, the reverse decay generates waves with wavenumbers that are much larger than in the original pump waves and is therefore intrinsically nonlocal. In general, the reverse decay is significantly faster than the forward one, suggesting a nonlocal spectral transport induced by oblique sound waves. Even with low-amplitude sound waves the nonlinear interaction rate is larger than the collisionless dissipation rate. Possible applications regarding acoustic waves observed in the solar corona, solar wind, and topside ionosphere are briefly discussed.

  6. Variation in habitat soundscape characteristics influences settlement of a reef-building coral

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

    2016-10-01

    Full Text Available Coral populations, and the productive reef ecosystems they support, rely on successful recruitment of reef-building species, beginning with settlement of dispersing larvae into habitat favourable to survival. Many substrate cues have been identified as contributors to coral larval habitat selection; however, the potential for ambient acoustic cues to influence coral settlement responses is unknown. Using in situ settlement chambers that excluded other habitat cues, larval settlement of a dominant Caribbean reef-building coral, Orbicella faveolata, was compared in response to three local soundscapes with differing acoustic and habitat properties. Differences between reef sites in the number of larvae settled in chambers isolating acoustic cues corresponded to differences in sound levels and reef characteristics, with sounds at the loudest reef generating significantly higher settlement during trials compared to the quietest site (a 29.5% increase). These results suggest that soundscapes could be an important influence on coral settlement patterns and that acoustic cues associated with reef habitat may be related to larval settlement. This study reports an effect of soundscape variation on larval settlement for a key coral species, and adds to the growing evidence that soundscapes affect marine ecosystems by influencing early life history processes of foundational species.

  7. Claimed Versus Calculated Cue-Weighting Systems for Screening Employee Applicants

    Science.gov (United States)

    Blevins, David E.

    1975-01-01

    This research compares the cue-weighting system which assessors claimed they used with the cue-weighting system one would infer they used based on multiple observations of their assessing behavior. The claimed cue-weighting systems agreed poorly with the empirically calculated cue-weighting systems for all assessors except one who utilized only…

  8. Does Contextual Cueing Guide the Deployment of Attention?

    Science.gov (United States)

    Kunar, Melina A.; Flusberg, Stephen; Horowitz, Todd S.; Wolfe, Jeremy M.

    2008-01-01

    Contextual cueing experiments show that when displays are repeated, reaction times (RTs) to find a target decrease over time even when observers are not aware of the repetition. It has been thought that the context of the display guides attention to the target. We tested this hypothesis by comparing the effects of guidance in a standard search task to the effects of contextual cueing. Firstly, in standard search, an improvement in guidance causes search slopes (derived from RT × Set Size functions) to decrease. In contrast, we found that search slopes in contextual cueing did not become more efficient over time (Experiment 1). Secondly, when guidance is optimal (e.g. in easy feature search) we still found a small, but reliable contextual cueing effect (Experiments 2a and 2b), suggesting that other factors, such as response selection, contribute to the effect. Experiment 3 supported this hypothesis by showing that the contextual cueing effect disappeared when we added interference to the response selection process. Overall, our data suggest that the relationship between guidance and contextual cueing is weak and that response selection can account for part of the effect. PMID:17683230
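
The search slopes referred to above are the fitted slopes of RT × Set Size functions; a minimal sketch of that computation, with hypothetical data, might be:

```python
# Search slope: the fitted slope of mean reaction time against display
# set size (ms per item). Flatter slopes indicate better guidance.
def search_slope(set_sizes, mean_rts):
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(mean_rts) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

# Hypothetical condition where RT grows ~25 ms per added item:
print(search_slope([4, 8, 12, 16], [500, 600, 700, 800]))  # 25.0
```

A guidance improvement would show up as this value shrinking; the finding above is that in contextual cueing it does not.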

  9. Attentional bias for craving-related (chocolate) food cues.

    Science.gov (United States)

    Kemps, Eva; Tiggemann, Marika

    2009-12-01

    In this study, we investigated attentional biases for craving-related food cues. A pictorial dot probe task was used to assess selective attentional processing of one particular highly desired food, namely chocolate, relative to that of other highly desired foods. In Experiment 1, we examined biased processing of chocolate cues in habitual (trait) chocolate cravers, whereas in Experiment 2 we investigated the effect of experimentally induced (state) chocolate cravings on such processing. As predicted, habitual chocolate cravers (Experiment 1) and individuals in whom a craving for chocolate was temporarily induced (Experiment 2) showed speeded detection of probes replacing chocolate-related pictures, demonstrating an attentional bias for chocolate cues. Subsequent examination indicated that in both experiments the observed attentional biases stemmed from difficulty in disengaging attention from chocolate cues rather than from a shift of attention toward such cues. The findings have important theoretical and practical implications.
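
The disengagement-versus-shift distinction above is computed from reaction-time contrasts between dot-probe trial types; a hedged sketch follows, in which the trial labels and values are illustrative rather than taken from the study:

```python
# Attentional bias indices from a pictorial dot-probe task.
# Trial types (hypothetical labels, mean RTs in ms):
#   rt_congruent   - probe appears where the chocolate picture was
#   rt_incongruent - probe appears where the paired neutral picture was
#   rt_baseline    - trials with two neutral pictures
def bias_scores(rt_congruent, rt_incongruent, rt_baseline):
    bias = rt_incongruent - rt_congruent      # overall attentional bias
    shift = rt_baseline - rt_congruent        # attention drawn toward cue
    disengage = rt_incongruent - rt_baseline  # difficulty leaving the cue
    return bias, shift, disengage

# Illustrative pattern: the bias is carried almost entirely by
# slowed disengagement, as the study reports.
print(bias_scores(480, 520, 485))  # (40, 5, 35)
```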

  10. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic sounds, to judge the presence or absence of abnormality based on the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated asynchronously with the rotation, based on the knowledge that acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often caused by a compulsory force accompanying the rotation, and such abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the components of normal acoustic sounds are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, abnormal sound detection sensitivity is improved. Further, since the occurrence of abnormal sound is discriminated from the actually detected sounds, other frequency components that are predicted but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
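
One way to sample only rotation-synchronized components, in the spirit of the device above, is to measure signal amplitude at harmonics of the rotation frequency. The sketch below is an illustrative single-bin DFT approach, not the device's actual method; the 25 Hz rotation rate and 40 Hz "resonance" are made-up test values:

```python
import math

def harmonic_amplitude(signal, fs, freq):
    """Amplitude of the component at `freq` Hz via a single DFT bin."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs)
             for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def rotation_synchronized_level(signal, fs, f_rot, n_harmonics=3):
    """Total amplitude at the first few harmonics of the rotation rate."""
    return sum(harmonic_amplitude(signal, fs, k * f_rot)
               for k in range(1, n_harmonics + 1))

# Synthetic check: a tone locked to a 25 Hz rotation is captured, while
# an unrelated 40 Hz resonance contributes nothing to the rotation bins.
fs, f_rot = 1000, 25.0
sig = [math.sin(2 * math.pi * 25.0 * t / fs)
       + 0.5 * math.sin(2 * math.pi * 40.0 * t / fs)
       for t in range(1000)]
print(round(rotation_synchronized_level(sig, fs, f_rot), 3))
```

Because the 40 Hz component falls outside the rotation harmonics (25, 50, 75 Hz), it is rejected, mirroring the abstract's point that asynchronous resonance sounds are removed.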

  11. Individual differences in food cue responsivity are associated with acute and repeated cocaine-induced vocalizations, but not cue-induced vocalizations.

    Science.gov (United States)

    Tripi, Jordan A; Dent, Micheal L; Meyer, Paul J

    2017-02-01

    Individuals prone to attribute incentive salience to food-associated stimuli ("cues") are also more sensitive to cues during drug seeking and drug taking. This may be due in part to a difference in sensitivity to the affective or other stimulus properties of the drug. In rats, these properties are associated with 50-kHz ultrasonic vocalizations (USVs), in that they are elicited during putative positive affective and motivational states, including in response to drugs of abuse. We sought to determine whether individual differences in the tendency to attribute incentive salience to a food cue (as measured by approach) were associated with differences in cocaine-induced USVs. We also tested whether the food cue would elicit USVs and if this response was related to approach to the food cue. In experiment 1, rats underwent Pavlovian conditioned approach (PavCA) training where they learned to associate a cue (an illuminated lever) with the delivery of a food pellet into a food cup. Subjects were categorized based on their approach to the cue ("sign-trackers") or to the food cup ("goal-trackers"). Rats subsequently underwent nine testing days in which they were given saline or cocaine (10 mg/kg i.p) and placed into a locomotor chamber. In experiment 2, rats were first tested in the locomotor chambers for one saline-treated day followed by one cocaine-treated day and then trained in PavCA. USVs were recorded from a subset of individuals during the last day of PavCA to determine if the food cue would elicit USVs. Sign-trackers produced 5-24 times more cocaine-induced 50 kHz USVs compared to goal-trackers for all days of experiment 1, and this response sensitized with repeated cocaine, only in sign-trackers. Similarly in experiment 2, individuals that produced the most cocaine-induced USVs on a single exposure also showed the greatest tendency to sign-track during PavCA. Lastly, while sign-trackers produced more USVs during PavCA generally, the cue itself did not elicit

  12. Habitat selection, facilitation, and biotic settlement cues affect distribution and performance of coral recruits in French Polynesia.

    Science.gov (United States)

    Price, Nichole

    2010-07-01

    Habitat selection can determine the distribution and performance of individuals if the precision with which sites are chosen corresponds with exposure to risks or resources. Contrastingly, facilitation can allow persistence of individuals arriving by chance and potentially maladapted to local abiotic conditions. For marine organisms, selection of a permanent attachment site at the end of their larval stage or the presence of a facilitator can be a critical determinant of recruitment success. In coral reef ecosystems, it is well known that settling planula larvae of reef-building corals use coarse environmental cues (i.e., light) for habitat selection. Although laboratory studies suggest that larvae can also use precise biotic cues produced by crustose coralline algae (CCA) to select attachment sites, the ecological consequences of biotic cues for corals are poorly understood in situ. In a field experiment exploring the relative importance of biotic cues and variability in habitat quality to recruitment of hard corals, pocilloporid and acroporid corals recruited more frequently to one species of CCA, Titanoderma prototypum, and significantly less so to other species of CCA; these results are consistent with laboratory assays from other studies. The provision of the biotic cue accurately predicted coral recruitment rates across habitats of varying quality. At the scale of CCA, corals attached to the "preferred" CCA experienced increased survivorship while recruits attached elsewhere had lower colony growth and survivorship. For reef-building corals, the behavioral selection of habitat using chemical cues both reduces the risk of incidental mortality and indicates the presence of a facilitator.

  13. In Search of the Golden Age Hip-Hop Sound (1986–1996)

    Directory of Open Access Journals (Sweden)

    Ben Duinker

    2017-09-01

    Full Text Available The notion of a musical repertoire's "sound" is frequently evoked in journalism and scholarship, but what parameters comprise such a sound? This question is addressed through a statistically-driven corpus analysis of hip-hop music released during the genre's Golden Age era. The first part of the paper presents a methodology for developing, transcribing, and analyzing a corpus of 100 hip-hop tracks released during the Golden Age. Eight categories of aurally salient musical and production parameters are analyzed: tempo, orchestration and texture, harmony, form, vocal and lyric profiles, global and local production effects, vocal doubling and backing, and loudness and compression. The second part of the paper organizes the analysis data into three trend categories: trends of change (parameters that change over time), trends of prevalence (parameters that remain generally constant across the corpus), and trends of similarity (parameters that are similar from song to song). These trends form a generalized model of the Golden Age hip-hop sound which considers both global (the whole corpus) and local (unique songs within the corpus) contexts. By operationalizing "sound" as the sum of musical and production parameters, aspects of popular music that are resistant to traditional music-analytical methods can be considered.

  14. A magnetorheological haptic cue accelerator for manual transmission vehicles

    International Nuclear Information System (INIS)

    Han, Young-Min; Noh, Kyung-Wook; Choi, Seung-Bok; Lee, Yang-Sub

    2010-01-01

    This paper proposes a new haptic cue function for manual transmission vehicles to achieve optimal gear shifting. This function is implemented on the accelerator pedal by utilizing a magnetorheological (MR) brake mechanism. By combining the haptic cue function with the accelerator pedal, the proposed haptic cue device can transmit the optimal moment of gear shifting for manual transmission to a driver without requiring the driver's visual attention. As a first step to achieve this goal, a MR fluid-based haptic device is devised to enable rotary motion of the accelerator pedal. Taking into account spatial limitations, the design parameters are optimally determined using finite element analysis to maximize the relative control torque. The proposed haptic cue device is then manufactured and its field-dependent torque and time response are experimentally evaluated. Then the manufactured MR haptic cue device is integrated with the accelerator pedal. A simple virtual vehicle emulating the operation of the engine of a passenger vehicle is constructed and put into communication with the haptic cue device. A feed-forward torque control algorithm for the haptic cue is formulated and control performances are experimentally evaluated and presented in the time domain.

  15. POST-RETRIEVAL EXTINCTION ATTENUATES ALCOHOL CUE REACTIVITY IN RATS

    Science.gov (United States)

    Cofresí, Roberto U.; Lewis, Suzanne M.; Chaudhri, Nadia; Lee, Hongjoo J.; Monfils, Marie-H.; Gonzales, Rueben A.

    2017-01-01

    BACKGROUND Conditioned responses to alcohol-associated cues can hinder recovery from alcohol use disorder (AUD). Cue exposure (extinction) therapy (CET) can reduce reactivity to alcohol cues, but its efficacy is limited by phenomena such as spontaneous recovery and reinstatement that can cause a return of conditioned responding after extinction. Using a preclinical model of alcohol cue reactivity in rats, we evaluated whether the efficacy of alcohol CET could be improved by conducting CET during the memory reconsolidation window after retrieval of a cue-alcohol association. METHODS Rats were provided with intermittent access to unsweetened alcohol. Rats were then trained to predict alcohol access based on a visual cue. Next, rats were treated with either standard extinction (n=14) or post-retrieval extinction (n=13). Rats were then tested for long-term memory of extinction and susceptibility to spontaneous recovery and reinstatement. RESULTS Despite equivalent extinction, rats treated with post-retrieval extinction exhibited reduced spontaneous recovery and reinstatement relative to rats treated with standard extinction. CONCLUSIONS Post-retrieval CET shows promise for persistently attenuating the risk to relapse posed by alcohol cues in individuals with AUD. PMID:28169439

  16. Visual cue-specific craving is diminished in stressed smokers.

    Science.gov (United States)

    Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R

    2017-09-01

    Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who forwent smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and stress task, and after a recovery period following each task. As expected, the stress and smoking images generated greater craving than neutral or motivational control images. Once smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably notwithstanding whether they are exposed to smoking image cues.

  17. Spectral information as an orientation cue in dung beetles

    OpenAIRE

    el Jundi, Basil; Foster, James J.; Byrne, Marcus J.; Baird, Emily; Dacke, Marie

    2015-01-01

    During the day, a non-uniform distribution of long and short wavelength light generates a colour gradient across the sky. This gradient could be used as a compass cue, particularly by animals such as dung beetles that rely primarily on celestial cues for orientation. Here, we tested if dung beetles can use spectral cues for orientation by presenting them with monochromatic (green and UV) light spots in an indoor arena. Beetles kept their original bearing when presented with a single light cue...

  18. Triggers of fear: perceptual cues versus conceptual information in spider phobia.

    Science.gov (United States)

    Peperkorn, Henrik M; Alpers, Georg W; Mühlberger, Andreas

    2014-07-01

    Fear reactions in spider-phobic patients can be activated by specific perceptual cues or by conceptual fear-related information. Matching perceptual fear cues and fear-related information were expected to result in maximal fear responses, perceptual fear cues alone in less fear, and information alone in the weakest responses. We used virtual reality to manipulate the available cues and information. Forty-eight phobic patients and 48 healthy participants were repeatedly exposed to either a perceptual cue, information, or a combination of both. In conditions with a fear-relevant perceptual cue, phobic patients reported increased fear compared to the condition with information only. Across exposures trials, these reactions diminished. Skin conductance in phobic patients was significantly higher in the combined than in the cue or the information condition. Perceptual cues are essential for phobic fear reactions in spider phobia. In combination with fear-relevant information, perceptual cues activate an intense and persistent fear reaction. © 2013 Wiley Periodicals, Inc.

  19. Simultaneous Processing of Noun Cue and to-be-Produced Verb in Verb Generation Task: Electromagnetic Evidence

    Directory of Open Access Journals (Sweden)

    Anna V. Butorina

    2017-05-01

    Full Text Available A long-standing but implicit assumption is that words strongly associated with a presented cue are automatically activated in the memory through rapid spread of activation within brain semantic networks. The current study was aimed to provide direct evidence of such rapid access to words’ semantic representations and to investigate its neural sources using magnetoencephalography (MEG) and a distributed source localization technique. Thirty-three neurotypical subjects underwent MEG recording during a verb generation task, which was to produce verbs related to the presented noun cues. Brain responses evoked by the noun cues were examined while manipulating the strength of association between the noun and the potential verb responses. The strong vs. weak noun-verb association led to a greater noun-related neural response at 250–400 ms after cue onset, and faster verb production. The cortical sources of the differential response were localized in the left temporal pole, previously implicated in semantic access, and the left ventrolateral prefrontal cortex (VLPFC), thought to subserve controlled semantic retrieval. The strength of the left VLPFC’s response to the nouns with strong verb associates was positively correlated to the speed of verb production. Our findings empirically validate the theoretical expectation that in case of a strongly connected noun-verb pair, successful access to the target verb representation may occur already at the stage of lexico-semantic analysis of the presented noun. Moreover, the MEG results suggest that contrary to the previous conclusion derived from fMRI studies, the left VLPFC supports selection of the target verb representations, even if they were retrieved from semantic memory rapidly and effortlessly. The discordance between MEG and fMRI findings in the verb generation task may stem from different modes of neural activation captured by phase-locked activity in MEG and slow changes of the blood-oxygen-level-dependent (BOLD) signal.

  20. Children’s identification of familiar songs from pitch and timing cues

    Directory of Open Access Journals (Sweden)

    Anna eVolkova

    2014-08-01

    Full Text Available The goal of the present study was to ascertain whether children with normal hearing and prelingually deaf children with cochlear implants could use pitch or timing cues alone or in combination to identify familiar songs. Children 4-7 years of age were required to identify the theme songs of familiar TV shows in a simple task with excerpts that preserved (1) the relative pitch and timing cues of the melody but not the original instrumentation, (2) the timing cues only (rhythm, meter, and tempo), and (3) the relative pitch cues only (pitch contour and intervals). Children with normal hearing performed at high levels and comparably across the three conditions. The performance of child implant users was well above chance levels when both pitch and timing cues were available, marginally above chance with timing cues only, and at chance with pitch cues only. This is the first demonstration that children can identify familiar songs from monotonic versions—timing cues but no pitch cues—and from isochronous versions—pitch cues but no timing cues. The study also indicates that, in the context of a very simple task, young implant users readily identify songs from melodic versions that preserve pitch and timing cues.
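
The monotonic and isochronous versions described above can be sketched as two simple transformations of a note list; the (pitch, onset) representation, function names, and values here are illustrative, not the study's stimuli:

```python
# Two cue-isolated renditions of a melody, represented as
# (midi_pitch, onset_seconds) pairs.

def monotonic(notes, fixed_pitch=60):
    """Timing cues only: keep every onset, flatten every pitch."""
    return [(fixed_pitch, onset) for _, onset in notes]

def isochronous(notes, ioi=0.5):
    """Pitch cues only: keep every pitch, space onsets evenly."""
    return [(pitch, i * ioi) for i, (pitch, _) in enumerate(notes)]

melody = [(60, 0.0), (62, 0.25), (64, 0.75), (65, 1.0)]
print(monotonic(melody))    # rhythm preserved, contour removed
print(isochronous(melody))  # contour preserved, rhythm removed
```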

  1. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...
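
Sound-feature indexing of the kind proposed might, in minimal form, map each clip to a feature vector and answer queries by nearest neighbour. This toy sketch assumes nothing about the project's actual features; the class, clip names, and vectors are all hypothetical:

```python
import math

# Toy sound search index: each clip is stored as a short feature vector
# (e.g. summary spectral statistics) and a query returns the clip whose
# features are nearest in Euclidean distance.
class SoundIndex:
    def __init__(self):
        self._entries = []  # list of (clip_id, feature_vector)

    def add(self, clip_id, features):
        self._entries.append((clip_id, list(features)))

    def query(self, features):
        """Return the id of the nearest stored clip."""
        return min(self._entries,
                   key=lambda entry: math.dist(entry[1], features))[0]

idx = SoundIndex()
idx.add("birdsong.wav", [0.9, 0.1, 0.3])
idx.add("engine.wav", [0.2, 0.8, 0.7])
print(idx.query([0.85, 0.15, 0.25]))  # birdsong.wav
```

A production system would replace the linear scan with an approximate nearest-neighbour index, but the contract (features in, clip id out) stays the same.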

  2. White sucker Catostomus commersonii respond to conspecific and sea lamprey Petromyzon marinus alarm cues but not potential predator cues

    Science.gov (United States)

    Jordbro, Ethan J.; Di Rocco, Richard T.; Imre, Istvan; Johnson, Nicholas; Brown, Grant E.

    2016-01-01

    Recent studies proposed the use of chemosensory alarm cues to control the distribution of invasive sea lamprey Petromyzon marinus populations in the Laurentian Great Lakes and necessitate the evaluation of sea lamprey chemosensory alarm cues on valuable sympatric species such as white sucker. In two laboratory experiments, 10 replicate groups (10 animals each) of migratory white suckers were exposed to deionized water (control), conspecific whole-body extract, heterospecific whole-body extract (sea lamprey) and two potential predator cues (2-phenylethylamine HCl (PEA HCl) and human saliva) during the day, and exposed to the first four of the above cues at night. White suckers avoided the conspecific and the sea lamprey whole-body extract both during the day and at night to the same extent. Human saliva did not induce avoidance during the day. PEA HCl did not induce avoidance at a higher concentration during the day, or at night at the minimum concentration that was previously shown to induce maximum avoidance by sea lamprey under laboratory conditions. Our findings suggest that human saliva and PEA HCl may be potential species-specific predator cues for sea lamprey.

  3. Evidence for greater cue reactivity among low-dependent vs. high-dependent smokers.

    Science.gov (United States)

    Watson, Noreen L; Carpenter, Matthew J; Saladin, Michael E; Gray, Kevin M; Upadhyaya, Himanshu P

    2010-07-01

    Cue reactivity paradigms are well-established laboratory procedures used to examine subjective craving in response to substance-related cues. For smokers, the relationship between nicotine dependence and cue reactivity has not been clearly established. The main aim of the present study was to further examine this relationship. Participants (N=90) were between the ages 18-40 and smoked ≥10 cigarettes per day. Average nicotine dependence (Fagerström Test for Nicotine Dependence; FTND) at baseline was 4.9 (SD=2.1). Participants completed four cue reactivity sessions consisting of two in vivo cues (smoking and neutral) and two affective imagery cues (stressful and relaxed), all counterbalanced. Craving in response to cues was assessed following each cue exposure using the Questionnaire of Smoking Urges-Brief (QSU-B). Differential cue reactivity was operationally defined as the difference in QSU scores between the smoking and neutral cues, and between the stressful and relaxed cues. Nicotine dependence was significantly and negatively associated with differential cue reactivity scores in regard to hedonic craving (QSU factor 1) for both in vivo and imagery cues, such that those who had low FTND scores demonstrated greater differential cue reactivity than those with higher FTND scores (beta=-.082; p=.037; beta=-.101; p=.023, respectively). Similar trends were found for the Total QSU and for negative reinforcement craving (QSU factor 2), but did not reach statistical significance. Under partially sated conditions, less dependent smokers may be more differentially cue reactive to smoking cues as compared to heavily dependent smokers. These findings offer methodological and interpretative implications for cue reactivity studies. © 2010 Elsevier Ltd. All rights reserved.

  4. Haven't a Cue? Mapping the CUE Space as an Aid to HRA Modeling

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; Ronald L Boring; Jacques Hugo; William Phoenix

    2012-06-01

    Advances in automation present a new modeling environment for the human reliability analysis (HRA) practitioner. Many, if not most, current-day HRA methods have their origin in characterizing and quantifying human performance in analog environments where mode awareness and system status indications are potentially less comprehensive, but simpler to comprehend at a glance when compared to advanced presentation systems. The introduction of highly complex automation has the potential to lead to: decreased levels of situation awareness caused by the need for increased monitoring; confusion regarding the often non-obvious causes of automation failures; and emergent system dependencies that formerly may have been uncharacterized. Understanding the relation of incoming cues available to operators during plant upset conditions, in conjunction with operating procedures, yields insight into the nature of the expected operator response in this control room environment. Static systems methods such as fault trees do not contain the appropriate temporal information or necessarily specify the relationship among cues leading to operator response. In this paper, we do not attempt to replace standard performance shaping factors commonly used in HRA nor offer a new HRA method; existing methods may suffice. Instead, we strive to enhance current understanding of the basis for operator response through a technique that can be used during the qualitative portion of the HRA analysis process. The CUE map is a means to visualize the relationship among salient cues in the control room that influence operator response and to show how the cognitive map of the operator changes as information is gained or lost; it is applicable to existing as well as advanced hybrid plants and small modular reactor designs. A brief application involving loss of condensate is presented, and advantages and limitations of the modeling approach and use of the CUE map are discussed.

  5. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  6. Does acute tobacco smoking prevent cue-induced craving?

    Science.gov (United States)

    Schlagintweit, Hera E; Barrett, Sean P

    2016-05-01

    Smoking cessation aids appear to be limited in their ability to prevent craving triggered by exposure to smoking-associated stimuli; however, the extent to which cue-induced cravings persist following denicotinized or nicotine-containing tobacco smoking is not known. Thirty (17 male) ⩾12-hour abstinent dependent smokers completed two sessions during which they smoked a nicotine-containing or denicotinized cigarette. Instructions regarding the nicotine content of the cigarette varied across sessions, and all participants were exposed to a neutral cue followed by a smoking cue after cigarette consumption. Craving was assessed before and after cigarette consumption and cue exposure. Reduced intentions to smoke were associated with both nicotine expectancy and nicotine administration. Smoking-associated stimuli increased craving regardless of nicotine expectancy or administration. After smoking, neither smoking-related nicotine administration nor expectation prevents increases in craving following exposure to smoking-associated stimuli. These findings suggest that cue-induced craving may be resistant to various pharmacological and psychological interventions. © The Author(s) 2016.

  7. Mental state attribution and the gaze cueing effect.

    Science.gov (United States)

    Cole, Geoff G; Smith, Daniel T; Atkinson, Mark A

    2015-05-01

    Theory of mind is said to be possessed by an individual if he or she is able to impute mental states to others. Recently, some authors have demonstrated that such mental state attributions can mediate the "gaze cueing" effect, in which observation of another individual shifts an observer's attention. One question that follows from this work is whether such mental state attributions produce mandatory modulations of gaze cueing. Employing the basic gaze cueing paradigm, together with a technique commonly used to assess mental-state attribution in nonhuman animals, we manipulated whether the gazing agent could see the same thing as the participant (i.e., the target) or had this view obstructed by a physical barrier. We found robust gaze cueing effects, even when the observed agent in the display could not see the same thing as the participant. These results suggest that the attribution of "seeing" does not necessarily modulate the gaze cueing effect.

  8. Deceptive body movements reverse spatial cueing in soccer.

    Directory of Open Access Journals (Sweden)

    Michael J Wright

    Full Text Available The purpose of the experiments was to analyse the spatial cueing effects of the movements of soccer players executing normal and deceptive (step-over) turns with the ball. Stimuli comprised normal resolution or point-light video clips of soccer players dribbling a football towards the observer then turning right or left with the ball. Clips were curtailed before or on the turn (-160, -80, 0 or +80 ms) to examine the time course of direction prediction and spatial cueing effects. Participants were divided into higher-skilled (HS) and lower-skilled (LS) groups according to soccer experience. In experiment 1, accuracy on full video clips was higher than on point-light but results followed the same overall pattern. Both HS and LS groups correctly identified direction on normal moves at all occlusion levels. For deceptive moves, LS participants were significantly worse than chance and HS participants were somewhat more accurate but nevertheless substantially impaired. In experiment 2, point-light clips were used to cue a lateral target. HS and LS groups showed faster reaction times to targets that were congruent with the direction of normal turns, and to targets incongruent with the direction of deceptive turns. The reversed cueing by deceptive moves coincided with earlier kinematic events than cueing by normal moves. It is concluded that the body kinematics of soccer players generate spatial cueing effects when viewed from an opponent's perspective. This could create a reaction time advantage when anticipating the direction of a normal move. A deceptive move is designed to turn this cueing advantage into a disadvantage. Acting on the basis of advance information, the presence of deceptive moves primes responses in the wrong direction, which may be only partly mitigated by delaying a response until veridical cues emerge.

  9. Gaze Cueing by Pareidolia Faces

    OpenAIRE

    Kohske Takahashi; Katsumi Watanabe

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cuei...

  10. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down.

  11. Shifting attention among working memory representations: testing cue type, awareness, and strategic control.

    Science.gov (United States)

    Berryhill, Marian E; Richmond, Lauren L; Shay, Cara S; Olson, Ingrid R

    2012-01-01

    It is well known that visual working memory (VWM) performance is modulated by attentional cues presented during encoding. Interestingly, retrospective cues presented after encoding, but prior to the test phase also improve performance. This improvement in performance is termed the retro-cue benefit. We investigated whether the retro-cue benefit is sensitive to cue type, whether participants were aware of their improvement in performance due to the retro-cue, and whether the effect was under strategic control. Experiment 1 compared the potential cueing benefits of abrupt onset retro-cues relying on bottom-up attention, number retro-cues relying on top-down attention, and arrow retro-cues, relying on a mixture of both. We found a significant retro-cue effect only for arrow retro-cues. In Experiment 2, we tested participants' awareness of their use of the informative retro-cue and found that they were aware of their improved performance. In Experiment 3, we asked whether participants have strategic control over the retro-cue. The retro-cue was difficult to ignore, suggesting that strategic control is low. The retro-cue effect appears to be within conscious awareness but not under full strategic control.

  12. The effect of social cues on marketing decisions

    Science.gov (United States)

    Hentschel, H. G. E.; Pan, Jiening; Family, Fereydoon; Zhang, Zhenyu; Song, Yiping

    2012-02-01

    We address the question as to what extent individuals, when given information in marketing polls on the decisions made by the previous Nr individuals questioned, are likely to change their original choices. The processes can be formulated in terms of a cost function equivalent to a Hamiltonian, which depends on p0, the original likelihood of an individual making a positive decision in the absence of social cues; J, the strength of the social cue; and Nr, the memory size. We find both positive and negative herding effects are significant. Specifically, if p0 > 1/2 social cues enhance positive decisions, while for p0 < 1/2 social cues reduce the likelihood of a positive decision.
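The positive and negative herding effects described in this record can be illustrated with a toy Monte Carlo poll. This is a hypothetical sketch, not the authors' Hamiltonian formulation: the `run_poll` function and its update rule (shifting the baseline probability p0 by J times the deviation of the last Nr answers from an even split) are invented for illustration.

```python
import random

def run_poll(p0, J, Nr, n_respondents=10000, seed=1):
    """Toy herding simulation: each respondent sees the last Nr answers
    and shifts the baseline probability p0 by J * (mean of cue - 1/2)."""
    rng = random.Random(seed)
    answers = []
    for _ in range(n_respondents):
        recent = answers[-Nr:]
        shift = J * (sum(recent) / len(recent) - 0.5) if recent else 0.0
        p = min(1.0, max(0.0, p0 + shift))  # clamp to a valid probability
        answers.append(1 if rng.random() < p else 0)
    return sum(answers) / len(answers)

# With p0 > 1/2, the social cue amplifies positive decisions (positive herding);
# by symmetry, p0 < 1/2 would be pushed further down (negative herding).
base = run_poll(p0=0.6, J=0.0, Nr=5)
herded = run_poll(p0=0.6, J=0.4, Nr=5)
```

In this toy model the herded poll settles near the fixed point p = (p0 - J/2) / (1 - J), above the baseline whenever p0 > 1/2.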

  13. Smoking, food, and alcohol cues on subsequent behavior: a qualitative systematic review.

    Science.gov (United States)

    Veilleux, Jennifer C; Skinner, Kayla D

    2015-03-01

    Although craving is a frequent phenomenon in addictive behaviors, and laboratory paradigms have robustly established that presentation of cues can elicit self-reported craving responses, extant work has not established whether cue exposure influences subsequent behavior. We systematically review extant literature assessing the effects of exposure to smoking, food, and alcohol cues on behavioral outcomes framed by three questions: (1) Is there value in distinguishing between the effects of cue exposure on behavior from the responses to cues (e.g., self-reported craving) predicting behavior?; (2) What are the effects of cues on behavior beyond lapse, such as broadly considering both target-syntonic (e.g., do cigarette cues predict smoking-related behaviors) and target-dystonic behaviors (e.g., do cigarette cues predict other outcomes besides smoking)?; (3) What are the lessons to be learned from examining cue exposure studies across smoking, food and alcohol domains? Evidence generally indicates an effect of cue exposure on both target-syntonic and target-dystonic behavior, and that self-report cue-reactivity predicts immediate target-syntonic outcomes. Effects of smoking, food and alcohol cues on behavior are compared to elucidate generalizations about the effects of cue exposure as well as methodological differences that may serve the study of craving in the future. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  15. Applying extinction research and theory to cue-exposure addiction treatments.

    Science.gov (United States)

    Conklin, Cynthia A; Tiffany, Stephen T

    2002-02-01

    To evaluate the efficacy of cue-exposure addiction treatment and review modern animal learning research to generate recommendations for substantially enhancing the effectiveness of this treatment. Meta-analysis of cue-exposure addiction treatment outcome studies (N=9), review of animal extinction research and theory, and evaluation of whether major principles from this literature are addressed adequately in cue-exposure treatments. The meta-analytical review showed that there is no consistent evidence for the efficacy of cue-exposure treatment as currently implemented. Moreover, procedures derived from the animal learning literature that should maximize the potential of extinction training are rarely used in cue-exposure treatments. Given what is known from animal extinction theory and research about extinguishing learned behavior, it is not surprising that cue-exposure treatments so often fail. This paper reviews current animal research regarding the most salient threats to the development and maintenance of extinction, and suggests several major procedures for increasing the efficacy of cue-exposure addiction treatment.

  16. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    Science.gov (United States)

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  17. Sound Art and Spatial Practices: Situating Sound Installation Art Since 1958

    OpenAIRE

    Ouzounian, Gascia

    2008-01-01

    This dissertation examines the emergence and development of sound installation art, an under-recognized tradition that has developed between music, architecture, and media art practices since the late 1950s. Unlike many musical works, which are concerned with organizing sounds in time, sound installations organize sounds in space; they thus necessitate new theoretical and analytical models that take into consideration the spatial situated-ness of sound. Existing discourses on “spatial sound” privile...

  18. Altered Brain Reactivity to Game Cues After Gaming Experience.

    Science.gov (United States)

    Ahn, Hyeon Min; Chung, Hwan Jun; Kim, Sang Hee

    2015-08-01

    Individuals who play Internet games excessively show elevated brain reactivity to game-related cues. This study attempted to test whether this elevated cue reactivity observed in game players is a result of repeated exposure to Internet games. Healthy young adults without a history of excessively playing Internet games were recruited, and they were instructed to play an online Internet game for 2 hours/day for five consecutive weekdays. Two control groups were used: the drama group, which viewed a fantasy TV drama, and the no-exposure group, which received no systematic exposure. All participants performed a cue reactivity task with game, drama, and neutral cues in the brain scanner, both before and after the exposure sessions. The game group showed an increased reactivity to game cues in the right ventrolateral prefrontal cortex (VLPFC). The degree of VLPFC activation increase was positively correlated with the self-reported increase in desire for the game. The drama group showed an increased cue reactivity in response to the presentation of drama cues in the caudate, posterior cingulate, and precuneus. The results indicate that exposure to either Internet games or TV dramas elevates the reactivity to visual cues associated with the particular exposure. The exact elevation patterns, however, appear to differ depending on the type of media experienced. How changes in each of the regions contribute to the progression to pathological craving warrants a future longitudinal study.

  19. Interference from retrieval cues in Parkinson's disease.

    Science.gov (United States)

    Crescentini, Cristiano; Marin, Dario; Del Missier, Fabio; Biasutti, Emanuele; Shallice, Tim

    2011-11-01

    Existing studies on memory interference in Parkinson's disease (PD) patients have provided mixed results and it is unknown whether PD patients have problems in overcoming interference from retrieval cues. We investigated this issue by using a part-list cuing paradigm. In this paradigm, after the study of a list of items, the presentation of some of these items as retrieval cues hinders the recall of the remaining ones. We tested PD patients' (n = 19) and control participants' (n = 16) episodic memory in the presence and absence of part-list cues, using initial-letter probes, and following either weak or strong serial associative encoding of list items. Both PD patients and control participants showed a comparable and significant part-list cuing effect after weak associative encoding (13% vs. 12% decrease in retrieval in part-list cuing vs. no part-list cuing (control) conditions in PD patients and control participants, respectively), denoting a similar effect of cue-driven interference in the two populations when a serial retrieval strategy is hard to develop. However, only PD patients showed a significant part-list cuing effect after strong associative encoding (20% vs. 5% decrease in retrieval in patients and controls, respectively). When encoding promotes the development of an effective serial retrieval strategy, the presentation of part-list cues has a specifically disruptive effect in PD patients. This indicates problems in strategic retrieval, probably related to PD patients' increased tendency to rely on external cues. Findings in control conditions suggest that less effective encoding may have contributed to PD patients' memory performance.

  20. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
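The optimal-control formulation this record mentions rests on iterating a matrix Riccati equation to obtain a time-varying feedback gain. As a generic illustration of that recursion (not the paper's algorithm: the double-integrator plant, weighting matrices, and horizon below are invented for the sketch), one backward step of the discrete-time Riccati difference equation and the resulting stabilizing gain can be written as:

```python
import numpy as np

def riccati_step(P, A, B, Q, R):
    """One backward step of the discrete-time Riccati difference equation,
    the recursion underlying a time-varying LQR control law u = -K x."""
    BT_P = B.T @ P
    K = np.linalg.solve(R + BT_P @ B, BT_P @ A)  # time-varying gain
    P_next = Q + A.T @ P @ (A - B @ K)
    return P_next, K

# Double-integrator toy plant (position and velocity), dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)          # state penalty
R = np.array([[1.0]])  # control penalty

P = Q.copy()
for _ in range(500):   # iterate toward the steady-state solution
    P, K = riccati_step(P, A, B, Q, R)

# The closed-loop system A - B K should be stable:
# all eigenvalues inside the unit circle.
eigs = np.linalg.eigvals(A - B @ K)
```

In a real-time setting such as the one described, the appeal of updating P step by step (rather than solving the algebraic Riccati equation outright) is that the gain K can track time-varying dynamics and cost weights.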

  1. The Accuracy Enhancing Effect of Biasing Cues

    NARCIS (Netherlands)

    W. Vanhouche (Wouter); S.M.J. van Osselaer (Stijn)

    2009-01-01

    Extrinsic cues such as price and irrelevant attributes have been shown to bias consumers’ product judgments. Results in this article replicate those findings in pretrial judgments but show that such biasing cues can improve quality judgments at a later point in time. Initially biasing

  2. Requirement of Dopamine Signaling in the Amygdala and Striatum for Learning and Maintenance of a Conditioned Avoidance Response

    Science.gov (United States)

    Darvas, Martin; Fadok, Jonathan P.; Palmiter, Richard D.

    2011-01-01

    Two-way active avoidance (2WAA) involves learning Pavlovian (association of a sound cue with a foot shock) and instrumental (shock avoidance) contingencies. To identify regions where dopamine (DA) is involved in mediating 2WAA, we restored DA signaling in specific brain areas of dopamine-deficient (DD) mice by local reactivation of conditionally…

  3. Cue-induced craving for marijuana in cannabis-dependent adults.

    Science.gov (United States)

    Lundahl, Leslie H; Johanson, Chris-Ellyn

    2011-06-01

    Recent interest in the development of medications for treatment of cannabis-use disorders indicates the need for laboratory models to evaluate potential compounds prior to undertaking clinical trials. To investigate whether a cue-reactivity paradigm could induce marijuana craving in cannabis-dependent adults, 16 (eight female) cannabis-dependent and 16 (eight female) cannabis-naïve participants were exposed to neutral and marijuana-related cues, and subsequent changes in mood, self-reported craving, and physiologic function were assessed. Significant Group X cue interactions were found on all three VAS craving indices as well as on the Compulsivity scale of the Marijuana Craving Questionnaire-Brief Form (MCQ-BF). Cannabis-dependent individuals responded to marijuana-related cues with significantly increased reports of marijuana craving compared to neutral cue exposure, although there were no cue-induced changes in any of the physiological measures. There were no significant gender differences on any of the measures. These results indicate that marijuana craving can be induced and assessed in cannabis-dependent, healthy adults within a laboratory setting, and support the need for further research of the cue reactivity paradigm in the development of medications to treat cannabis-use disorders. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  4. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    Science.gov (United States)

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to the improvement in visual search performance that occurs when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  5. Responsivity to food cues in bulimic women and controls.

    Science.gov (United States)

    Staiger, P; Dawe, S; McCarthy, R

    2000-08-01

    The current study investigated responsivity to individualized food cues consisting of binge/favourite foods in 17 women with bulimia nervosa (BN) and 17 women with no history or current symptoms of eating disorders (C). The hypothesis that increasing cue salience would be associated with an increase in responsivity was tested by comparison of self-reported urges, affective responses and salivation to the sight and smell (SS) and the sight, smell and taste (SST) of a binge/favourite food compared to a neutral stimulus (lettuce leaf). As predicted, the BN group reported a greater urge to binge and higher levels of stress/arousal to selected binge/favourite food cues compared to the C group. The BN group also reported lower confidence to resist the urge to binge and control over food intake compared to the C group. Further, a series of planned comparisons in the BN group found that the urge to binge, stress, and loss of control were greater when participants were exposed to the SST cue than to the SS cue. There was no difference between the groups in salivary responsivity to food cues. These results are discussed in terms of a conditioning model of cue reactivity. Copyright 2000 Academic Press.

  6. Interpreting instructional cues in task switching procedures: the role of mediator retrieval.

    Science.gov (United States)

    Logan, Gordon D; Schneider, Darryl W

    2006-03-01

    In 3 experiments the role of mediators in task switching with transparent and nontransparent cues was examined. Subjects switched between magnitude (greater or less than 5) and parity (odd or even) judgments of single digits. A cue-target congruency effect indicated mediator use: subjects responded faster to congruent cue-target combinations (e.g., ODD-3) than to incongruent cue-target combinations (e.g., ODD-4). Experiment 1 revealed significant congruency effects with transparent word cues (ODD, EVEN, HIGH, and LOW) and with relatively transparent letter cues (O, E, H, and L) but not with nontransparent letter cues (D, V, G, and W). Experiment 2 revealed significant congruency effects after subjects who were trained with nontransparent letter cues were informed of the relations between cues and word mediators halfway through the experiment. Experiment 3 showed that congruency effects with relatively transparent letter cues diminished over 10 sessions of practice, suggesting that subjects used mediators less as practice progressed. The results are discussed in terms of the role of mediators in interpreting instructional cues.

  7. Bats without borders: Predators learn novel prey cues from other predatory species.

    Science.gov (United States)

    Patriquin, Krista J; Kohles, Jenna E; Page, Rachel A; Ratcliffe, John M

    2018-03-01

    Learning from others allows individuals to adapt rapidly to environmental change. Although conspecifics tend to be reliable models, heterospecifics with similar resource requirements may be suitable surrogates when conspecifics are few or unfamiliar with recent changes in resource availability. We tested whether Trachops cirrhosus, a gleaning bat that localizes prey using their mating calls, can learn about novel prey from conspecifics and the sympatric bat Lophostoma silvicolum. Specifically, we compared the rate for naïve T. cirrhosus to learn an unfamiliar tone from either a trained conspecific or heterospecific alone through trial and error or through social facilitation. T. cirrhosus learned this novel cue from L. silvicolum as quickly as from conspecifics. This is the first demonstration of social learning of a novel acoustic cue in bats and suggests that heterospecific learning may occur in nature. We propose that auditory-based social learning may help bats learn about unfamiliar prey and facilitate their adaptive radiation.

  8. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    Science.gov (United States)

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

    We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to stimuli foreground over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.

  9. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  10. Boundary stabilization of memory-type thermoelasticity with second sound

    Science.gov (United States)

    Mustafa, Muhammad I.

    2012-08-01

    In this paper, we consider an n-dimensional thermoelastic system of second sound with a viscoelastic damping localized on a part of the boundary. We establish an explicit and general decay rate result that allows a wider class of relaxation functions and generalizes previous results existing in the literature.

  11. Mind your pricing cues.

    Science.gov (United States)

    Anderson, Eric; Simester, Duncan

    2003-09-01

    For most of the items they buy, consumers don't have an accurate sense of what the price should be. Ask them to guess how much a four-pack of 35-mm film costs, and you'll get a variety of wrong answers: Most people will underestimate; many will only shrug. Research shows that consumers' knowledge of the market is so far from perfect that it hardly deserves to be called knowledge at all. Yet people happily buy film and other products every day. Is this because they don't care what kind of deal they're getting? No. Remarkably, it's because they rely on retailers to tell them whether they're getting a good price. In subtle and not-so-subtle ways, retailers send signals to customers, telling them whether a given price is relatively high or low. In this article, the authors review several common pricing cues retailers use--"sale" signs, prices that end in 9, signpost items, and price-matching guarantees. They also offer some surprising facts about how--and how well--those cues work. For instance, the authors' tests with several mail-order catalogs reveal that including the word "sale" beside a price can increase demand by more than 50%. The practice of using a 9 at the end of a price to denote a bargain is so common, you'd think customers would be numb to it. Yet in a study the authors did involving a women's clothing catalog, they increased demand by a third just by changing the price of a dress from $34 to $39. Pricing cues are powerful tools for guiding customers' purchasing decisions, but they must be applied judiciously. Used inappropriately, the cues may breach customers' trust, reduce brand equity, and give rise to lawsuits.

  12. Craving by imagery cue reactivity in opiate dependence following detoxification

    OpenAIRE

    Behera, Debakanta; Goswami, Utpal; Khastgir, Udayan; Kumar, Satindra

    2003-01-01

    Background: Frequent relapses in opioid addiction may be a result of abstinence-emergent craving. Exposure to various stimuli associated with drug use (drug cues) may trigger craving as a conditioned response to “drug cues”. Aims: The present study explored the effects of imagery cue exposure on psychophysiological mechanisms of craving, viz. autonomic arousal, in detoxified opiate addicts. Methodology: Opiate-dependent subjects (N=38) following detoxification underwent imagery cue reactivity t...

  13. Bayesian integration of position and orientation cues in perception of biological and non-biological dynamic forms

    Directory of Open Access Journals (Sweden)

    Steven Matthew Thurman

    2014-02-01

    Full Text Available Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
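The reliability-weighted integration this record describes can be illustrated with the standard Bayesian (inverse-variance) cue-combination rule for Gaussian estimates. This is a generic sketch of the principle, not the paper's template-matching implementation; the function name and numeric values are illustrative only.

```python
def integrate_cues(x_pos, var_pos, x_ori, var_ori):
    """Maximum-likelihood combination of two Gaussian cue estimates:
    each cue is weighted in proportion to its reliability (inverse variance),
    and the combined estimate is more reliable than either cue alone."""
    w_pos = (1.0 / var_pos) / (1.0 / var_pos + 1.0 / var_ori)
    w_ori = 1.0 - w_pos
    x_hat = w_pos * x_pos + w_ori * x_ori
    var_hat = 1.0 / (1.0 / var_pos + 1.0 / var_ori)
    return x_hat, var_hat

# As the orientation cue becomes noisier, position increasingly dominates,
# mirroring the trade-off reported in the abstract.
x_hat, var_hat = integrate_cues(x_pos=0.0, var_pos=1.0, x_ori=1.0, var_ori=4.0)
# x_hat ≈ 0.2 (pulled toward the more reliable position cue); var_hat ≈ 0.8
```

Note that the combined variance (≈ 0.8) is smaller than that of either cue alone, the hallmark of optimal cue integration.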

  14. Drinkers’ memory bias for alcohol picture cues in explicit and implicit memory tasks

    Science.gov (United States)

    Nguyen-Louie, Tam T.; Buckman, Jennifer F.; Ray, Suchismita

    2016-01-01

    Background: Alcohol cues can bias attention and elicit emotional reactions, especially in drinkers. Yet, little is known about how alcohol cues affect explicit and implicit memory processes, and how memory for alcohol cues is affected by acute alcohol intoxication. Methods: Young adult participants (N=161) were randomly assigned to alcohol, placebo, or control beverage conditions. Following beverage consumption, they were shown neutral, emotional and alcohol-related picture cues. Participants then completed free recall and repetition priming tasks to test explicit and implicit memory, respectively, for picture cues. Average blood alcohol concentration for the alcohol group was 74 ± 13 mg/dl when memory testing began. Two mixed linear model analyses were conducted to examine the effects of beverage condition, picture cue type, and their interaction on explicit and implicit memory. Results: Picture cue type and beverage condition each significantly affected explicit recall of picture cues, whereas only picture cue type significantly influenced repetition priming. Individuals in the alcohol condition recalled significantly fewer pictures than those in other conditions, regardless of cue type. Both free recall and repetition priming were greater for emotional and alcohol-related cues compared to neutral picture cues. No interaction effects were detected. Conclusions: Young adult drinkers showed enhanced explicit and implicit memory processing of alcohol cues compared to emotionally neutral cues. This enhanced processing for alcohol cues was on par with that seen for positive emotional cues. Acute alcohol intoxication did not alter this preferential memory processing for alcohol cues over neutral cues. PMID:26811126

  15. Drinkers' memory bias for alcohol picture cues in explicit and implicit memory tasks.

    Science.gov (United States)

    Nguyen-Louie, Tam T; Buckman, Jennifer F; Ray, Suchismita; Bates, Marsha E

    2016-03-01

    Alcohol cues can bias attention and elicit emotional reactions, especially in drinkers. Yet, little is known about how alcohol cues affect explicit and implicit memory processes, and how memory for alcohol cues is affected by acute alcohol intoxication. Young adult participants (N=161) were randomly assigned to alcohol, placebo, or control beverage conditions. Following beverage consumption, they were shown neutral, emotional, and alcohol-related picture cues. Participants then completed free recall and repetition priming tasks to test explicit and implicit memory, respectively, for the picture cues. Average blood alcohol concentration for the alcohol group was 74 ± 13 mg/dl when memory testing began. Two mixed linear model analyses were conducted to examine the effects of beverage condition, picture cue type, and their interaction on explicit and implicit memory. Picture cue type and beverage condition each significantly affected explicit recall of picture cues, whereas only picture cue type significantly influenced repetition priming. Individuals in the alcohol condition recalled significantly fewer pictures than those in the other conditions, regardless of cue type. Both free recall and repetition priming were greater for emotional and alcohol-related cues than for neutral picture cues. No interaction effects were detected. Young adult drinkers showed enhanced explicit and implicit memory processing of alcohol cues compared with emotionally neutral cues. This enhanced processing of alcohol cues was on par with that seen for positive emotional cues. Acute alcohol intoxication did not alter this preferential memory processing of alcohol cues over neutral cues. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
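    The study above uses a 3 (beverage: alcohol/placebo/control, between subjects) × 3 (cue type: neutral/emotional/alcohol, within subjects) factorial design. A minimal sketch, using invented toy recall scores (not the study's data), of how cell means for such a design are tabulated before fitting a mixed linear model:

    ```python
    from collections import defaultdict
    from statistics import mean

    # Toy free-recall scores (hypothetical, not the study's data):
    # (participant, beverage, cue_type, pictures recalled)
    scores = [
        ("p1", "alcohol", "neutral", 2), ("p1", "alcohol", "alcohol", 5),
        ("p2", "placebo", "neutral", 4), ("p2", "placebo", "alcohol", 6),
        ("p3", "control", "neutral", 4), ("p3", "control", "alcohol", 7),
    ]

    # Cell means for the beverage x cue-type factorial design
    cells = defaultdict(list)
    for _pid, beverage, cue, recalled in scores:
        cells[(beverage, cue)].append(recalled)
    cell_means = {cell: mean(vals) for cell, vals in cells.items()}
    print(cell_means[("alcohol", "neutral")])  # 2
    ```

    In the actual analysis, a mixed linear model would additionally include a random intercept per participant to account for the repeated (within-subject) cue-type factor.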

  16. Crystal structures of E. coli laccase CueO at different copper concentrations

    International Nuclear Information System (INIS)

    Li Xu; Wei Zhiyi; Zhang Min; Peng Xiaohui; Yu Guangzhe; Teng Maikun; Gong Weimin

    2007-01-01

    CueO is a putative bacterial laccase and a good candidate for large-scale industrial application. Four CueO crystal structures were determined at different copper concentrations, demonstrating low copper occupancy in apo-CueO and a slow copper-reconstitution process in CueO supplied with exogenous copper. These observations explain the copper dependence of CueO oxidase activity. Structural comparison between CueO and three fungal laccases indicates that Glu106 in CueO constitutes the primary obstacle to reconstitution of the trinuclear copper site; mutation of Glu106 to Phe enhanced CueO oxidase activity, supporting this hypothesis. In addition, an extra α-helix from Leu351 to Gly378 covers the substrate-binding pocket of CueO and might compromise electron transfer from the substrate to the type I copper.

  17. G-cueing microcontroller (a microprocessor application in simulators)

    Science.gov (United States)

    Horattas, C. G.

    1980-01-01

    A g-cueing microcontroller is described that consists of a tandem pair of microprocessors dedicated to simulating the pilot-sensed cues caused by gravity effects. This task includes execution of a g-cueing model that drives actuators to alter the configuration of the pilot's seat. The g-cueing microcontroller receives acceleration commands from the aerodynamics model in the main computer and creates the stimuli that reproduce the physical acceleration effects of the aircraft seat on the pilot's anatomy. One of the two microprocessors is a fixed-instruction processor that performs all control and interface functions; the other, a specially designed bipolar bit-slice microprocessor, is a microprogrammable processor dedicated to all arithmetic operations. The two processors communicate through a shared memory. The g-cueing microcontroller contains its own dedicated I/O conversion modules for interfacing with the seat actuators and controls, and a DMA controller for interfacing with the simulation computer. Any application that can be microcoded within the available memory, real time, and I/O channels could be implemented on the same controller.
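    The shared-memory handoff described above can be sketched in miniature: a "control" processor posts acceleration commands into a shared mailbox, and an "arithmetic" processor turns them into clamped seat-actuator drive values. All names, the gain, and the clamping limit below are ours, standing in for the real g-cueing model:

    ```python
    # Shared memory through which the two processors communicate
    shared_memory = {"accel_cmd": None, "seat_cmd": None}

    def control_processor_write(accel_g):
        """Fixed-instruction processor: receives the aerodynamics model's
        acceleration command and posts it to shared memory."""
        shared_memory["accel_cmd"] = accel_g

    def arithmetic_processor_step(gain=0.5, limit=1.0):
        """Bit-slice arithmetic processor: reads the posted command and
        computes a clamped seat displacement (a stand-in for the real
        g-cueing model's arithmetic)."""
        accel = shared_memory["accel_cmd"]
        if accel is not None:
            shared_memory["seat_cmd"] = max(-limit, min(limit, gain * accel))

    control_processor_write(1.5)   # 1.5 g commanded by the aerodynamics model
    arithmetic_processor_step()
    print(shared_memory["seat_cmd"])  # 0.75, within the actuator limits
    ```

    The clamp reflects the key constraint of seat-based cueing: actuator travel is bounded, so large sustained accelerations must be mapped into a limited displacement range.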

  18. The effect of cue content on retrieval from autobiographical memory.

    Science.gov (United States)

    Uzer, Tugba; Brown, Norman R

    2017-01-01

    It has long been argued that personal memories are usually generated through an effortful search process in word-cueing studies. However, recent research (Uzer, Lee, & Brown, 2012) shows that direct retrieval of autobiographical memories in response to word cues is common. This invites the question of whether the direct-retrieval phenomenon generalizes beyond the standard laboratory paradigm. Here we investigated the prevalence of direct retrieval of autobiographical memories cued by specific, individuated cues versus generic cues. In Experiment 1, participants retrieved memories in response to cues from their own lives (e.g., the names of friends) and generic words (e.g., chair). In Experiment 2, participants provided their personal cues two or three months before coming to the lab (min: 75 days; max: 100 days). In each experiment, RT was measured and participants reported on each trial whether memories were directly retrieved or generated. Results showed that personal cues elicited a high rate of direct retrieval. Personal cues were more likely to elicit direct retrieval than generic word cues, and as a consequence, participants responded faster, on average, to the former than to the latter. These results challenge the constructive view of autobiographical memory and suggest that autobiographical memories consist of pre-stored event representations that are largely governed by associative mechanisms. They also raise theoretically interesting questions, such as why we are not overwhelmed by directly retrieved memories cued by our familiar everyday surroundings. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The Influence of Age-Related Cues on Health and Longevity.

    Science.gov (United States)

    Hsu, Laura M; Chung, Jaewoo; Langer, Ellen J

    2010-11-01

    Environmental cues that signal aging may directly and indirectly prime diminished capacity. Similarly, the absence of these cues may prime improved health. The authors investigated the effects of age cues on health and longevity in five very different settings. The findings include the following: First, women who think they look younger after having their hair colored or cut show a decrease in blood pressure and appear younger to independent raters in photographs in which their hair is cropped out. Second, clothing is an age-related cue, and uniforms eliminate it: those who wear work uniforms have lower morbidity than those who earn the same amount of money but do not wear work uniforms. Third, baldness cues old age; men who bald prematurely see an older self and therefore may age faster, showing a higher risk of prostate cancer and coronary heart disease than men who do not bald prematurely. Fourth, women who bear children later in life are surrounded by younger age-related cues: older mothers have a longer life expectancy than women who bear children earlier in life. Last, large spousal age differences produce age-incongruent cues: younger spouses live shorter lives and older spouses live longer lives than do controls. © The Author(s) 2010.

  20. Slow-wave metamaterial open panels for efficient reduction of low-frequency sound transmission

    Science.gov (United States)

    Yang, Jieun; Lee, Joong Seok; Lee, Hyeong Rae; Kang, Yeon June; Kim, Yoon Young

    2018-02-01

    Sound transmission reduction is typically governed by the mass law, requiring thicker panels to handle lower frequencies. When open holes must be inserted in panels for heat transfer, ventilation, or other purposes, the efficient reduction of sound transmission through holey panels becomes difficult, especially in the low-frequency ranges. Here, we propose slow-wave metamaterial open panels that can dramatically lower the working frequencies of sound transmission loss. Global resonances originating from slow waves realized by multiply inserted, elaborately designed subwavelength rigid partitions between two thin holey plates contribute to sound transmission reductions at lower frequencies. Owing to the dispersive characteristics of the present metamaterial panels, local resonances that trap sound in the partitions also occur at higher frequencies, exhibiting negative effective bulk moduli and zero effective velocities. As a result, low-frequency broadened sound transmission reduction is realized efficiently in the present metamaterial panels. The theoretical model of the proposed metamaterial open panels is derived using an effective medium approach and verified by numerical and experimental investigations.
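    The mass law invoked above can be quantified with the standard normal-incidence approximation TL ≈ 20·log10(f·m) − 47 dB, where f is the frequency in Hz and m the panel's surface density in kg/m². This makes the low-frequency penalty explicit: halving the frequency costs about 6 dB of transmission loss unless the panel's mass is doubled. A minimal sketch (the function and parameter names are ours):

    ```python
    import math

    def mass_law_tl(surface_density_kg_m2, freq_hz):
        """Normal-incidence mass-law transmission loss in dB,
        using the standard approximation TL = 20*log10(f*m) - 47."""
        return 20 * math.log10(freq_hz * surface_density_kg_m2) - 47

    # Doubling the frequency (or the surface density) adds ~6 dB:
    tl_low = mass_law_tl(10.0, 125.0)   # 10 kg/m^2 panel at 125 Hz
    tl_high = mass_law_tl(10.0, 250.0)  # same panel, one octave higher
    print(round(tl_high - tl_low, 1))   # 6.0 dB per octave
    ```

    The slow-wave resonances proposed in the paper aim to beat exactly this scaling, adding transmission-loss peaks at frequencies where a mass-law panel of the same weight would perform poorly.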