WorldWideScience

Sample records for cercopithecus campbelli auditory

  1. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available The last decades have provided evidence of auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours, human girls: from the same vs different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls: T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found in humans and monkeys respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  2. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Science.gov (United States)

    Basile, Muriel; Lemasson, Alban; Blois-Heulin, Catherine

    2009-07-17

    The last decades have provided evidence of auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours, human girls: from the same vs different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls: T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found in humans and monkeys respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.
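    The laterality result above is a Wilcoxon signed-rank test on per-subject side-bias scores. The Python sketch below only illustrates that kind of test; the laterality indices are made up (right-turns minus left-turns per subject is an assumed scoring, not necessarily the authors' exact measure). It shows that a small sample biased entirely toward one side yields a statistic of T = 0, the value reported for the monkeys.

        # Illustrative only: hypothetical per-subject laterality scores, not the study's data.
        # Positive = more orientations to the right, negative = more orientations to the left.
        from scipy.stats import wilcoxon

        laterality_scores = [2, 5, 1, 4, 6, 3]  # e.g. right-turns minus left-turns for 6 subjects

        # One-sample Wilcoxon signed-rank test against the null of no side bias (median = 0).
        result = wilcoxon(laterality_scores)
        print(f"T = {result.statistic}, p = {result.pvalue:.3f}")  # all-positive scores give T = 0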

  3. Weissella confusa Infection in Primate (Cercopithecus mona)

    OpenAIRE

    Vela, Ana I.; Porrero, Concepción; Goyache, Joaquín; Nieto, Ana; Sánchez, Belen; Briones, Víctor; Moreno, Miguel Angel; Domínguez, Lucas; Fernández-Garayzábal, José F.

    2003-01-01

    We describe systemic infection by Weissella confusa in a mona monkey (Cercopithecus mona) on the basis of microbiologic, molecular genetic, and histologic data. The same strain of W. confusa, as determined by pulsed-field gel electrophoresis, was isolated in pure culture from the primate’s brain, liver, spleen, and intestine. Histologic lesions showed inflammatory infiltrates mainly composed of neutrophils, indicating an acute septicemic process.

  4. Albino and pink-eyed dilution mutants in the Russian dwarf hamster Phodopus campbelli.

    Science.gov (United States)

    Robinson, R

    1996-01-01

    The coat color mutant genes albino (c) and pink-eyed dilution (p) are described in the dwarf hamster species Phodopus campbelli. Both genes are inherited as recessive to normal. Tests for linkage between the two genes gave negative results. The apparent absence of linkage is contrasted with linkage between homologous alleles c and p in other species of rodents.

  5. Bat Predation by Cercopithecus Monkeys: Implications for Zoonotic Disease Transmission.

    Science.gov (United States)

    Tapanes, Elizabeth; Detwiler, Kate M; Cords, Marina

    2016-06-01

    The relationship between bats and primates, which may contribute to zoonotic disease transmission, is poorly documented. We provide the first behavioral accounts of predation on bats by Cercopithecus monkeys, both of which are known to harbor zoonotic disease. We witnessed 13 bat predation events over 6.5 years in two forests in Kenya and Tanzania. Monkeys sometimes had prolonged contact with the bat carcass, consuming it entirely. All predation events occurred in forest-edge or plantation habitat. Predator-prey relations between bats and primates are little considered by disease ecologists, but may contribute to transmission of zoonotic disease, including Ebolavirus.

  6. Adapting to alcohol: Dwarf hamster (Phodopus campbelli) ethanol consumption, sensitivity, and hoard fermentation.

    Science.gov (United States)

    Lupfer, Gwen; Murphy, Eric S; Merculieff, Zoe; Radcliffe, Kori; Duddleston, Khrystyne N

    2015-06-01

    Ethanol consumption and sensitivity in many species are influenced by the frequency with which ethanol is encountered in their niches. In Experiment 1, dwarf hamsters (Phodopus campbelli) with ad libitum access to food and water consumed high amounts of unsweetened alcohol solutions. Their consumption of 15%, but not 30%, ethanol was reduced when they were fed a high-fat diet; a high carbohydrate diet did not affect ethanol consumption. In Experiment 2, intraperitoneal injections of ethanol caused significant dose-related motor impairment. Much larger doses administered orally, however, had no effect. In Experiment 3, ryegrass seeds, a common food source for wild dwarf hamsters, supported ethanol fermentation. Results of these experiments suggest that dwarf hamsters may have adapted to consume foods in which ethanol production naturally occurs.

  7. Abnormal pairing of X and Y sex chromosomes during meiosis I in interspecific hybrids of Phodopus campbelli and P. sungorus.

    Science.gov (United States)

    Ishishita, Satoshi; Tsuboi, Kazuma; Ohishi, Namiko; Tsuchiya, Kimiyuki; Matsuda, Yoichi

    2015-03-24

    Hybrid sterility plays an important role in the maintenance of species identity and promotion of speciation. Male interspecific hybrids from crosses between Campbell's dwarf hamster (Phodopus campbelli) and the Djungarian hamster (P. sungorus) exhibit sterility with abnormal spermatogenesis. However, the meiotic phenotype of these hybrids has not been well described. In the present work, we observed the accumulation of spermatocytes and apoptosis of spermatocyte-like cells in the testes of hybrids between P. campbelli females and P. sungorus males. In hybrid spermatocytes, a high frequency of asynapsis of X and Y chromosomes during the pachytene-like stage and dissociation of these chromosomes during metaphase I (MI) was observed. No autosomal univalency was observed during pachytene-like and MI stages in the hybrids; however, a low frequency of synapsis between autosomes and X or Y chromosomes, interlocking and partial synapsis between autosomal pairs, and γ-H2AFX staining in autosomal chromatin was observed during the pachytene-like stage. Degenerated MI-like nuclei were frequently observed in the hybrids. Most of the spermatozoa in hybrid epididymides exhibited head malformation. These results indicate that the pairing of X and Y chromosomes is more adversely affected than that of autosomes in Phodopus hybrids.

  8. Behavioral patterns in a population of Samango monkeys (Cercopithecus albogularis erythrarcus)

    OpenAIRE

    Tegner, Cecilia

    2011-01-01

    The understanding of behavioral patterns in different species is an important part of the proper management and conservation of wild populations of animals. This study aims to contribute to the understanding of behavioral patterns in the samango monkey (Cercopithecus albogularis erythrarcus) of northern South Africa. Using the scan-sampling procedure, the behaviors of an isolated population of free-ranging samango monkeys in the Soutpansberg, Limpopo Province, were recorded during 16 days in...

  9. Gastro-intestinal parasites of the Samango monkey, Cercopithecus mitis, in Natal, South Africa.

    Science.gov (United States)

    Appleton, C C; Krecek, R C; Verster, A; Bruorton, M R; Lawes, M J

    1994-01-01

    Eight gastro-intestinal tracts of Cercopithecus mitis labiatus from Karkloof, Natal, and 121 fecal samples from C. m. erythrarchus from Cape Vidal, Natal, were examined for helminth parasites and/or their eggs. Fecal samples from six of the C. m. labiatus were examined for protozoan cysts. Five protozoan and six helminth species were identified from C. m. labiatus. Most adult worms occurred in the caecum and colon, gut regions which also contained the highest volatile fatty acid levels. The eggs of nine helminth species were recovered from C. m. erythrarchus fecal samples; protozoans were not looked for in these samples.

  10. Comparative postcranial body shape and locomotion in Chlorocebus aethiops and Cercopithecus mitis.

    Science.gov (United States)

    Anapol, F; Turner, T R; Mott, C S; Jolly, C J

    2005-06-01

    Body weight and length, chest girth, and seven postcranial limb segment lengths are compared between two guenon species, Chlorocebus (Cercopithecus) aethiops (vervets) and Cercopithecus mitis (blue monkeys), exhibiting different habitual locomotor preferences. The subjects, all adults, were wild caught for a non-related research project (Turner et al. [1986] Genetic and morphological studies on two species of Kenyan monkeys, C. aethiops and C. mitis. In: Else JG, Lee PC, editors. Primate evolution, proceedings of the Xth International Congress of Primatology, Cambridge. London). The morphological results are interpreted within the context of previously published observations of primate locomotion and social organization. The sample is unique in that the body weight of each individual is known, allowing the effects of body-size scaling to be assessed in interspecific and intersexual comparisons. C. mitis has a significantly (P […]) […] agility, and also the requisite transition between ground and canopy. Although normally associated with arboreal monkeys, greater relative tail length occurs in the more terrestrial vervets. However, because vervets exploit both arboreal and terrestrial habitats, a longer tail may compensate for diminished balance during arboreal quadrupedalism resulting from the greater "brachial" and "crural" indices that enhance their ground quadrupedalism. Most interspecific differences in body proportions are explicable by differences in locomotor modalities. Some results, however, contradict commonly held "tenets" that relate body size and morphology exclusively to locomotion. Generally associated with terrestriality, sexual dimorphism (male/female) is greater in the more arboreal blue monkeys. A more intense, seasonal mating competition may account for this incongruity.

  11. Breeding season influxes and the behaviour of adult male samango monkeys (Cercopithecus mitis albogularis).

    Science.gov (United States)

    Henzi, S P; Lawes, M

    1987-01-01

    Troops comprising a high density population of samango monkeys (Cercopithecus mitis) in Natal province, South Africa, experienced an influx of adult males during the breeding season. Observation of one troop revealed that these males competed with one another and with two resident males for access to receptive females. Although both sexes initiated copulation, attempts to do so were more often successful if female-initiated. Males did not interact with non-receptive females and there were no recorded attempts at infanticide. Male-male interactions were agonistic in the presence of receptive females and neutral at other times. No ritualized displays of dominance and subordinance were seen. The significance of these observations for male reproductive strategies is discussed.

  12. After the fire: benefits of reduced ground cover for vervet monkeys (Cercopithecus aethiops).

    Science.gov (United States)

    Jaffe, Karin Enstam; Isbell, Lynne A

    2009-03-01

    Here we describe changes in ranging behavior and other activities of vervet monkeys (Cercopithecus aethiops) after a wildfire eliminated grass cover in a large area near the study group's home range. Soon after the fire, the vervets ranged farther away from tall trees that provide refuge from mammalian predators, and moved into the burned area where they had never been observed to go before the fire occurred. Visibility at vervet eye-level was 10 times farther in the burned area than in unburned areas. They traveled faster, and adult females spent more time feeding and less time scanning bipedally in the burned area than in the unburned area. The burned area's greater visibility may have lowered the animals' perceived risk of predation there, and may have provided them with an unusual opportunity to eat acacia ants.

  13. Embryo cryopreservation and in vitro culture of preimplantation embryos in Campbell's hamster (Phodopus campbelli).

    Science.gov (United States)

    Amstislavsky, Sergei; Brusentsev, Eugeny; Kizilova, Elena; Igonina, Tatyana; Abramova, Tatyana; Rozhkova, Irina

    2015-04-01

    The aims of this study were to compare different protocols for freezing and thawing Campbell's hamster (Phodopus campbelli) embryos and to explore the possibilities of their in vitro culture. First, the embryos were flushed from the reproductive ducts 2 days post coitum at the two-cell stage and cultured in rat one-cell embryo culture medium (R1ECM) for 48 hours. Most (86.7%) of the two-cell embryos developed to blastocysts in R1ECM. Second, the embryos at the two- to eight-cell stages were flushed on the third day post coitum. The eight-cell embryos were frozen in 0.25 mL straws according to standard procedures of slow cooling. Ethylene glycol (EG) was used either as a single cryoprotectant or in a mixture with sucrose. The survival of frozen-thawed embryos was assessed by double staining with fluorescein diacetate and propidium iodide. The use of EG as a single cryoprotectant resulted in fewer live embryos when compared with control (fresh embryos), but combined use of EG and sucrose improved the survival rate after thawing. Furthermore, rat granulocyte-macrophage colony-stimulating factor (2 ng/mL) improved the rate of frozen-thawed hamster embryo development in vitro by increasing the final cell number and alleviating nuclear fragmentation. Our data describe the first attempt at freezing and thawing Campbell's hamster embryos and report the possibility of successful in vitro culture for this species in R1ECM supplemented with granulocyte-macrophage colony-stimulating factor.

  14. Simultaneous uterine leiomyoma and endometrial hyperplasia in a white-nosed monkey (Cercopithecus nictitans). First case report

    OpenAIRE

    Martínez, Carlos M.; Ibáñez, Carla; Corpa, Juan M.

    2010-01-01

    This paper describes the histopathological and immunocytochemical features of a combined uterine leiomyoma and a non-atypical complex endometrial hyperplasia in a white-nosed monkey (Cercopithecus nictitans). Immunocytochemically, the uterine leiomyoma was alpha-actin positive and negative for desmin. On the other hand, the endometrial hyperplasia showed strong immunoreaction against cyclin D1, cyclooxygenase-2 (COX-2), oestrogen receptor and isoform A of the progesterone receptor, and slight p53 immunoreaction. Th...

  15. Samango monkeys (Cercopithecus albogularis labiatus) manage risk in a highly seasonal, human-modified landscape in Amathole Mountains, South Africa.

    OpenAIRE

    Nowak, K.; Wimberger, K.; Richards, S. A.; Hill, R. A.; Le Roux, A.

    2016-01-01

    Wild species use habitats that vary in risk across space and time. This risk can derive from natural predators and also from direct and indirect human pressures. A starving forager will often take risks that a less hungry forager would not. At a highly seasonal and human-modified site, we predicted that arboreal samango monkeys (Cercopithecus albogularis labiatus) would show highly flexible, responsive, risk-sensitive foraging. We first determined how monkeys use horizontal and vertical space...

  16. Polymorphism of Mhc-DRB alleles in Cercopithecus aethiops (green monkey): generation and functionality.

    Science.gov (United States)

    Rosal-Sánchez, M; Paz-Artal, E; Moreno-Pelayo, M A; Martínez-Quiles, N; Martínez-Laso, J; Martín-Villa, J M; Arnaiz-Villena, A

    1998-05-01

    DRB genes have been studied for the first time in green monkeys (Cercopithecus aethiops). Eleven new DRB alleles (exon 2, exon 3) have been obtained and sequenced from cDNA. A limited number of lineages have been identified: DRB1*03 (4 alleles), DRB1*07 (3 alleles), DRB5 (1 allele), DRB*w6 (1 allele), and DRB*w7 (2 alleles). The existence of Ceae-DRB1 duplications is supported by the finding of 3 DRB1 alleles in 3 different individuals. Ceae-DRB1*0701 may be non-functional because it bears serine at position 82, which hinders molecule surface expression in mice; the allele is only found in Ceae-DRB duplicated haplotypes. Base changes in cDNA Ceae-DRB alleles are consistent with the generation of polymorphism by point mutations or short segment exchanges between alleles. The eleven green monkey DRB alleles meet the requirements for functionality as antigen-presenting molecules (perhaps excluding DRB1*0701), since: 1) they have been isolated from cDNA and do not present deletions, insertions or stop codons; 2) structural motifs necessary for a correct folding of the molecule, for the formation of DR/DR dimers and for CD4 interactions are conserved; and 3) the number of non-synonymous substitutions is higher than the number of synonymous substitutions in the peptide binding region (PBR), while the contrary holds true for the non-PBR region.

  17. Diet and feeding behaviour of samango monkeys (Cercopithecus mitis labiatus) in Ngoye Forest, South Africa.

    Science.gov (United States)

    Lawes, M J; Henzi, S P; Perrin, M R

    1990-01-01

    The samango monkey occurs at the southern limit of the range of Cercopithecus mitis. Greater climatic seasonality at this latitude results in more predictable fruiting patterns. In addition, there are no diurnal sympatric primate frugivores. Under these conditions, the diet and feeding strategies of samango monkeys would be expected to differ notably from those of central or east African C. mitis subspecies. Contrary to these expectations, the preliminary observations reported here indicate that diets of samango and blue monkeys differ only superficially in the proportions of items eaten. Similarities in feeding behaviour are especially marked during the dry season period when fruit is not abundant. Both samango and blue monkeys tend to be less selective in their choice of food species and to eat available food species regardless of their energy content; a shift toward less nutritious items such as leaves is also noted. Feeding behaviour during the summer wet season is characterized by the selection of fruits with high-energy values. A high proportion of visits by the monkeys to areas of greater food availability suggests a concentration of feeding effort in food patches and the selection of higher energy food species within patches.

  18. Locomotor Anatomy and Behavior of Patas Monkeys (Erythrocebus patas) with Comparison to Vervet Monkeys (Cercopithecus aethiops)

    Directory of Open Access Journals (Sweden)

    Adrienne L. Zihlman

    2013-01-01

    Full Text Available Patas monkeys (Erythrocebus patas) living in African savanna woodlands and grassland habitats have a locomotor system that allows them to run fast, presumably to avoid predators. Long fore- and hindlimbs, long foot bones, short toes, and a digitigrade foot posture were proposed as anatomical correlates with speed. In addition to skeletal proportions, soft tissue and whole body proportions are important components of the locomotor system. To further distinguish patas anatomy from other Old World monkeys, a comparative study based on dissection of skin, muscle, and bone from complete individuals of patas and vervet monkeys (Cercopithecus aethiops) was undertaken. Analysis reveals that small adjustments in patas skeletal proportions, relative mass of limbs and tail, and specific muscle groups promote efficient sagittal limb motion. The ability to run fast is based on a locomotor system adapted for long distance walking. The patas’ larger home range and longer daily range than those of vervets give them access to highly dispersed, nutritious foods, water, and sleeping trees. Furthermore, patas monkeys have physiological adaptations that enable them to tolerate and dissipate heat. These features all contribute to the distinct adaptation that is the patas monkeys’ basis for survival in grassland and savanna woodland areas.

  19. Identifying preferred habitats of samango monkeys (Cercopithecus (nictitans) mitis erythrarchus) through patch use.

    Science.gov (United States)

    Emerson, Sara E; Brown, Joel S

    2013-11-01

    To examine habitat preferences of two groups of samango monkeys (Cercopithecus (nictitans) mitis erythrarchus) in the Soutpansberg, South Africa, we used experimental food patches in fragments of tall forest and in bordering secondary growth short forest. Additionally, to test for the impacts of group cohesion and movement on habitat use, we tested for the interaction of space and time in our analyses of foraging intensity in the experimental food patches placed throughout the home ranges of the two groups. We expected the monkeys to harvest the most from patches in tall forest habitats and the least from patches in short forest. Further, because the monkeys move through their habitats in groups, we expected to see group cohesion effects illustrated by daily spatial variation in the monkeys’ use of widespread foraging grids. In the forest height experiments, the two groups differed in their foraging responses, with 8% greater foraging overall for one group. However, forest height did not significantly impact foraging in either group, meaning that, given feeding opportunities, samango monkeys are able to utilise secondary growth forest. For one group, missed opportunity costs of staying with the group appeared through the statistical interaction of day with foraging location (the monkeys did not always spread out to take advantage of all available food patches). In several subsequent experiments in widespread grids, significant daily spatial variation in foraging occurred, pointing to spatial cohesion during group movement as likely being an important predictor of habitat use. For an individual social forager, staying with the group may be more important than habitat type in driving habitat selection.

  1. Sacred populations of Cercopithecus sclateri: analysis of apparent population increases from census counts.

    Science.gov (United States)

    Baker, Lynne R; Tanimola, Adebowale A; Olubode, Oluseun S

    2014-04-01

    The development of effective conservation and management actions for populations of wild species generally requires monitoring programs that provide reliable estimates of population size over time. Primate researchers have to date given more attention to evaluating techniques for monitoring primates in natural habitats compared to populations that occur in villages or urban areas. We conducted censuses to estimate the abundance and density of two sacred, village-dwelling populations (Lagwa and Akpugoeze) of Sclater's monkey (Cercopithecus sclateri), a threatened species endemic to southeastern Nigeria, and compared these data to previous census results. We recorded population increases in both sites: a 66% increase over 4½ years in Lagwa (from 124 to 206 individuals) at an annual rate of 10.2%, and a 29% increase over 4 years in Akpugoeze (from 193 to 249 individuals) at an annual rate of 5.7%. Mean group size also increased in both sites. Density in Lagwa was 24.2 individuals/km², and density in a core survey area of Akpugoeze was 36-38 individuals/km². Our results may have been affected by monkey ranging and grouping patterns and improved detectability due to our revised census technique, which included secondary observers. With further work on methodology for censusing populations that occur in human-settled environments, techniques can be refined and customized to individual sites for more accurate estimates. Our investigation of Sclater's monkey in Lagwa and Akpugoeze, two sites critical for conservation of the species, indicated that both of these populations have increased, and neither faces immediate risk of extirpation. Such population growth, while encouraging, will likely exacerbate human-monkey conflict and thus should be understood in terms of potential socioeconomic impacts.
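    The annual rates quoted above are compound growth rates, i.e. the r that satisfies N_t = N_0 (1 + r)^t. A minimal Python sketch of that calculation follows, using the counts from the abstract; the intervals are the rounded figures quoted there ("4½ years", "4 years"), so the computed rates only approximate the published 10.2% and 5.7%, which presumably rest on the exact census dates.

        # Minimal sketch: compound annual growth rate between two census totals.
        # Counts are from the abstract; intervals are its rounded figures, so the
        # results are approximate and need not match the published rates exactly.

        def annual_growth_rate(n_start: float, n_end: float, years: float) -> float:
            """Return r such that n_end == n_start * (1 + r) ** years."""
            return (n_end / n_start) ** (1.0 / years) - 1.0

        print(f"Lagwa:     {annual_growth_rate(124, 206, 4.5):.1%} per year")
        print(f"Akpugoeze: {annual_growth_rate(193, 249, 4.0):.1%} per year")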

  2. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  3. IDENTIFICATION OF MYCOBACTERIUM GENAVENSE IN A DIANA MONKEY (CERCOPITHECUS DIANA) BY POLYMERASE CHAIN REACTION AND HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY.

    Science.gov (United States)

    Kelly, Kathleen M; Wack, Allison N; Bradway, Dan; Simons, Brian W; Bronson, Ellen; Osterhout, Gerard; Parrish, Nicole M; Montali, Richard J

    2015-06-01

    A 25-yr-old Diana monkey (Cercopithecus diana) with a 1.5-yr history of chronic colitis and diarrhea was found to have disseminated granulomatous disease with intralesional acid fast bacilli. Bacilli were identified as Mycobacterium genavense by polymerase chain reaction, sequencing of the 16S-23S ribosomal RNA intergenic spacer (ITS) gene, and mycolic acid analysis by high-performance liquid chromatography. Mycobacterium genavense is a common cause of mycobacteriosis in free-ranging and captive birds. In addition, recognition of opportunistic infection in human immunodeficiency virus-positive patients is increasing. Disease manifestations of M. genavense are similar to Mycobacterium avium complex (MAC) and include fever, wasting, and diarrhea with disseminated disease. Similar clinical signs and lesions were observed in this monkey. Mycobacterium genavense should be considered as a differential for disseminated mycobacterial disease in nonhuman primates as this agent can mimic MAC and related mycobacteria.

  4. A. femoralis in the Small Green Monkey (Cercopithecus aethiops sabeus)

    Directory of Open Access Journals (Sweden)

    Blagojević Miloš

    2016-01-01

    Full Text Available The Small Green Monkey (Cercopithecus aethiops sabeus) inhabits the African savannah in large groups. The animals delivered to us came from East Africa, that is, from Kenya, Uganda and Tanzania. The length of the animal is 110 cm, and the tail itself is 50 cm long. They can often be seen in zoos. According to data, mostly from zoological gardens, these monkeys live for about 15 to 17 years, exceptionally for 20 years. The objective of our work was to investigate a part of their cardiovascular system and in that way to contribute to a better knowledge of this animal's body structure and, accordingly, to comparative anatomy in general. The investigation included 6 Small Green Monkeys of both sexes, 3-4 years old, body weight 2000-3000 grams, obtained from the Institute for Virusology, Vaccines and Serums in Belgrade. To visualize the arterial vascularization of the hindlimb, after exsanguination of the animal, a contrast mass of gelatin coloured with tempera was injected into the abdominal aorta. After injection, the blood vessels were prepared and photographed. In the Small Green Monkey, the femoral artery (A. femoralis) is a continuation of the external iliac artery (A. iliaca externa). The branches of the femoral artery are: A. profunda femoris, A. saphena, A. genus descendens and A. caudalis femoralis. A. profunda femoris separates into A. circumflexa femoris lateralis, Ramus muscularis and A. circumflexa femoris medialis. In humans, A. femoralis branches into: A. epigastrica superficialis, A. circumflexa ilium superficialis, Aa. pudendae externae, A. profunda femoris and A. genus descendens (A. descendens genus). A. profunda femoris branches into: A. circumflexa femoris lateralis, A. circumflexa femoris medialis and Aa. perforantes. In domestic mammals, the branches of the femoral artery (A. femoralis) are: A. circumflexa femoris lateralis, A. saphena, A. genus descendens and Aa. caudales femoris. In the Small Green Monkey, humans and domestic mammals, A. femoralis

  5. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the most common form of perceiving a wrong stimulus or, more precisely, of perceiving an absent stimulus. Here we will discuss four definitions of hallucinations: 1. perceiving a stimulus without the presence of any subject; 2. hallucinations proper, which are wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and happen along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state in the absence of external stimuli which have qualities of real perception, in that they are vivid, substantial, and located in external objective space. We are going to discuss it in detail here.

  6. Contribution to the reassessment of the sun-tailed monkey (Cercopithecus solatus) distribution area

    Directory of Open Access Journals (Sweden)

    Peggy Motsch

    2011-05-01

    Full Text Available The sun-tailed monkey (Cercopithecus solatus) is a species endemic to Gabon, where it was first observed in 1984 by Mike Harrison and described in 1988. To date, little information is available on this cryptic and rare species. To overcome the lack of studies on this species, the ECOSOL project (ECOlogy of C. SOLatus), a multidisciplinary research project, was initiated in January 2009 to improve knowledge of this poorly known species and to encourage its conservation. Over nearly two years, new data have been acquired, in particular on the species' distribution area, whose south-eastern limit we studied here. Our study took place in three regions of Gabon where the presence of C. solatus was either demonstrated (historical zone), suspected, or had never been investigated. Village surveys and reconnaissance walks in the field were carried out. The results obtained (1) confirmed the presence of C. solatus in the historical zone, (2) appear to support the hypotheses of its presence outside that zone, and (3) even suggest that C. solatus occurs further to the south-east and closer to the Republic of the Congo than previously asserted. This study contributed to re-examining the distribution of C. solatus populations in Gabon, thus providing additional tools for assessing the conservation status of the species.

  7. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  8. Habitat characteristics of the red-bellied monkey (Cercopithecus e. erythrogaster) in southern Benin

    OpenAIRE

    Kassa, Barthélémy; Nobimè, Georges; Hanon, Laurence; Assogbadjo, Achille Ephrem; Brice SINSIN

    2014-01-01

    Introduction: The red-bellied monkey (Cercopithecus erythrogaster erythrogaster) is a guenon subspecies endemic to Benin (Grubb et al., 1999). It colonizes residual patches of dense semi-deciduous forest and old fallows of the Ouémé valley in southern Benin (Sinsin et al. 2002a). The mosaic structure of its habitat and the relatively small size of its population lead us to take into account the risk of its disappearance...

  9. Effects of cassava diet on Cercopithecus aethiops livers: a case for cassava as the cause of both tropical splenomegaly syndrome (TSS) and endomyocardial fibrosis (EMF).

    Science.gov (United States)

    Sezi, C L

    1996-05-01

    The aetiology of endomyocardial fibrosis (EMF) and tropical splenomegaly syndrome (TSS), though speculative, was considered by the author to be the same or related, since the two diseases may occur in the same individual and locality. Accordingly, when attempting to prove a hypothesis for the causation of EMF, namely that prolonged ingestion of tuber (cassava/tapioca) associated with extreme deprivation of protein causes EMF, one group of three Cercopithecus aethiops was fed on uncooked cassava while a second group was fed uncooked bananas; in addition to harvesting the hearts whenever an animal's health deteriorated, livers were also harvested and examined for histological changes. While hearts from the animals on cassava revealed changes seen in human EMF, the livers from the same animals exhibited Kupffer cell hyperplasia and hypertrophy as well as sinusoidal lymphocytosis, features seen in human TSS, thereby confirming that the aetiology of these two diseases is the same. However, the banana diet did not produce such changes.

  10. Ethological characterization of emotivity in De Brazza's monkey (Cercopithecus neglectus)

    Directory of Open Access Journals (Sweden)

    Hélène Meunier

    2009-09-01

    Full Text Available Understanding how social groups function requires knowledge of individual characteristics. There are several levels of analysis in the study of inter-individual differences, the most complex being the study of temperament dimensions (Budaev, 1997). Our investigation is situated at this level and is based on a comparative ethological approach to temperament as defined by Bates (1989). We focus more precisely on one of its main traits: emotivity, defined as the inherited predisposition of the autonomic nervous system to react particularly strongly and durably to certain classes of stimuli (Archer 1973). Most studies of emotional reactivity use only one test, during which only a limited number of behaviours is recorded (Bouissou et al., 1994). Yet the world of emotions is complex, encompassing, among other things, aspects of gregariousness (Kilgour, 1975; Jones, 1977, 1987) and of neophobia. We conducted this study on a non-human primate species, De Brazza's monkey (Cercopithecus neglectus), which shows a very strong tendency toward gregariousness and reacts strongly to social isolation (Joly 2000). We tested 5 adult individuals from two different social groups in two experimental tests: (1) the gregariousness aspects of the emotional reaction were addressed with a social-isolation test during which the subjects were observed within their social group, in partial isolation and in total isolation; (2) the neophobia aspects were studied through a novel-object test during which the subjects were partially or totally isolated from their social group. Through these two experiments, we were also able to test the influence of the...

  11. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  12. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  13. Lesula: a new species of Cercopithecus monkey endemic to the Democratic Republic of Congo and implications for conservation of Congo's central basin.

    Directory of Open Access Journals (Sweden)

    John A Hart

    Full Text Available In June 2007, a previously undescribed monkey known locally as "lesula" was found in the forests of the middle Lomami Basin in central Democratic Republic of Congo (DRC). We describe this new species as Cercopithecus lomamiensis sp. nov., and provide data on its distribution, morphology, genetics, ecology and behavior. C. lomamiensis is restricted to the lowland rain forests of central DRC between the middle Lomami and the upper Tshuapa Rivers. Morphological and molecular data confirm that C. lomamiensis is distinct from its nearest congener, C. hamlyni, from which it is separated geographically by both the Congo (Lualaba) and the Lomami Rivers. C. lomamiensis, like C. hamlyni, is semi-terrestrial with a diet containing terrestrial herbaceous vegetation. The discovery of C. lomamiensis highlights the biogeographic significance and importance for conservation of central Congo's interfluvial TL2 region, defined from the upper Tshuapa River through the Lomami Basin to the Congo (Lualaba) River. The TL2 region has been found to contain a high diversity of anthropoid primates including three forms, in addition to C. lomamiensis, that are endemic to the area. We recommend the common name, lesula, for this new species, as it is the vernacular name used over most of its known range.

  14. Degree of terrestrial activity of the elusive sun-tailed monkey (Cercopithecus solatus) in Gabon: Comparative study of behavior and postcranial morphometric data.

    Science.gov (United States)

    Motsch, Peggy; Le Flohic, Guillaume; Dilger, Carole; Delahaye, Alexia; Chateau-Smith, Carmela; Couette, Sebastien

    2015-10-01

    We carried out a multidisciplinary study linking behavioral and morphological data from a little-known guenon species, Cercopithecus solatus, endemic to Gabon. Over a period of 9 months, we documented the pattern of stratum use associated with postural and locomotor behavior by direct observation (650 hrs) of a semi-free-ranging breeding colony. We also conducted a morphometric analysis of the humerus and limb proportions of 90 adult specimens from 16 guenon species, including C. solatus. Field observations indicated that C. solatus monkeys spent a third of their time on the ground, similar to semi-terrestrial guenon species. We detected two patterns of stratum use: at ground level, and in trees, at a height of 3-10 m. The monkeys spent more time on the ground during the dry season than the wet season, feeding mainly at ground level, while resting, and social behaviors occurred more frequently in the tree strata. Our study of humerus size and shape, together with the analysis of limb proportions, indicated morphofunctional adaptation of C. solatus to greater terrestriality than previously thought. We therefore characterize C. solatus as a semi-terrestrial guenon, and propose a new hypothesis for the ancestral condition. By combining behavioral and morphological results, we provide new information about the adaptive strategies of the species, and the evolutionary history of guenons, thus contributing to the conservation of the sun-tailed monkey in the wild.

  15. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  16. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  17. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    Keywords: timbre, intensity, distance, room modeling, radio communication. Source: Virtual Environments Handbook, Chapter 4, "Virtual Auditory Displays", Russell D... Text fragments: "...musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... and complexity are pitch, loudness, and timbre respectively. This distinction between physical and perceptual measures of sound properties is an..."

  18. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  19. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  20. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  1. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory... parameters highlighting harmonious and balanced qualities while criticizing the noisy and cacophonous qualities of modern urban settings. This paper presents a reaffirmation of Schafer’s central methodological claim: that environments can be analyzed through their sound, but offers considerations on the role... musicalized through electro-acoustic equipment installed in shops, shopping streets, transit areas etc. Urban noise no longer acts only as disturbance, but also structures and shapes the places and spaces in which urban life unfolds. Based on research done in Japanese shopping streets and in Copenhagen the paper...

  2. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  3. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  4. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  5. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  6. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  7. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  8. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and a present hearing threshold. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have present OAE responses but an absent ABR, and a hearing loss that can be permanent, worsen, or improve.

  9. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  10. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of these occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  11. First detection of Echinococcus multilocularis infection in two species of nonhuman primates raised in a zoo: a fatal case in Cercopithecus diana and a strongly suspected case of spontaneous recovery in Macaca nigra.

    Science.gov (United States)

    Yamano, Kimiaki; Kouguchi, Hirokazu; Uraguchi, Kohji; Mukai, Takeshi; Shibata, Chikako; Yamamoto, Hideaki; Takaesu, Noboru; Ito, Masaki; Makino, Yoshinori; Takiguchi, Mitsuyoshi; Yagi, Kinpei

    2014-08-01

    The causative parasite of alveolar echinococcosis, Echinococcus multilocularis, maintains its life cycle between red foxes (Vulpes vulpes, the definitive hosts) and voles (the intermediate hosts) in Hokkaido, Japan. Primates, including humans, and some other mammal species can be infected by the accidental ingestion of eggs in the feces of red foxes. In August 2011, a 6-year-old zoo-raised female Diana monkey (Cercopithecus diana) died from alveolar echinococcosis. E. multilocularis infection was confirmed by histopathological examination and detection of E. multilocularis DNA by polymerase chain reaction (PCR). A field survey in the zoo showed that fox intrusion was common, and serodiagnosis of various nonhuman primates using western blotting detected a case of a 14-year-old female Celebes crested macaque (Macaca nigra) that was weakly positive for E. multilocularis. Computed tomography revealed only one small calcified lesion (approximately 8 mm) in the macaque's liver, and both western blotting and enzyme-linked immunosorbent assay (ELISA) showed a gradual decline of antibody titer. These findings strongly suggest that the animal had recovered spontaneously. Until this study, spontaneous recovery from E. multilocularis infection in a nonhuman primate had never been reported.

  12. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  13. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  14. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  15. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  16. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  17. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  18. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.
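
    For illustration only, the neural index described above (the change in the cABR between the patterned and pseudo-random sequences) can be computed as a simple RMS-amplitude difference and related to behavioral learning scores. The Python sketch below is not the authors' analysis code; the RMS metric, the array layout, and the use of a Spearman correlation are assumptions made for this example.

        import numpy as np
        from scipy.stats import spearmanr

        def cabr_sensitivity_index(cabr_patterned, cabr_random):
            """Change in cABR magnitude between the patterned and pseudo-random
            conditions: positive values indicate enhancement, negative adaptation.
            Inputs are arrays of shape (n_subjects, n_samples) holding each
            subject's averaged brainstem response waveform."""
            rms_patterned = np.sqrt(np.mean(cabr_patterned ** 2, axis=1))
            rms_random = np.sqrt(np.mean(cabr_random ** 2, axis=1))
            return rms_patterned - rms_random

        # Hypothetical data: 20 subjects, 500-sample averaged responses, plus a
        # behavioral statistical-learning score per subject.
        rng = np.random.default_rng(0)
        patterned = rng.normal(size=(20, 500))
        pseudo_random = rng.normal(size=(20, 500))
        learning_scores = rng.uniform(0.4, 1.0, size=20)

        index = cabr_sensitivity_index(patterned, pseudo_random)
        rho, p = spearmanr(index, learning_scores)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")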

  19. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  20. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on auditory properties of human hearing, is presented to measure speech distortion. The PSD measure calculates the speech distortion distance by simulating the auditory properties of human hearing and converting the short-time speech power spectrum into an auditory perceptual spectrum. Preliminary simulation experiments comparing it with the Itakura measure have been carried out. The results show that the PSD measure is a preferable speech distortion measure and more consistent with subjective assessment of speech quality.
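
    The abstract does not give the exact PSD formula, so the Python sketch below only illustrates the general idea: map short-time power spectra of the clean and distorted signals onto a perceptual frequency scale and average a log-spectral distance over frames. The mel filterbank, frame length, and distance metric here are assumptions for this example, not the published definition.

        import numpy as np

        def mel(f):
            """Hz -> mel (a common perceptual frequency scale)."""
            return 2595.0 * np.log10(1.0 + f / 700.0)

        def mel_inv(m):
            return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

        def mel_filterbank(n_filters, n_fft, sr, fmax=None):
            """Triangular filters spaced evenly on the mel scale."""
            fmax = fmax or sr / 2
            edges_hz = mel_inv(np.linspace(mel(0.0), mel(fmax), n_filters + 2))
            bins = np.floor((n_fft + 1) * edges_hz / sr).astype(int)
            fb = np.zeros((n_filters, n_fft // 2 + 1))
            for i in range(n_filters):
                l, c, r = bins[i], bins[i + 1], bins[i + 2]
                if c > l:
                    fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
                if r > c:
                    fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
            return fb

        def perceptual_spectrum_distortion(clean, distorted, sr, frame=512, n_filters=24):
            """Frame-wise log-spectral distance on a perceptual (mel) scale."""
            fb = mel_filterbank(n_filters, frame, sr)
            win = np.hanning(frame)
            dists = []
            for start in range(0, min(len(clean), len(distorted)) - frame + 1, frame // 2):
                spec_c = np.abs(np.fft.rfft(clean[start:start + frame] * win)) ** 2
                spec_d = np.abs(np.fft.rfft(distorted[start:start + frame] * win)) ** 2
                pc = np.log10(fb @ spec_c + 1e-12)   # perceptual spectrum, clean
                pd = np.log10(fb @ spec_d + 1e-12)   # perceptual spectrum, distorted
                dists.append(np.sqrt(np.mean((pc - pd) ** 2)))
            return float(np.mean(dists))

        # Toy usage: a tone versus the same tone plus noise.
        sr = 8000
        t = np.arange(sr) / sr
        clean = np.sin(2 * np.pi * 440 * t)
        noisy = clean + 0.1 * np.random.default_rng(0).normal(size=sr)
        print(perceptual_spectrum_distortion(clean, noisy, sr))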

  1. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  2. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  3. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test," "The Clymer-Barrett Prereading Battery," and the…

  4. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  5. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  6. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate the involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure tone audiometry, acoustic reflex, ABRs, and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone, and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees I and II, two affected brothers were found in each, while in Pedigree III, two sisters were affected. All the patients were otherwise normal without

  7. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  8. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  9. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  10. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  11. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  12. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M_age = 53.19 years)…

  13. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  14. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  15. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which is also shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether the enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses resulted in statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimulus tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB resulted in a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.

  16. Relationship between Sympathetic Skin Responses and Auditory Hypersensitivity to Different Auditory Stimuli.

    Science.gov (United States)

    Kato, Fumi; Iwanaga, Ryoichiro; Chono, Mami; Fujihara, Saori; Tokunaga, Akiko; Murata, Jun; Tanaka, Koji; Nakane, Hideyuki; Tanaka, Goro

    2014-07-01

    [Purpose] Auditory hypersensitivity has been widely reported in patients with autism spectrum disorders. However, the neurological background of auditory hypersensitivity is currently not clear. The present study examined the relationship between sympathetic nervous system responses and auditory hypersensitivity induced by different types of auditory stimuli. [Methods] We exposed 20 healthy young adults to six different types of auditory stimuli. The amounts of palmar sweating resulting from the auditory stimuli were compared between groups with (hypersensitive) and without (non-hypersensitive) auditory hypersensitivity. [Results] Although no group × type of stimulus × first stimulus interaction was observed for the extent of reaction, a significant type of stimulus × first stimulus interaction was noted for the extent of reaction. For the 80 dB, 6,000 Hz stimulus, the trends for palmar sweating differed between the groups. For the first stimulus, the variance was larger in the hypersensitive group than in the non-hypersensitive group. [Conclusion] Subjects who regularly felt excessive reactions to auditory stimuli tended to have excessive sympathetic responses to repeated loud noises compared with subjects who did not feel excessive reactions. People with auditory hypersensitivity may be classified into several subtypes depending on their reaction patterns to auditory stimuli.

  17. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low frequencies. Unfortunately, data at low frequencies are scarce, and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. The increasing steepness of the middle-ear high-pass filter at low frequencies is thought to cause this effect. The dynamic range of the auditory filter......
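
    For orientation, the auditory-filter bandwidth that such extrapolations usually start from is the equivalent rectangular bandwidth (ERB) approximation of Glasberg and Moore (1990). The short Python sketch below simply evaluates that formula at a few frequencies; whether it holds below 100 Hz is precisely what the study above questions, so the low-frequency values are an extrapolation, not measured data.

        def erb_hz(f_hz):
            """Glasberg & Moore (1990) ERB approximation, with f in Hz:
            ERB = 24.7 * (4.37 * f_kHz + 1)."""
            return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

        for f in (50, 100, 250, 1000):
            print(f"{f:5d} Hz -> ERB ~ {erb_hz(f):6.1f} Hz")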

  18. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  19. Use of auditory learning to manage listening problems in children

    OpenAIRE

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...

  20. Auditory-visual spatial interaction and modularity

    Science.gov (United States)

    Radeau, M

    1994-02-01

    Results concerning the conditions for pairing visual and auditory data coming from spatially separate locations argue for cognitive impenetrability and computational autonomy, the pairing rules being the Gestalt principles of common fate and proximity. Other data provide evidence for pairing with several properties of modular functioning. Arguments for domain specificity are inferred from comparison with audio-visual speech. A suggestion of innate specification can be found in developmental data indicating that the grouping of visual and auditory signals is supported very early in life by the same principles that operate in adults. Support for a specific neural architecture comes from neurophysiological studies of the bimodal (auditory-visual) neurons of the cat superior colliculus. Auditory-visual pairing thus seems to present the four main properties of the Fodorian module.

  1. [Approaches to therapy of auditory agnosia].

    Science.gov (United States)

    Fechtelpeter, A; Göddenhenrich, S; Huber, W; Springer, L

    1990-01-01

    In a 41-year-old stroke patient with bitemporal brain damage, we found severe signs of auditory agnosia 6 months after onset. Recognition of environmental sounds was extremely impaired when tested in a multiple-choice sound-picture matching task, whereas auditory discrimination between sounds and picture identification by written names were almost undisturbed. In a therapy experiment, we tried to enhance sound recognition via semantic categorization and association, imitation of sounds, and analysis of auditory features, respectively. The stimulation of conscious auditory analysis proved to be increasingly effective over a 4-week period of therapy. We were able to show that the patient's improvement was not merely a practice effect: it was stable and carried over to untrained items.

  2. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  3. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems. Numerous studies have investigated them, and the auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library, and SID search engines with the keywords "auditory" and "omega-3", and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  4. A critical period for auditory thalamocortical connectivity

    DEFF Research Database (Denmark)

    Rinaldi Barkat, Tania; Polley, Daniel B; Hensch, Takao K

    2011-01-01

    connectivity by in vivo recordings and day-by-day voltage-sensitive dye imaging in an acute brain slice preparation. Passive tone-rearing modified response strength and topography in mouse primary auditory cortex (A1) during a brief, 3-d window, but did not alter tonotopic maps in the thalamus. Gene...... locus of change for the tonotopic plasticity. The evolving postnatal connectivity between thalamus and cortex in the days following hearing onset may therefore determine a critical period for auditory processing....

  5. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives: To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods: Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results: There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions: Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  6. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The fact that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  7. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The fact that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  8. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of the speech signal processing field and appear to be effective in performance. This paper presents the inverse transform of each stage of one widely used auditory model. First of all, it is necessary to invert the correlogram and reconstruct phase information by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to obtain the negative parts of the signal via the reverse process of half-wave rectification (HWR). Finally, the functions of the inner hair cell/synapse model and the Gammatone filters have to be inverted. Thus the whole auditory model inversion is achieved. An application of noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it is more effective than other methods. Thus the auditory model inversion method given in this paper is applicable to the speech enhancement field.
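
    As a rough illustration of the iterative idea described above, the Python sketch below restores the negative half of a half-wave-rectified narrowband (gammatone-like) channel signal by alternating two constraints: the reconstruction must match the rectified observation wherever that observation is positive, and it must remain band-limited. This is a generic alternating-projection scheme written for this summary, not the algorithm from the paper, and the FFT-mask band-pass stands in for a true Gammatone filter.

        import numpy as np

        def bandpass_fft(x, sr, lo, hi):
            """Crude band-pass: zero FFT bins outside [lo, hi] Hz."""
            spec = np.fft.rfft(x)
            freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
            spec[(freqs < lo) | (freqs > hi)] = 0.0
            return np.fft.irfft(spec, n=len(x))

        def invert_hwr(rectified, sr, lo, hi, n_iter=50):
            """Iteratively restore the negative half of a half-wave-rectified
            narrowband signal by alternating projections."""
            x = rectified.copy()
            positive = rectified > 0
            for _ in range(n_iter):
                x = bandpass_fft(x, sr, lo, hi)    # stay within the channel's band
                x[positive] = rectified[positive]  # agree with the observed positive parts
            return x

        # Toy check: a 500 Hz "channel" signal, rectified, then reconstructed.
        sr = 16000
        t = np.arange(sr) / sr
        channel = np.sin(2 * np.pi * 500 * t) * np.hanning(sr)
        rectified = np.maximum(channel, 0.0)
        restored = invert_hwr(rectified, sr, lo=400, hi=600)
        err = np.sqrt(np.mean((restored - channel) ** 2)) / np.sqrt(np.mean(channel ** 2))
        print(f"relative reconstruction error: {err:.3f}")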

  9. Auditory dysfunction associated with solvent exposure

    Directory of Open Access Journals (Sweden)

    Fuente Adrian

    2013-01-01

    Full Text Available Abstract Background A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL). The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA), transient evoked otoacoustic emissions (TEOAE), Random Gap Detection (RGD), and the Hearing-in-Noise Test (HINT). Results Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD, and HINT) was independently constructed. For all of the models solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions This study provides further evidence of the possible adverse effect of solvents on peripheral and central auditory functioning. A discussion of these effects and the utility of selected hearing tests to assess SIHL is addressed.

  10. Long Latency Auditory Evoked Potentials during Meditation.

    Science.gov (United States)

    Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya

    2015-10-01

    The auditory sensory pathway has been studied in meditators using midlatency and short-latency auditory evoked potentials. The present study evaluated long latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation. The findings suggest that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller in random thinking and non-meditative focused thinking, at the level of the secondary auditory cortex, auditory association cortex, and anterior cingulate cortex.
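
    A minimal Python sketch of the peak measurement described above: baseline-correct the averaged waveform against the prestimulus interval, then take the most positive (P) or most negative (N) point within each component window. The sampling rate, epoch limits, and variable names are assumptions made for this example.

        import numpy as np

        # Component search windows in milliseconds, with polarity (+1 positive, -1 negative).
        WINDOWS = {"P1": (40, 60, +1), "N1": (75, 115, -1),
                   "P2": (120, 180, +1), "N2": (180, 280, -1)}

        def peak_measures(erp, sr=1000, epoch_start_ms=-100):
            """Baseline-corrected peak amplitude and latency (ms) per component.
            erp: 1-D averaged evoked response starting at epoch_start_ms."""
            times = epoch_start_ms + 1000.0 * np.arange(len(erp)) / sr
            baseline = erp[times < 0].mean()      # prestimulus baseline
            corrected = erp - baseline
            out = {}
            for name, (lo, hi, sign) in WINDOWS.items():
                idx = np.where((times >= lo) & (times <= hi))[0]
                peak = idx[np.argmax(sign * corrected[idx])]
                out[name] = {"latency_ms": times[peak], "amplitude": corrected[peak]}
            return out

        # Hypothetical averaged waveform (noise only), -100 to 300 ms at 1 kHz.
        rng = np.random.default_rng(1)
        erp = rng.normal(scale=0.5, size=400)
        print(peak_measures(erp))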

  11. Auditory function in vestibular migraine

    Directory of Open Access Journals (Sweden)

    John Mathew

    2016-01-01

    Full Text Available Introduction: Vestibular migraine (VM) is a vestibular syndrome seen in patients with migraine and is characterized by short spells of spontaneous or positional vertigo that last between a few seconds and a few weeks. Migraine and VM are considered to be the result of chemical abnormalities in the serotonin pathway. Neuhauser's diagnostic criteria for vestibular migraine are widely accepted. Research on VM is still limited and few studies have been published on this topic. Materials and Methods: This study has two parts. In the first part, we did a retrospective chart review of eighty consecutive patients who were diagnosed with vestibular migraine and determined the frequency of auditory dysfunction in these patients. The second part was a prospective case-control study in which we compared the audiological parameters of thirty patients diagnosed with VM with those of thirty normal controls to look for any significant differences. Results: The frequency of vestibular migraine in our population is 22%. The frequency of hearing loss in VM is 33%. Conclusion: There is a significant difference between cases and controls with regard to the presence of distortion product otoacoustic emissions in both ears. This finding suggests that the hearing loss in VM is cochlear in origin.

  12. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200, and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than those to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
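
    To make the stimulus construction concrete, the Python sketch below builds "repeating frozen noise" by tiling a single frozen noise segment whose length is 1/rate seconds, together with ordinary white noise of the same duration. The sampling rate, duration, and seeds are arbitrary choices for this example, not the study's parameters.

        import numpy as np

        def frozen_periodic_noise(rate_hz, duration_s, sr=44100, seed=0):
            """White-noise segment of length 1/rate_hz, repeated to fill duration_s."""
            rng = np.random.default_rng(seed)
            segment = rng.normal(size=int(round(sr / rate_hz)))
            n_total = int(round(sr * duration_s))
            reps = int(np.ceil(n_total / len(segment)))
            return np.tile(segment, reps)[:n_total]

        def white_noise(duration_s, sr=44100, seed=1):
            return np.random.default_rng(seed).normal(size=int(round(sr * duration_s)))

        # Stimuli matching the repetition rates above (5, 10, 50, 200, 500 Hz), plus white noise.
        stimuli = {f"{r} Hz": frozen_periodic_noise(r, 2.0) for r in (5, 10, 50, 200, 500)}
        stimuli["white"] = white_noise(2.0)
        print({name: sig.shape for name, sig in stimuli.items()})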

  13. Current status of auditory aging and anti-aging research.

    Science.gov (United States)

    Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei

    2014-01-01

    The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows age-related decline in cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, and aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of peripheral auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of anti-senescence approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions.

  14. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  15. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Full Text Available Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg body weight of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared with the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg body weight of caffeine. Wave I latency decreased significantly after 3 mg/kg body weight of caffeine (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade produces changes in conduction in the central auditory pathway.
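
    As an illustration of the statistics named above, a minimal Python/scipy sketch with hypothetical wave-V latencies (the values below are invented for the example, not the study's data):

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare, wilcoxon

    # Hypothetical wave-V latencies (ms) for the same subjects
    # measured after 0, 2 and 3 mg/kg body-weight caffeine.
    lat_0mg = np.array([5.68, 5.72, 5.65, 5.80, 5.75, 5.70])
    lat_2mg = np.array([5.60, 5.66, 5.58, 5.71, 5.69, 5.62])
    lat_3mg = np.array([5.55, 5.61, 5.52, 5.66, 5.63, 5.57])

    # Omnibus test across the three repeated conditions.
    chi2, p_friedman = friedmanchisquare(lat_0mg, lat_2mg, lat_3mg)
    print(f"Friedman: chi2={chi2:.2f}, p={p_friedman:.3f}")

    # Pairwise follow-up (e.g. baseline vs 3 mg/kg).
    stat, p_wilcoxon = wilcoxon(lat_0mg, lat_3mg)
    print(f"Wilcoxon 0 vs 3 mg/kg: W={stat:.1f}, p={p_wilcoxon:.3f}")
    ```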

  16. Facilitated auditory detection for speech sounds.

    Science.gov (United States)

    Signoret, Carine; Gaudrain, Etienne; Tillmann, Barbara; Grimault, Nicolas; Perrin, Fabien

    2011-01-01

    If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  17. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudo-words and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from sub-threshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two alternative forced choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudo-words) were better detected than non-phonological stimuli (complex sounds), presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudo-words was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  18. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  19. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood (

  20. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent manner...
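
    A toy numpy sketch of the temporal-coherence principle summarised above (an illustration of the idea, not the thesis code): envelopes of two tonotopic channels are compared by correlation, and a low coherence value is read as evidence that the sounds belong to separate streams.

    ```python
    import numpy as np

    FS = 100  # envelope sampling rate in Hz (assumed)


    def channel_envelopes(duration_s, rate_hz, synchronous, rng):
        """Toy envelopes of two tonotopic channels: pulse trains that are
        either temporally coherent (synchronous) or interleaved."""
        n = int(duration_s * FS)
        t = np.arange(n) / FS
        period = 1.0 / rate_hz
        a = ((t % period) < 0.020).astype(float)          # 20-ms bursts
        shift = 0.0 if synchronous else period / 2.0      # interleave channel B
        b = (((t - shift) % period) < 0.020).astype(float)
        return (a + 0.05 * rng.standard_normal(n),
                b + 0.05 * rng.standard_normal(n))


    def temporal_coherence(env_a, env_b):
        """Correlation coefficient between the two channel envelopes."""
        return float(np.corrcoef(env_a, env_b)[0, 1])


    rng = np.random.default_rng(1)
    for sync in (True, False):
        a, b = channel_envelopes(2.0, rate_hz=5, synchronous=sync, rng=rng)
        c = temporal_coherence(a, b)
        verdict = "one stream" if c > 0.5 else "two streams"
        print(f"synchronous={sync}: coherence={c:.2f} -> {verdict}")
    ```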

  1. Auditory imagery and the poor-pitch singer.

    Science.gov (United States)

    Pfordresher, Peter Q; Halpern, Andrea R

    2013-08-01

    The vocal imitation of pitch by singing requires one to plan laryngeal movements on the basis of anticipated target pitch events. This process may rely on auditory imagery, which has been shown to activate motor planning areas. As such, we hypothesized that poor-pitch singing, although not typically associated with deficient pitch perception, may be associated with deficient auditory imagery. Participants vocally imitated simple pitch sequences by singing, discriminated pitch pairs on the basis of pitch height, and completed an auditory imagery self-report questionnaire (the Bucknell Auditory Imagery Scale). The percentage of trials participants sung in tune correlated significantly with self-reports of vividness for auditory imagery, although not with the ability to control auditory imagery. Pitch discrimination was not predicted by auditory imagery scores. The results thus support a link between auditory imagery and vocal imitation.

  2. Intradermal melanocytic nevus of the external auditory canal.

    Science.gov (United States)

    Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P

    2005-01-01

    Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature reviewed.

  3. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  4. ABR and auditory P300 findings in children with ADHD

    OpenAIRE

    Schochat Eliane; Scheuer Claudia Ines; Andrade Ênio Roberto de

    2002-01-01

    Auditory processing disorders (APD), also referred to as central auditory processing disorders (CAPD), and attention deficit hyperactivity disorders (ADHD) have become popular diagnostic entities for school-age children. A high incidence of ADHD comorbid with communication disorders and auditory processing disorder has been demonstrated. The aim of this study was to investigate ABR and P300 auditory evoked potentials in children with ADHD, in a double-blind study. Twenty-one children, ages bet...

  5. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the
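
    For concreteness, a small sketch of the two performance measures named above, pitch accuracy and inter-onset-interval (IOI) variability, computed on hypothetical note data (the numbers are invented for illustration):

    ```python
    import numpy as np

    # Hypothetical MIDI pitches for a target melody and one recalled performance.
    target_pitches = [60, 62, 64, 65, 67, 65, 64, 62]
    produced_pitches = [60, 62, 64, 67, 67, 65, 64, 62]

    # Hypothetical onset times (s) of the produced quarter notes.
    produced_onsets = [0.00, 0.52, 1.01, 1.55, 2.04, 2.53, 3.06, 3.55]

    # Pitch accuracy: percentage of positions with the correct pitch.
    correct = sum(p == t for p, t in zip(produced_pitches, target_pitches))
    pitch_accuracy = 100.0 * correct / len(target_pitches)

    # Temporal regularity: variability of inter-onset intervals (IOIs),
    # expressed as the coefficient of variation (lower = more regular).
    iois = np.diff(produced_onsets)
    cv_ioi = np.std(iois) / np.mean(iois)

    print(f"pitch accuracy = {pitch_accuracy:.1f}%")
    print(f"IOI coefficient of variation = {cv_ioi:.3f}")
    ```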

  6. Are auditory percepts determined by experience?

    Science.gov (United States)

    Monson, Brian B; Han, Shui'Er; Purves, Dale

    2013-01-01

    Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  7. Are auditory percepts determined by experience?

    Directory of Open Access Journals (Sweden)

    Brian B Monson

    Full Text Available Audition--what listeners hear--is generally studied in terms of the physical properties of sound stimuli and physiological properties of the auditory system. Based on recent work in vision, we here consider an alternative perspective that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

  8. Phonetic categorization in auditory word perception.

    Science.gov (United States)

    Ganong, W F

    1980-02-01

    To investigate the interaction in speech perception of auditory information and lexical knowledge (in particular, knowledge of which phonetic sequences are words), acoustic continua varying in voice onset time were constructed so that for each acoustic continuum, one of the two possible phonetic categorizations made a word and the other did not. For example, one continuum ranged between the word dash and the nonword tash; another used the nonword dask and the word task. In two experiments, subjects showed a significant lexical effect--that is, a tendency to make phonetic categorizations that make words. This lexical effect was greater at the phoneme boundary (where auditory information is ambiguous) than at the ends of the continua. Hence the lexical effect must arise at a stage of processing sensitive to both lexical knowledge and auditory information.

  9. [Functional neuroimaging of auditory hallucinations in schizophrenia].

    Science.gov (United States)

    Font, M; Parellada, E; Fernández-Egea, E; Bernardo, M; Lomeña, F

    2003-01-01

    The neurobiological bases underlying the generation of auditory hallucinations, a distressing and paradigmatic symptom of schizophrenia, are still unknown in spite of in-depth phenomenological descriptions. This work aims to critically review the literature published in recent years, focusing on functional neuroimaging studies (PET, SPECT, fMRI) of auditory hallucinations. The studies are classified according to whether they examine sensory activation, trait or state. The two main hypotheses proposed to explain the phenomenon, external speech versus subvocal or inner speech, are also explained. Finally, the most recent unitary theory and the limitations of the published studies are commented on. The need for further research in this still underdeveloped field is emphasized, in order to better understand the etiopathogenesis of auditory hallucinations in schizophrenia.

  10. The mitochondrial connection in auditory neuropathy.

    Science.gov (United States)

    Cacace, Anthony T; Pinheiro, Joaquim M B

    2011-01-01

    'Auditory neuropathy' (AN), the term used to codify a primary degeneration of the auditory nerve, can be linked directly or indirectly to mitochondrial dysfunction. These observations are based on the expression of AN in known mitochondrial-based neurological diseases (Friedreich's ataxia, Mohr-Tranebjærg syndrome), in conditions where defects in axonal transport, protein trafficking, and fusion processes perturb and/or disrupt mitochondrial dynamics (Charcot-Marie-Tooth disease, autosomal dominant optic atrophy), in a common neonatal condition known to be toxic to mitochondria (hyperbilirubinemia), and where respiratory chain deficiencies produce reductions in oxidative phosphorylation that adversely affect peripheral auditory mechanisms. This body of evidence is solidified by data derived from temporal bone and genetic studies, biochemical, molecular biologic, behavioral, electroacoustic, and electrophysiological investigations.

  11. The auditory hallucination: a phenomenological survey.

    Science.gov (United States)

    Nayani, T H; David, A S

    1996-01-01

    A comprehensive semi-structured questionnaire was administered to 100 psychotic patients who had experienced auditory hallucinations. The aim was to extend the phenomenology of the hallucination into areas of both form and content and also to guide future theoretical development. All subjects heard 'voices' talking to or about them. The location of the voice, its characteristics and the nature of address were described. Precipitants and alleviating factors plus the effect of the hallucinations on the sufferer were identified. Other hallucinatory experiences, thought insertion and insight were examined for their inter-relationships. A pattern emerged of increasing complexity of the auditory-verbal hallucination over time by a process of accretion, with the addition of more voices and extended dialogues, and more intimacy between subject and voice. Such evolution seemed to relate to the lessening of distress and improved coping. These findings should inform both neurological and cognitive accounts of the pathogenesis of auditory hallucinations in psychotic disorders.

  12. Cooperative dynamics in auditory brain response

    CERN Document Server

    Kwapien, J; Liu, L C; Ioannides, A A

    1998-01-01

    Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right and binaural stimulation were used, in separate runs, for each subject. The resulting time-series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by a direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
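
    A compact sketch of a histogram-based mutual-information estimate of the kind referred to above (the study's actual estimator and parameters are not specified here); the toy signals assume the right hemisphere leads the left by 15 ms:

    ```python
    import numpy as np


    def mutual_information(x, y, bins=16):
        """Histogram-based estimate of I(X;Y) in bits for two 1-D signals."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))


    # Toy left/right auditory-cortex activity: right leads left by 15 ms.
    fs, lag_ms = 1000, 15
    rng = np.random.default_rng(2)
    right = rng.standard_normal(5000)
    left = np.roll(right, int(fs * lag_ms / 1000)) + 0.5 * rng.standard_normal(5000)

    for lag in (0, 15, 30):  # scan candidate lags in ms
        k = int(fs * lag / 1000)
        mi = mutual_information(right[:-k or None], left[k:])
        print(f"lag {lag:2d} ms: MI = {mi:.3f} bits")
    ```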

  13. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Full Text Available Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanisms, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related decline in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82 performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that age-related decline in auditory temporal resolution and in working memory are two independent processes.

  14. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word level receptive difficulties using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological age controls in an auditory lexical decision task, with disproportionately low performance on nonsense words...

  15. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  16. Transient auditory hallucinations in an adolescent.

    Science.gov (United States)

    Skokauskas, Norbert; Pillay, Devina; Moran, Tom; Kahn, David A

    2010-05-01

    In adolescents, hallucinations can be a transient illness or can be associated with non-psychotic psychopathology, psychosocial adversity, or a physical illness. We present the case of a 15-year-old secondary-school student who presented with a 1-month history of first onset auditory hallucinations, which had been increasing in frequency and severity, and mild paranoid ideation. Over a 10-week period, there was a gradual diminution, followed by a complete resolution, of symptoms. We discuss issues regarding the diagnosis and prognosis of auditory hallucinations in adolescents.

  17. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.

  18. Reading and Auditory-Visual Equivalences

    Science.gov (United States)

    Sidman, Murray

    1971-01-01

    A retarded boy, unable to read orally or with comprehension, was taught to match spoken to printed words and was then capable of reading comprehension (matching printed words to pictures) and oral reading (naming printed words aloud), demonstrating that certain learned auditory-visual equivalences are sufficient prerequisites for reading…

  19. Tuning up the developing auditory CNS.

    Science.gov (United States)

    Sanes, Dan H; Bao, Shaowen

    2009-04-01

    Although the auditory system has limited information processing resources, the acoustic environment is infinitely variable. To properly encode the natural environment, the developing central auditory system becomes somewhat specialized through experience-dependent adaptive mechanisms that operate during a sensitive time window. Recent studies have demonstrated that cellular and synaptic plasticity occurs throughout the central auditory pathway. Acoustic-rearing experiments can lead to an over-representation of the exposed sound frequency, and this is associated with specific changes in frequency discrimination. These forms of cellular plasticity are manifest in brain regions, such as midbrain and cortex, which interact through feed-forward and feedback pathways. Hearing loss leads to a profound re-weighting of excitatory and inhibitory synaptic gain throughout the auditory CNS, and this is associated with an over-excitability that is observed in vivo. Further behavioral and computational analyses may provide insights into how these cellular and systems plasticity effects underlie the development of cognitive functions such as speech perception.

  20. Auditory Integration Training: The Magical Mystery Cure.

    Science.gov (United States)

    Tharpe, Anne Marie

    1999-01-01

    This article notes the enthusiastic reception received by auditory integration training (AIT) for children with a wide variety of disorders including autism but raises concerns about this alternative treatment practice. It offers reasons for cautious evaluation of AIT prior to clinical implementation and summarizes current research findings. (DB)

  1. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  2. Development of Receiver Stimulator for Auditory Prosthesis

    Directory of Open Access Journals (Sweden)

    K. Raja Kumar

    2010-05-01

    Full Text Available The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve via an array of electrodes with an electric current, allowing them to understand speech. The AP system consists of two hardware functional units: the Body Worn Speech Processor (BWSP) and the Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a speech data decoder, DAC, ADC, constant current generator, electrode selection logic, switch matrix and a simulated electrode resistance array. The laboratory model of the speech processor is designed to implement the Continuous Interleaved Sampling (CIS) speech processing algorithm, which generates the information required for electrode stimulation based on the speech/audio data. The speech data decoder receives the encoded speech data via an inductive RF transcutaneous link from the speech processor. Twelve channels of the auditory prosthesis, with eight selectable electrodes for stimulation of the simulated electrode resistance array, are used for testing. The RSAP is validated using test data generated by the laboratory prototype of the speech processor. The experimental results are obtained from specific speech/sound tests using a high-speed data acquisition system and found to be satisfactory.
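
    A heavily simplified sketch of the Continuous Interleaved Sampling (CIS) idea mentioned above (not the authors' implementation; sampling rate, pulse rate and filter orders are assumptions): the audio is split into band-pass channels, channel envelopes are extracted, and each channel's envelope is sampled at staggered time slots so that no two electrodes are pulsed simultaneously.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 16000          # audio sampling rate (assumed)
    N_CHANNELS = 12     # as in the twelve-channel prosthesis described above
    PULSE_RATE = 800    # pulses per second per channel (assumed)


    def cis_envelopes(audio, f_lo=200.0, f_hi=7000.0):
        """Band-pass filter bank + envelope extraction (rectify + low-pass)."""
        edges = np.geomspace(f_lo, f_hi, N_CHANNELS + 1)
        b_lp, a_lp = butter(2, 200.0 / (FS / 2))          # 200-Hz envelope low-pass
        envelopes = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
            band = filtfilt(b, a, audio)
            env = filtfilt(b_lp, a_lp, np.abs(band))      # full-wave rectification
            envelopes.append(np.clip(env, 0.0, None))
        return np.array(envelopes)                        # shape: (channels, samples)


    def interleaved_pulse_amplitudes(envelopes):
        """Sample each channel's envelope at staggered times so that no two
        electrodes are stimulated simultaneously (the 'interleaved' part of CIS)."""
        n_channels, n_samples = envelopes.shape
        frame = int(FS / PULSE_RATE)                      # samples per stimulation cycle
        offsets = (np.arange(n_channels) * frame) // n_channels
        times = [np.arange(off, n_samples, frame) for off in offsets]
        amps = [envelopes[ch, t] for ch, t in enumerate(times)]
        return times, amps


    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        audio = rng.standard_normal(FS)                   # 1 s of noise as a stand-in for speech
        env = cis_envelopes(audio)
        times, amps = interleaved_pulse_amplitudes(env)
        print("channel 0: first pulse times (samples):", times[0][:3])
        print("channel 1: first pulse times (samples):", times[1][:3])
    ```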

  3. Auditory Processing Disorder: School Psychologist Beware?

    Science.gov (United States)

    Lovett, Benjamin J.

    2011-01-01

    An increasing number of students are being diagnosed with auditory processing disorder (APD), but the school psychology literature has largely neglected this controversial condition. This article reviews research on APD, revealing substantial concerns with assessment tools and diagnostic practices, as well as insufficient research regarding many…

  4. The Goldilocks Effect in Infant Auditory Attention

    Science.gov (United States)

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  5. Auditory Training with Frequent Communication Partners

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent; Sommers, Mitchell; Barcroft, Joe

    2016-01-01

    Purpose: Individuals with hearing loss engage in auditory training to improve their speech recognition. They typically practice listening to utterances spoken by unfamiliar talkers but never to utterances spoken by their most frequent communication partner (FCP)--speech they most likely desire to recognize--under the assumption that familiarity…

  6. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  7. Affective Priming with Auditory Speech Stimuli

    Science.gov (United States)

    Degner, Juliane

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…

  8. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    Degner, J.

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  9. Auditory pathology in cri-du-chat (5p-) syndrome: phenotypic evidence for auditory neuropathy.

    Science.gov (United States)

    Swanepoel, D

    2007-10-01

    5p-(cri-du-chat syndrome) is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities in audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating the auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy or neural dys-synchrony may be another phenotype of the condition possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications are for routine and diagnostic specific assessments of auditory functioning and for employment of non-verbal communication methods in early intervention.

  10. Interhemispheric auditory connectivity: structure and function related to auditory verbal hallucinations.

    Science.gov (United States)

    Steinmann, Saskia; Leicht, Gregor; Mulert, Christoph

    2014-01-01

    Auditory verbal hallucinations (AVH) are one of the most common and most distressing symptoms of schizophrenia. Despite fundamental research, the underlying neurocognitive and neurobiological mechanisms are still a matter of debate. Previous studies suggested that "hearing voices" is associated with a number of factors including local deficits in the left auditory cortex and a disturbed connectivity of frontal and temporoparietal language-related areas. In addition, it is hypothesized that the interhemispheric pathways connecting right and left auditory cortices might be involved in the pathogenesis of AVH. Findings based on Diffusion-Tensor-Imaging (DTI) measurements revealed a remarkable interindividual variability in size and shape of the interhemispheric auditory pathways. Interestingly, schizophrenia patients suffering from AVH exhibited higher fractional anisotropy (FA) in the interhemispheric fibers than non-hallucinating patients; thus, higher FA-values indicate an increased severity of AVH. Moreover, a dichotic listening (DL) task showed that the interindividual variability in the interhemispheric auditory pathways was reflected in the behavioral outcome: stronger pathways supported better information transfer and consequently improved speech perception. This finding indicates a specific structure-function relationship, which seems to be interindividually variable. This review focuses on recent findings concerning the structure-function relationship of the interhemispheric pathways in controls, hallucinating and non-hallucinating schizophrenia patients and concludes that changes in the structural and functional connectivity of auditory areas are involved in the pathophysiology of AVH.

  11. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

  12. Representation of Reward Feedback in Primate Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Michael eBrosch

    2011-02-01

    Full Text Available It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than lately recognized.

  13. Representation of reward feedback in primate auditory cortex.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex that is related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward-size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than lately recognized.

  14. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
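
    A minimal sketch of the frequency-tagging analysis logic described above (tag frequencies, sampling rate and signal are assumed for illustration): the response spectrum is evaluated at the two stream modulation frequencies and the power at the attended tag is compared with the power at the ignored tag.

    ```python
    import numpy as np

    FS = 1000.0            # MEG/EEG sampling rate in Hz (assumed)
    F_ATTENDED = 40.0      # modulation frequency of the attended stream (assumed)
    F_IGNORED = 35.0       # modulation frequency of the ignored stream (assumed)


    def tag_power(epoch, freq):
        """Power of the steady-state response at one tagging frequency."""
        spectrum = np.fft.rfft(epoch * np.hanning(len(epoch)))
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
        idx = np.argmin(np.abs(freqs - freq))
        return np.abs(spectrum[idx]) ** 2


    # Toy sensor signal: stronger 40-Hz ASSR (attended) than 35-Hz ASSR (ignored).
    rng = np.random.default_rng(4)
    t = np.arange(0, 2.0, 1.0 / FS)
    epoch = (1.0 * np.sin(2 * np.pi * F_ATTENDED * t)
             + 0.4 * np.sin(2 * np.pi * F_IGNORED * t)
             + rng.standard_normal(t.size))

    p_att = tag_power(epoch, F_ATTENDED)
    p_ign = tag_power(epoch, F_IGNORED)
    print(f"attended-tag power / ignored-tag power = {p_att / p_ign:.2f}")
    ```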

  15. Comparison of Electrophysiological Auditory Measures in Fishes.

    Science.gov (United States)

    Maruska, Karen P; Sisneros, Joseph A

    2016-01-01

    Sounds provide fishes with important information used to mediate behaviors such as predator avoidance, prey detection, and social communication. How we measure auditory capabilities in fishes, therefore, has crucial implications for interpreting how individual species use acoustic information in their natural habitat. Recent analyses have highlighted differences between behavioral and electrophysiologically determined hearing thresholds, but less is known about how physiological measures at different auditory processing levels compare within a single species. Here we provide one of the first comparisons of auditory threshold curves determined by different recording methods in a single fish species, the soniferous Hawaiian sergeant fish Abudefduf abdominalis, and review past studies on representative fish species with tuning curves determined by different methods. The Hawaiian sergeant is a colonial benthic-spawning damselfish (Pomacentridae) that produces low-frequency, low-intensity sounds associated with reproductive and agonistic behaviors. We compared saccular potentials, auditory evoked potentials (AEP), and single neuron recordings from acoustic nuclei of the hindbrain and midbrain torus semicircularis. We found that hearing thresholds were lowest at low frequencies (~75-300 Hz) for all methods, which matches the spectral components of sounds produced by this species. However, thresholds at best frequency determined via single cell recordings were ~15-25 dB lower than those measured by AEP and saccular potential techniques. While none of these physiological techniques gives us a true measure of the auditory "perceptual" abilities of a naturally behaving fish, this study highlights that different methodologies can reveal similar detectable range of frequencies for a given species, but absolute hearing sensitivity may vary considerably.

  16. Impairments of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis

  17. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  18. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  19. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  20. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  1. Deafness in cochlear and auditory nerve disorders.

    Science.gov (United States)

    Hopkins, Kathryn

    2015-01-01

    Sensorineural hearing loss is the most common type of hearing impairment worldwide. It arises as a consequence of damage to the cochlea or auditory nerve, and several structures are often affected simultaneously. There are many causes, including genetic mutations affecting the structures of the inner ear, and environmental insults such as noise, ototoxic substances, and hypoxia. The prevalence increases dramatically with age. Clinical diagnosis is most commonly accomplished by measuring detection thresholds and comparing these to normative values to determine the degree of hearing loss. In addition to causing insensitivity to weak sounds, sensorineural hearing loss has a number of adverse perceptual consequences, including loudness recruitment, poor perception of pitch and auditory space, and difficulty understanding speech, particularly in the presence of background noise. The condition is usually incurable; treatment focuses on restoring the audibility of sounds made inaudible by hearing loss using either hearing aids or cochlear implants.

  2. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad hosein Hekmat Ara

    1999-03-01

    Hearing is one of the most important senses of the human being. Sound waves travel through the air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of "impedance matching". The sound energy is then converted into a traveling wave that is transmitted according to its specific frequency and stimulates the organ of Corti. The receptors in this organ and their synapses transform the mechanical waves into neural signals and relay them to the brain. The central nervous system pathway that conducts auditory signals to the auditory cortex is briefly explained here.

  3. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    The auditory evoked potential (AEP) is an electrical signal that can be recorded from electrodes attached to the scalp of a human subject when a sound is presented. The signal is considered to reflect neural activity in response to the acoustic stimulation and is a well established clinical...... clinically and in research towards using realistic and complex stimuli, such as speech, to electrophysiologically assess the human hearing. However, to interpret the AEP generation to complex sounds, the potential patterns in response to simple stimuli needs to be understood. Therefore, the model was used...... to simulate auditory brainstem responses (ABRs) evoked by classic stimuli like clicks, tone bursts and chirps. The ABRs to these simple stimuli were compared to literature data and the model was shown to predict the frequency dependence of tone-burst ABR wave-V latency and the level-dependence of ABR wave...

  4. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to follow the emission of each sound source independently over time. Although the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have attracted interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  5. Cognitive mechanisms associated with auditory sensory gating.

    Science.gov (United States)

    Jones, L A; Hills, P J; Dick, K M; Jones, S P; Bright, P

    2016-02-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification.
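
    The analysis described above correlates cognitive task scores with a P50 gating measure and then controls for fluid intelligence and working memory. A minimal sketch of that kind of partial-correlation step, on simulated data with hypothetical variable names, might look as follows (this is not the authors' code):

    ```python
    # Zero-order vs. partial correlation between a cognitive score and a gating ratio,
    # partialling out working memory and fluid intelligence. Simulated data only.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n = 60
    wm = rng.standard_normal(n)                  # working-memory composite (hypothetical)
    gf = rng.standard_normal(n)                  # fluid-intelligence score (hypothetical)
    latent_inhibition = 0.5 * wm + rng.standard_normal(n)
    gating_ratio = 0.4 * latent_inhibition + 0.3 * wm + rng.standard_normal(n)

    # Zero-order correlation
    r, p = pearsonr(latent_inhibition, gating_ratio)

    # Partial correlation: correlate residuals after regressing out wm and gf
    covars = sm.add_constant(np.column_stack([wm, gf]))
    res_x = sm.OLS(latent_inhibition, covars).fit().resid
    res_y = sm.OLS(gating_ratio, covars).fit().resid
    r_partial, p_partial = pearsonr(res_x, res_y)

    print(f"zero-order r={r:.2f} (p={p:.3f}); partial r={r_partial:.2f} (p={p_partial:.3f})")
    ```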

  6. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    The external auditory canal is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  7. Midbrain auditory selectivity to natural sounds.

    Science.gov (United States)

    Wohlgemuth, Melville J; Moss, Cynthia F

    2016-03-01

    This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of the acoustic signals used by the bat for spatial orientation (sonar vocalizations), offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
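
    The abstract above mentions fitting a generalized linear model (GLM) to predict neural responses from stimulus features. As a hedged illustration of the general technique, the sketch below fits a Poisson GLM to simulated spike counts using lagged values of a one-dimensional stimulus envelope; the real analysis used spectrotemporal features of bat sonar sounds, so this is only a simplified analogue.

    ```python
    # Poisson GLM relating spike counts to recent stimulus history (simulated data).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_bins, n_lags = 2000, 10
    stim = rng.standard_normal(n_bins)                     # 1-D stimulus envelope

    # Design matrix of lagged stimulus values (a simple temporal-filter model)
    X = np.column_stack([np.roll(stim, lag) for lag in range(n_lags)])
    X[:n_lags, :] = 0.0                                    # zero out wrapped-around samples
    true_filter = np.exp(-np.arange(n_lags) / 3.0)
    rate = np.exp(0.5 * (X @ true_filter) - 1.0)
    spikes = rng.poisson(rate)

    glm = sm.GLM(spikes, sm.add_constant(X), family=sm.families.Poisson()).fit()
    print(glm.params[1:])                                  # approximately recovers the filter
    ```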

  8. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominent a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by a click a few milliseconds before. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.
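
    The recovery measure described above (the response to the second click of a pair expressed relative to the single-click response) can be computed in a few lines. The sketch below uses simulated spike counts and an assumed 20 ms recovery time constant purely for illustration:

    ```python
    # Relative response to the second click as a function of inter-click interval.
    import numpy as np

    rng = np.random.default_rng(3)
    intervals_ms = np.array([2, 5, 10, 20, 50, 100, 200])
    single_click_count = 8.0                               # mean spike count to a lone click

    # Hypothetical second-click counts recovering with a ~20 ms time constant, plus noise
    second_click_counts = single_click_count * (1 - np.exp(-intervals_ms / 20.0))
    second_click_counts += rng.normal(0, 0.3, size=intervals_ms.size)

    recovery = second_click_counts / single_click_count    # 1.0 = full recovery
    for ici, rec in zip(intervals_ms, recovery):
        print(f"interval {ici:4d} ms: relative response {rec:.2f}")
    ```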

  9. Brainstem auditory evoked response: application in neurology

    Directory of Open Access Journals (Sweden)

    Carlos A. M. Guerreiro

    1982-03-01

    The technique that we use for eliciting brainstem auditory evoked responses (BAERs) is described. BAERs are a non-invasive and reliable clinical test when carefully performed. This test is indicated in the evaluation of disorders that may potentially involve the brainstem, such as coma, multiple sclerosis, posterior fossa tumors and others. Unsuspected lesions with normal radiologic studies (including CT scan) can be revealed by the BAER.

  10. Cognitive mechanisms associated with auditory sensory gating

    OpenAIRE

    Jones, L. A.; Hills, P.J.; Dick, K.M.; Jones, S. P.; Bright, P

    2015-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants addit...

  11. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    OBJECTIVE: To assess auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied in 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance differed significantly from the control group in both kinds of tests. In the non-verbal test, identification of stimuli presented to the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  12. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  13. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  14. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of the vowels /a/ and /u/, and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between the speech-gesture and non-speech-gesture observation conditions, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  15. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  16. Auditory evoked potentials in peripheral vestibular disorder individuals

    Directory of Open Access Journals (Sweden)

    Matas, Carla Gentile

    2011-07-01

    Introduction: The auditory and vestibular systems are housed in the same peripheral receptor organ; however, they enter the CNS along different pathways, creating a number of connections and reaching a wide area of the encephalon. Despite these different pathways, some disorders can impair both systems. Tests such as auditory evoked potentials can help establish a diagnosis when vestibular alterations are present. Objective: To describe auditory evoked potential results in individuals with peripheral vestibular disorders complaining of dizziness or vertigo and in normal individuals with the same complaint. Methods: Short-, middle- and long-latency auditory evoked potentials were recorded in a cross-sectional prospective study. Conclusion: Individuals complaining of dizziness or vertigo can show changes in the BAEP (brainstem auditory evoked potential), MLAEP (middle latency auditory evoked potential) and P300.

  17. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  18. Binaural technology for e.g. rendering auditory virtual environments

    DEFF Research Database (Denmark)

    Hammershøi, Dorte

    2008-01-01

    , helped mediate the understanding that if the transfer functions could be mastered, then important dimensions of the auditory percept could also be controlled. He understood early the potential of using the HRTFs and numerical sound transmission analysis programs for rendering auditory virtual...... environments. Jens Blauert participated in many European cooperation projects exploring this field (and others), among others the SCATIS project addressing the auditory-tactile dimensions in the absence of visual information....

  19. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  20. [Auditory guidance systems for the visually impaired people].

    Science.gov (United States)

    He, Jing; Nie, Min; Luo, Lan; Tong, Shanbao; Niu, Jinhai; Zhu, Yisheng

    2010-04-01

    Visually impaired people face many inconveniences because of the loss of vision. Therefore, scientists are trying to design various guidance systems to improve the lives of the blind. Based on sensory substitution, auditory guidance has become an interesting topic in the field of biomedical engineering. In this paper, we provide a state-of-the-art review of auditory guidance systems. Although many technical challenges remain, auditory guidance systems could be a useful aid for visually impaired people.

  1. Auditory cortex basal activity modulates cochlear responses in chinchillas.

    Directory of Open Access Journals (Sweden)

    Alex León

    BACKGROUND: The auditory efferent system has unique neuroanatomical pathways that connect the cerebral cortex with sensory receptor cells. Pyramidal neurons located in layers V and VI of the primary auditory cortex constitute descending projections to the thalamus, inferior colliculus, and even directly to the superior olivary complex and to the cochlear nucleus. Efferent pathways are connected to the cochlear receptor by the olivocochlear system, which innervates outer hair cells and auditory nerve fibers. The functional role of the cortico-olivocochlear efferent system remains debated. We hypothesized that auditory cortex basal activity modulates cochlear and auditory-nerve afferent responses through the efferent system. METHODOLOGY/PRINCIPAL FINDINGS: Cochlear microphonics (CM), auditory-nerve compound action potentials (CAP) and auditory cortex evoked potentials (ACEP) were recorded in twenty anesthetized chinchillas, before, during and after auditory cortex deactivation by two methods: lidocaine microinjections or cortical cooling with cryoloops. Auditory cortex deactivation induced a transient reduction in ACEP amplitudes in fifteen animals (deactivation experiments) and a permanent reduction in five chinchillas (lesion experiments). We found significant changes in the amplitude of CM in both types of experiments, the most common effect being a CM decrease, found in fifteen animals. Concomitant with CM amplitude changes, we found CAP increases in seven chinchillas and CAP reductions in thirteen animals. Although ACEP amplitudes were completely recovered after ninety minutes in deactivation experiments, only partial recovery was observed in the magnitudes of cochlear responses. CONCLUSIONS/SIGNIFICANCE: These results show that blocking ongoing auditory cortex activity modulates CM and CAP responses, demonstrating that cortico-olivocochlear circuits regulate auditory nerve and cochlear responses through a basal efferent tone. The diversity of the

  2. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  3. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein) and the myosin binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  4. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    Science.gov (United States)

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.
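
    A minimal sketch of the group comparison implied by the reported means and standard deviations is shown below. The data are simulated around those reported values, and the specific tests (paired and independent t tests) are illustrative assumptions rather than the authors' exact statistics:

    ```python
    # Pre/post and between-group comparison of simulated C3-A1 MLR amplitudes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    capd_pre = rng.normal(0.84, 0.39, 30)               # microvolts, n = 30 (C)APD children
    capd_post = capd_pre + rng.normal(0.75, 0.5, 30)    # training-related increase (simulated)
    control = rng.normal(1.18, 0.65, 22)                # n = 22 untrained controls

    # Within-group change (paired) and between-group differences (independent)
    t_paired, p_paired = stats.ttest_rel(capd_post, capd_pre)
    t_pre, p_pre = stats.ttest_ind(capd_pre, control, equal_var=False)
    t_post, p_post = stats.ttest_ind(capd_post, control, equal_var=False)
    print(f"pre vs post (CAPD): p={p_paired:.4f}")
    print(f"CAPD vs control pre: p={p_pre:.4f}; post: p={p_post:.4f}")
    ```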

  5. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder

    Directory of Open Access Journals (Sweden)

    E. Schochat

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 µV (mean), 0.39 (SD - standard deviation) for the (C)APD group and 1.18 µV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 µV (mean), 0.31 (SD) for the (C)APD group and 1.00 µV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 µV (mean), 0.82 (SD)] and C3-A2 [1.24 µV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  6. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2015-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  7. Sound objects – Auditory objects – Musical objects

    DEFF Research Database (Denmark)

    Hjortkjær, Jens

    2016-01-01

    The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects......’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...

  8. Extrinsic sound stimulations and development of periphery auditory synapses

    Institute of Scientific and Technical Information of China (English)

    Kun Hou; Shiming Yang; Ke Liu

    2015-01-01

    The development of auditory synapses is a key process for the maturation of hearing function. However, it is still debated whether the development of auditory synapses is dominated by acquired sound stimulation. In this review, we summarize relevant publications from recent decades to address this issue. Most reported data suggest that extrinsic sound stimulation does affect, but does not govern, the development of peripheral auditory synapses. Overall, peripheral auditory synapses develop and mature according to their intrinsic mechanisms, building up the synaptic connections between sensory neurons and/or interneurons.

  9. Evaluation of peripheral compression and auditory nerve fiber intensity coding using auditory steady-state responses

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; M. Harte, James; Epp, Bastian

    2015-01-01

    The compressive nonlinearity of the auditory system is assumed to be an epiphenomenon of a healthy cochlea and, particularly, of outer-hair cell function. Another ability of the healthy auditory system is to enable communication in acoustical environments with high-level background noises....... Evaluation of these properties provides information about the health state of the system. It has been shown that a loss of outer hair cells leads to a reduction in peripheral compression. It has also recently been shown in animal studies that noise over-exposure, producing temporary threshold shifts, can...

  10. Diversity and prevalence of gastrointestinal parasites in seven non-human primates of the Taï National Park, Côte d’Ivoire

    Directory of Open Access Journals (Sweden)

    Kouassi Roland Yao Wa

    2015-01-01

    Parasites and infectious diseases are well-known threats to primate populations. The main objective of this study was to provide baseline data on fecal parasites in the cercopithecid monkeys inhabiting Côte d’Ivoire’s Taï National Park. Seven of eight cercopithecid species present in the park were sampled: Cercopithecus diana, Cercopithecus campbelli, Cercopithecus petaurista, Procolobus badius, Procolobus verus, Colobus polykomos, and Cercocebus atys. We collected 3142 monkey stool samples between November 2009 and December 2010. Stool samples were processed by direct wet mount examination, formalin-ethyl acetate concentration, and MIF (merthiolate, iodine, formalin) concentration methods. Slides were examined under microscope and parasite identification was based on the morphology of cysts, eggs, and adult worms. A total of 23 species of parasites was recovered, including 9 protozoa (Entamoeba coli, Entamoeba histolytica/dispar, Entamoeba hartmanni, Endolimax nana, Iodamoeba butschlii, Chilomastix mesnili, Giardia sp., Balantidium coli, and Blastocystis sp.), 13 nematodes (Oesophagostomum sp., Ancylostoma sp., Anatrichosoma sp., Capillariidae Gen. sp. 1, Capillariidae Gen. sp. 2, Chitwoodspirura sp., Subulura sp., spirurids [cf Protospirura muricola], Ternidens sp., Strongyloides sp., Trichostrongylus sp., and Trichuris sp.), and 1 trematode (Dicrocoelium sp.). Diversity indices and parasite richness were high for all monkey taxa, but C. diana, C. petaurista, C. atys, and C. campbelli exhibited a greater diversity of parasite species and a more equitable distribution. The parasitological data reported are the first available for these cercopithecid species within Taï National Park.

  11. Organization of the auditory brainstem in a lizard, Gekko gecko. I. Auditory nerve, cochlear nuclei, and superior olivary nuclei

    DEFF Research Database (Denmark)

    Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.

    2012-01-01

    We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo...... of auditory connections in lizards and archosaurs but also different processing of low- and high-frequency information in the brainstem. J. Comp. Neurol. 520:1784-1799, 2012. (C) 2011 Wiley Periodicals, Inc

  12. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
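
    As a rough illustration of how such compensation magnitudes might be compared between groups, the sketch below simulates per-speaker mean F1 compensation values (controls vs. persons who stutter, with the PWS mean set about 47% lower, as reported) and runs an independent-samples test. It is not the authors' analysis, and all numbers are placeholders:

    ```python
    # Group comparison of simulated F1 compensation magnitudes (Hz).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def simulate_compensation(n_speakers, mean_comp_hz, sd_hz):
        """Per-speaker mean F1 change (Hz) opposing the perturbation (simulated)."""
        return rng.normal(mean_comp_hz, sd_hz, n_speakers)

    controls = simulate_compensation(18, mean_comp_hz=20.0, sd_hz=8.0)
    pws = simulate_compensation(21, mean_comp_hz=10.6, sd_hz=8.0)   # ~47% smaller on average

    t, p = stats.ttest_ind(pws, controls, equal_var=False)
    print(f"mean compensation: PWS {pws.mean():.1f} Hz, controls {controls.mean():.1f} Hz (p={p:.3f})")
    ```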

  13. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the presence of more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. These results suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices.

  14. Auditory excitation patterns : the significance of the pulsation threshold method for the measurement of auditory nonlinearity

    NARCIS (Netherlands)

    H. Verschuure (Hans)

    1978-01-01

    The auditory system is the totality of organs that translates an acoustical signal into the perception of a sound. An acoustic signal is a vibration; it is described by physical parameters. The perception of sound is the awareness of a signal being present and the attribution of certain qual

  15. Development of auditory localization accuracy and auditory spatial discrimination in children and adolescents.

    Science.gov (United States)

    Kühnle, S; Ludwig, A A; Meuret, S; Küttner, C; Witte, C; Scholbach, J; Fuchs, M; Rübsamen, R

    2013-01-01

    The present study investigated the development of two parameters of spatial acoustic perception in children and adolescents with normal hearing, aged 6-18 years. Auditory localization accuracy was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests, thereby separately addressing auditory processing based on interaural time and intensity differences. The setup consisted of 47 loudspeakers mounted in the frontal azimuthal hemifield, ranging from 90° left to 90° right (-90°, +90°). Target signals were presented from 8 loudspeaker positions in the left and right hemifields (±4°, ±30°, ±60° and ±90°). Localization accuracy and spatial discrimination acuity showed different developmental courses. Localization accuracy remained stable from the age of 6 onwards. In contrast, MAA thresholds and interindividual variability of spatial discrimination decreased significantly with increasing age. Across all age groups, localization was most accurate and MAA thresholds were lower for frontal than for lateral sound sources, and for low-frequency compared to high-frequency noise bursts. The study also shows better performance in spatial hearing based on interaural time differences rather than on intensity differences throughout development. These findings confirm that specific aspects of central auditory processing show continuous development during childhood up to adolescence.
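
    The two outcome measures described above can be illustrated with a small sketch: mean absolute localization error over simulated responses, and an MAA-like threshold read off a simulated psychometric function at 75% correct. All numbers are placeholders, not the study's data:

    ```python
    # Localization error and a simple interpolated MAA threshold (simulated data).
    import numpy as np

    rng = np.random.default_rng(6)

    # Localization accuracy: targets at the 8 loudspeaker azimuths, noisy reports
    targets = np.array([-90, -60, -30, -4, 4, 30, 60, 90], dtype=float)
    reports = np.repeat(targets, 10) + rng.normal(0, 8.0, targets.size * 10)
    loc_error = np.mean(np.abs(reports - np.repeat(targets, 10)))

    # MAA: proportion correct left/right discrimination vs. angular separation;
    # threshold taken as the separation reaching 75% correct (linear interpolation)
    separations = np.array([1, 2, 4, 8, 16], dtype=float)
    p_correct = np.array([0.52, 0.61, 0.74, 0.90, 0.97])
    maa = np.interp(0.75, p_correct, separations)

    print(f"mean absolute localization error: {loc_error:.1f} deg; MAA ~ {maa:.1f} deg")
    ```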

  16. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making.

  17. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.

  18. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six infrared camera tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantages of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  19. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six infrared camera tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantages of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  20. Multiprofessional committee on auditory health: COMUSA.

    Science.gov (United States)

    Lewis, Doris Ruthy; Marone, Silvio Antonio Monteiro; Mendes, Beatriz C A; Cruz, Oswaldo Laercio Mendonça; Nóbrega, Manoel de

    2010-01-01

    Created in 2007, COMUSA is a multiprofessional committee comprising speech therapy, otology, otorhinolaryngology and pediatrics, with the aim of discussing and endorsing auditory health actions for neonates, infants, preschool and school-age children, adolescents, adults and elderly persons. COMUSA includes representatives of the Brazilian Audiology Academy (Academia Brasileira de Audiologia or ABA), the Brazilian Otorhinolaryngology and Cervicofacial Surgery Association (Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico Facial or ABORL), the Brazilian Phonoaudiology Society (Sociedade Brasileira de Fonoaudiologia or SBFa), the Brazilian Otology Society (Sociedade Brasileira de Otologia or SBO), and the Brazilian Pediatrics Society (Sociedade Brasileira de Pediatria or SBP).

  1. Musical and auditory hallucinations: A spectrum.

    Science.gov (United States)

    E Fischer, Corinne; Marchie, Anthony; Norris, Mireille

    2004-02-01

    Musical hallucinosis is a rare and poorly understood clinical phenomenon. While an association appears to exist between this phenomenon and organic brain pathology, aging, and sensory impairment, the precise association remains unclear. The authors present two cases of musical hallucinosis, both in elderly patients with mild-to-moderate cognitive impairment and mild-to-moderate hearing loss, who subsequently developed auditory hallucinations and, in one case, command hallucinations. The literature in reference to musical hallucinosis will be reviewed and a theory relating to the development of musical hallucinations will be proposed.

  2. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000....... PATIENTS: Ten women and 10 men with previously untreated primary cancer. Median age at diagnosis was 67 years (range, 31-87 years). Survival data included 18 patients with at least 2 years of follow-up or recurrence. INTERVENTION: Local canal resection or partial temporal bone resection. MAIN OUTCOME...

  3. CAVERNOUS HEMANGIOMA OF THE INTERNAL AUDITORY CANAL

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Hekmatara

    1993-06-01

    Cavernous hemangioma is a rare benign tumor of the internal auditory canal (IAC), of which fourteen cases have been reported so far. Tinnitus and progressive sensorineural hearing loss (SNHL) are the chief complaints of the patients. Audiological studies and radiological imaging, including plain films, CT scan, and magnetic resonance imaging (MRI), are helpful in diagnosis. The treatment of choice is surgery via an elective transmastoid translabyrinthine approach; if the tumor is very large, the method of choice is the retrosigmoid approach.

  4. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM.

    Science.gov (United States)

    Irvine, Dexter R F; Fallon, James B; Kamke, Marc R

    2006-04-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period.

  5. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM

    Science.gov (United States)

    Irvine, Dexter R. F.; Fallon, James B.; Kamke, Marc R.

    2007-01-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period. PMID:17572797

  6. Comparison of auditory hallucinations across different disorders and syndromes

    NARCIS (Netherlands)

    Sommer, Iris E. C.; Koops, Sanne; Blom, Jan Dirk

    2012-01-01

    Auditory hallucinations can be experienced in the context of many different disorders and syndromes. The differential diagnosis basically rests on the presence or absence of accompanying symptoms. In terms of clinical relevance, the most important distinction to be made is between auditory hallucina

  7. Development of a central auditory test battery for adults.

    NARCIS (Netherlands)

    Neijenhuis, C.A.M.; Stollman, M.H.P.; Snik, A.F.M.; Broek, P. van den

    2001-01-01

    There is little standardized test material in Dutch to document central auditory processing disorders (CAPDs). Therefore, a new central auditory test battery was composed and standardized for use with adult populations and older children. The test battery comprised seven tests (words in noise, filte

  8. Deactivation of the Parahippocampal Gyrus Preceding Auditory Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    Diederen, Kelly M. J.; Neggers, Sebastiaan F. W.; Daalman, Kirstin; Blom, Jan Dirk; Goekoop, Rutger; Kahn, Rene S.; Sommer, Iris E. C.

    2010-01-01

    Objective: Activation in a network of language-related regions has been reported during auditory verbal hallucinations. It remains unclear, however, how this activation is triggered. Identifying brain regions that show significant signal changes preceding auditory hallucinations might reveal the ori

  9. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  10. Auditory Processing Theories of Language Disorders: Past, Present, and Future

    Science.gov (United States)

    Miller, Carol A.

    2011-01-01

    Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…

  11. Source reliability in auditory health persuasion : Its antecedents and consequences

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie

    2015-01-01

    Persuasive health messages can be presented through an auditory channel, thereby enhancing the salience of the source, making it fundamentally different from written or pictorial information. We focused on the determinants of perceived source reliability in auditory health persuasion by investigating …

  12. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development [1-4] as well as mechanisms for temporal coding in the central nervous system [5-7].

  13. Strategy choice mediates the link between auditory processing and spelling.

    Science.gov (United States)

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  14. Entrainment to an auditory signal: Is attention involved?

    NARCIS (Netherlands)

    Kunert, R.; Jongman, S.R.

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear, however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic …

  15. Cortical Auditory Evoked Potentials in Unsuccessful Cochlear Implant Users

    Science.gov (United States)

    Munivrana, Boska; Mildner, Vesna

    2013-01-01

    In some cochlear implant users, success is not achieved in spite of optimal clinical factors (including age at implantation, duration of rehabilitation and post-implant hearing level), which may be attributed to disorders at higher levels of the auditory pathway. We used cortical auditory evoked potentials to investigate the ability to perceive…

  16. Auditory signal design for automatic number plate recognition system

    NARCIS (Netherlands)

    Heydra, C.G.; Jansen, R.J.; Van Egmond, R.

    2014-01-01

    This paper focuses on the design of an auditory signal for the Automatic Number Plate Recognition system of the Dutch national police. The auditory signal is designed to alert police officers of suspicious cars in their proximity, communicating the priority level and location of the suspicious car and taking …

  17. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Dau, Torsten; Harte, James;

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...
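
    The convolution step mentioned above can be sketched in a few lines. The code below is only a schematic illustration, not the published model: the summed auditory-nerve discharge rate is replaced by a synthetic placeholder, the unitary response by a damped sinusoid, and all names (summed_rate, ur) are hypothetical.

        import numpy as np

        fs = 100_000                     # sampling rate (Hz), assumed
        t = np.arange(0, 0.02, 1 / fs)   # 20-ms analysis window

        # Placeholder for the summed instantaneous discharge rate of the
        # auditory-nerve population (spikes/s); in the real model this comes
        # from a nonlinear auditory-nerve front end driven by the stimulus.
        summed_rate = 500 * np.exp(-((t - 0.004) / 0.001) ** 2)

        # Hypothetical unitary response: the far-field waveform contributed
        # by a single discharge, modeled here as a damped sinusoid.
        ur = np.exp(-t / 0.0005) * np.sin(2 * np.pi * 1000 * t)

        # Simulated ABR: discharge rate convolved with the unitary response.
        abr = np.convolve(summed_rate, ur)[: len(t)] / fs

        print(f"peak latency ~ {t[np.argmax(abr)] * 1000:.2f} ms")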

  18. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting state electroencephalographic gamma band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). CONCLUSION: Auditory phantom percepts thus show similar sound level dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.
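
    The reported association can be illustrated, in highly simplified form, by estimating per-subject gamma-band power with a Welch periodogram and correlating it with the VAS scores. This is only a sensor-level sketch with synthetic data (the study itself used source analysis); the variables eeg and vas and the 30-45 Hz band limits are assumptions.

        import numpy as np
        from scipy.signal import welch
        from scipy.stats import pearsonr

        fs = 500                                           # EEG sampling rate (Hz), assumed
        rng = np.random.default_rng(0)

        # Hypothetical data: one resting-state trace and one Visual Analogue
        # Scale loudness score per patient.
        n_patients = 15
        eeg = rng.standard_normal((n_patients, 60 * fs))   # 60 s per patient
        vas = rng.uniform(0, 10, n_patients)               # VAS loudness (0-10)

        def gamma_power(x, fs, band=(30.0, 45.0)):
            """Mean power spectral density in the gamma band."""
            f, pxx = welch(x, fs=fs, nperseg=2 * fs)
            mask = (f >= band[0]) & (f <= band[1])
            return pxx[mask].mean()

        power = np.array([gamma_power(x, fs) for x in eeg])
        r, p = pearsonr(power, vas)
        print(f"r = {r:.2f}, p = {p:.3f}")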

  19. Functional outcome of auditory implants in hearing loss.

    Science.gov (United States)

    Di Girolamo, S; Saccoccio, A; Giacomini, P G; Ottaviani, F

    2007-01-01

    The auditory implant provides a new mechanism for hearing when a hearing aid is not enough. It is the only medical technology able to functionally restore a human sense, i.e., hearing. The auditory implant is very different from a hearing aid. Hearing aids amplify sound. Auditory implants compensate for damaged or non-working parts of the inner ear because they can directly stimulate the acoustic nerve. There are two principal types of auditory implant: the cochlear implant and the auditory brainstem implant. They have common basic characteristics, but different applications. A cochlear implant attempts to replace a function lost by the cochlea, usually due to an absence of functioning hair cells; the auditory brainstem implant (ABI) is a modification of the cochlear implant, in which the electrode array is placed directly into the brain when the acoustic nerve is no longer able to carry the auditory signal. Different types of deaf or severely hearing-impaired patients choose auditory implants. Both children and adults can be candidates for implants. The best age for implantation is still being debated, but most children who receive implants are between 2 and 6 years old. Earlier implantation seems to yield better outcomes thanks to neural plasticity. The decision to receive an implant should involve a discussion with many medical specialists and an experienced surgeon.

  20. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    Science.gov (United States)

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  1. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background: We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation) …

  2. Auditory Dysfunction and Its Communicative Impact in the Classroom.

    Science.gov (United States)

    Friedrich, Brad W.

    1982-01-01

    The origins and nature of auditory dysfunction in school age children and the role of the audiologist in the evaluation of the learning disabled child are reviewed. Specific structures and mechanisms responsible for the reception and perception of auditory signals are specified. (Author/SEW)

  3. Auditory perceptual simulation: Simulating speech rates or accents?

    Science.gov (United States)

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects.

  4. Use of auditory learning to manage listening problems in children.

    Science.gov (United States)

    Moore, David R; Halliday, Lorna F; Amitay, Sygal

    2009-02-12

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research.

  5. Auditory Backward Masking Deficits in Children with Reading Disabilities

    Science.gov (United States)

    Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.

    2005-01-01

    Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…

  6. A Pilot Study of Auditory Integration Training in Autism.

    Science.gov (United States)

    Rimland, Bernard; Edelson, Stephen M.

    1995-01-01

    The effectiveness of Auditory Integration Training (AIT) in 8 autistic individuals (ages 4-21) was evaluated using repeated multiple criteria assessment over a 3-month period. Compared to matched controls, subjects' scores improved on the Aberrant Behavior Checklist and Fisher's Auditory Problems Checklist. AIT did not decrease sound sensitivity.…

  7. Quantification of the auditory startle reflex in children

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; van der Meer, Johan N.; Koelman, Johannes H. T. M.; Boeree, Thijs; Bour, Lo; Tijssen, Marina A. J.

    2009-01-01

    Objective: To find an adequate tool to assess the auditory startle reflex (ASR) in children. Methods: We investigated the effect of stimulus repetition, gender and age on several quantifications of the ASR. ASRs were elicited by eight consecutive auditory stimuli in 27 healthy children. Electromyography …

  8. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
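
    The auralization procedure described above (convolving an anechoic fragment with measured binaural impulse responses) can be sketched as follows. The file names are placeholders, the soundfile package is assumed to be available for WAV input/output, and the code assumes a mono anechoic recording and a two-channel impulse response sharing one sampling rate.

        import numpy as np
        import soundfile as sf                    # assumed available for WAV I/O
        from scipy.signal import fftconvolve

        # Placeholder files: a mono anechoic orchestral fragment and a
        # two-channel (left/right ear) binaural room impulse response.
        anechoic, fs = sf.read("anechoic_fragment.wav")
        brir, fs_ir = sf.read("brir_position_A.wav")
        assert fs == fs_ir, "stimulus and impulse response must share one sampling rate"

        # Convolve the dry signal with each ear's impulse response.
        left = fftconvolve(anechoic, brir[:, 0])
        right = fftconvolve(anechoic, brir[:, 1])

        binaural = np.stack([left, right], axis=1)
        binaural /= np.max(np.abs(binaural))      # normalize to avoid clipping
        sf.write("auralized_position_A.wav", binaural, fs)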

  9. Linking topography to tonotopy in the mouse auditory thalamocortical circuit

    DEFF Research Database (Denmark)

    Hackett, Troy A; Rinaldi Barkat, Tania; O'Brien, Barbara M J;

    2011-01-01

    The mouse sensory neocortex is reported to lack several hallmark features of topographic organization such as ocular dominance and orientation columns in primary visual cortex or fine-scale tonotopy in primary auditory cortex (AI). Here, we re-examined the question of auditory functional topography...

  10. Perceptual Load Influences Auditory Space Perception in the Ventriloquist Aftereffect

    Science.gov (United States)

    Eramudugolla, Ranmalee; Kamke, Marc. R.; Soto-Faraco, Salvador; Mattingley, Jason B.

    2011-01-01

    A period of exposure to trains of simultaneous but spatially offset auditory and visual stimuli can induce a temporary shift in the perception of sound location. This phenomenon, known as the "ventriloquist aftereffect", reflects a realignment of auditory and visual spatial representations such that they approach perceptual alignment despite their…

  11. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA, 1982, 416-425]. When a "classical" gammatone filterbank was applied within this spectrum-based model, the model largely underestimated human performance at high signal frequencies. However, this limitation could be resolved by employing an auditory filterbank with narrower filters. This novel …
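
    The point about filter bandwidth can be illustrated with a minimal gammatone sketch: the impulse response below uses the standard Glasberg and Moore ERB approximation, and the bw_scale parameter simply narrows the filter relative to that standard value. The narrowing factor of 0.5 is purely illustrative and is not the value derived in the study.

        import numpy as np

        def erb(fc):
            """Equivalent rectangular bandwidth (Hz) of the auditory filter
            at centre frequency fc (Glasberg & Moore, 1990)."""
            return 24.7 + fc / 9.265

        def gammatone_ir(fc, fs, duration=0.05, order=4, bw_scale=1.0):
            """Gammatone impulse response; bw_scale < 1 gives a filter
            narrower than the standard ERB-based bandwidth."""
            t = np.arange(0, duration, 1 / fs)
            b = 1.019 * erb(fc) * bw_scale     # bandwidth parameter
            g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
            return g / np.max(np.abs(g))

        fs = 44_100
        standard = gammatone_ir(4000, fs)               # standard ERB bandwidth
        narrow = gammatone_ir(4000, fs, bw_scale=0.5)   # illustrative narrower filter
        print(len(standard), len(narrow))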

  12. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  13. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p-value < 0.05). No changes were noted for the control group (p-value > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  14. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    Science.gov (United States)

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  15. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10-50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The middle …

  16. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  17. Auditory discrimination of force of impact.

    Science.gov (United States)

    Lutfi, Robert A; Liu, Ching-Ju; Stoelinga, Christophe N J

    2011-04-01

    The auditory discrimination of force of impact was measured for three groups of listeners using sounds synthesized according to first-order equations of motion for the homogenous, isotropic bar [Morse and Ingard (1968). Theoretical Acoustics pp. 175-191]. The three groups were professional percussionists, nonmusicians, and individuals recruited from the general population without regard to musical background. In the two-interval, forced-choice procedure, listeners chose the sound corresponding to the greater force of impact as the length of the bar varied from one presentation to the next. From the equations of motion, a maximum-likelihood test for the task was determined to be of the form Δlog A + αΔ log f > 0, where A and f are the amplitude and frequency of any one partial and α = 0.5. Relative decision weights on Δ log f were obtained from the trial-by-trial responses of listeners and compared to α. Percussionists generally outperformed the other groups; however, the obtained decision weights of all listeners deviated significantly from α and showed variability within groups far in excess of the variability associated with replication. Providing correct feedback after each trial had little effect on the decision weights. The variability in these measures was comparable to that seen in studies involving the auditory discrimination of other source attributes.
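
    The decision rule quoted above lends itself to a compact sketch: for the two-interval task, the ideal observer computes Δlog A + α Δlog f from the amplitude and frequency of a partial in each interval and reports the first interval as the greater impact force whenever the statistic is positive. The weight α = 0.5 comes from the abstract; the function name and the trial values below are illustrative only.

        import math

        ALPHA = 0.5  # weight on the frequency cue, from the equations of motion

        def choose_interval(a1, f1, a2, f2, alpha=ALPHA):
            """Ideal-observer choice for the two-interval force-of-impact task.

            a1, f1: amplitude and frequency of a partial in interval 1
            a2, f2: the same partial's amplitude and frequency in interval 2
            Returns 1 if the statistic favours interval 1, otherwise 2.
            """
            stat = (math.log(a1) - math.log(a2)) + alpha * (math.log(f1) - math.log(f2))
            return 1 if stat > 0 else 2

        # Illustrative trial: interval 1 slightly louder, interval 2 slightly higher in pitch.
        print(choose_interval(a1=1.10, f1=440.0, a2=1.00, f2=460.0))   # -> 1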

  18. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such effect may not be limited to negative emotions but reflect a general depletion of attentional resources by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of the exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction.

  19. Intrinsic modulators of auditory thalamocortical transmission.

    Science.gov (United States)

    Lee, Charles C; Sherman, S Murray

    2012-05-01

    Neurons in layer 4 of the primary auditory cortex receive convergent glutamatergic inputs from thalamic and cortical projections that activate different groups of postsynaptic glutamate receptors. Of particular interest in layer 4 neurons are the Group II metabotropic glutamate receptors (mGluRs), which hyperpolarize neurons postsynaptically via the downstream opening of GIRK channels. This pronounced effect on membrane conductance could influence the neuronal processing of synaptic inputs, such as those from the thalamus, essentially modulating information flow through the thalamocortical pathway. To examine how Group II mGluRs affect thalamocortical transmission, we used an in vitro slice preparation of the auditory thalamocortical pathways in the mouse to examine synaptic transmission under conditions where Group II mGluRs were activated. We found that both pre- and post-synaptic Group II mGluRs are involved in the attenuation of thalamocortical EPSP/Cs. Thus, thalamocortical synaptic transmission is suppressed via the presynaptic reduction of thalamocortical neurotransmitter release and the postsynaptic inhibition of the layer 4 thalamorecipient neurons. This could enable the thalamocortical pathway to autoregulate transmission, via either a gating or gain control mechanism, or both.

  20. Auditory evoked potentials in postconcussive syndrome.

    Science.gov (United States)

    Drake, M E; Weate, S J; Newell, S A

    1996-12-01

    The neuropsychiatric sequelae of minor head trauma have been the source of controversy. Most clinical and imaging studies have shown no alteration after concussion, but neuropsychological and neuropathological abnormalities have been reported. Some changes in neurophysiologic diagnostic tests have been described in postconcussive syndrome. We recorded middle latency auditory evoked potentials (MLR) and slow vertex responses (SVR) in 20 individuals with prolonged cognitive difficulties, behavior changes, dizziness, and headache after concussion. The MLR used alternating polarity clicks presented monaurally at 70 dB SL at 4 per second, with 40 dB contralateral masking. Five hundred responses were recorded and replicated from Cz-A1 and Cz-A2, with 50 ms analysis time and 20-1000 Hz filter band pass. SVRs were recorded with the same montage, but used rarefaction clicks, 0.5 Hz stimulus rate, 500 ms analysis time, and 1-50 Hz filter band pass. Na and Pa MLR components were reduced in amplitude in postconcussion patients. Pa latency was significantly longer in patients than in controls. SVR amplitudes were larger in concussed individuals, but differences in latency and amplitude were not significant. These changes may reflect posttraumatic disturbance in presumed subcortical MLR generators, or in frontal or temporal cortical structures that modulate them. Middle and long-latency auditory evoked potentials may be helpful in the evaluation of postconcussive neuropsychiatric symptoms.
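
    The averaging procedure described above (500 sweeps at 4 clicks per second, a 50-ms analysis window and a 20-1000 Hz band pass) is ordinary evoked-potential epoch averaging, sketched below with synthetic data. The sampling rate, the eeg trace and the click-onset indices are all assumptions; only the protocol parameters are taken from the text.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 10_000                    # amplifier sampling rate (Hz), assumed
        n_sweeps = 500                 # sweeps per replication, as in the protocol
        win = int(0.05 * fs)           # 50-ms analysis window

        rng = np.random.default_rng(1)
        eeg = rng.standard_normal(fs * 200)             # placeholder continuous EEG
        onsets = np.arange(n_sweeps) * int(0.25 * fs)   # click onsets, 4 per second

        # Band-pass 20-1000 Hz, as described for the MLR recordings.
        b, a = butter(2, [20, 1000], btype="bandpass", fs=fs)
        eeg_f = filtfilt(b, a, eeg)

        # Epoch around each click and average to estimate the middle-latency response.
        epochs = np.stack([eeg_f[s : s + win] for s in onsets])
        mlr = epochs.mean(axis=0)
        print(mlr.shape)               # (500,) samples = 50 ms at 10 kHz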

  1. Auditory verbal hallucinations: neuroimaging and treatment.

    Science.gov (United States)

    Bohlken, M M; Hugdahl, K; Sommer, I E C

    2017-01-01

    Auditory verbal hallucinations (AVH) are a frequently occurring phenomenon in the general population and are considered a psychotic symptom when presented in the context of a psychiatric disorder. Neuroimaging literature has shown that AVH are subserved by a variety of alterations in brain structure and function, which primarily concentrate around brain regions associated with the processing of auditory verbal stimuli and with executive control functions. However, the direction of association between AVH and brain function remains equivocal in certain research areas and needs to be carefully reviewed and interpreted. When AVH have significant impact on daily functioning, several efficacious treatments can be attempted such as antipsychotic medication, brain stimulation and cognitive-behavioural therapy. Interestingly, the neural correlates of these treatments largely overlap with brain regions involved in AVH. This suggests that the efficacy of treatment corresponds to a normalization of AVH-related brain activity. In this selected review, we give a compact yet comprehensive overview of the structural and functional neuroimaging literature on AVH, with a special focus on the neural correlates of efficacious treatment.

  2. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.

  3. Talker-specific auditory imagery during reading

    Science.gov (United States)

    Nygaard, Lynne C.; Duke, Jessica; Kawar, Kathleen; Queen, Jennifer S.

    2004-05-01

    The present experiment was designed to determine if auditory imagery during reading includes talker-specific characteristics such as speaking rate. Following Kosslyn and Matt (1977), participants were familiarized with two talkers during a brief prerecorded conversation. One talker spoke at a fast speaking rate and one spoke at a slow speaking rate. During familiarization, participants were taught to identify each talker by name. At test, participants were asked to read two passages and told that either the slow or fast talker wrote each passage. In one condition, participants were asked to read each passage aloud, and in a second condition, they were asked to read each passage silently. Participants pressed a key when they had completed reading the passage, and reading times were collected. Reading times were significantly slower when participants thought they were reading a passage written by the slow talker than when reading a passage written by the fast talker. However, the effects of speaking rate were only present in the reading-aloud condition. Additional experiments were conducted to investigate the role of attention to talker's voice during familiarization. These results suggest that readers may engage in auditory imagery while reading that preserves perceptual details of an author's voice.

  4. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  5. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  6. The Study of Frequency Self Care Strategies against Auditory Hallucinations

    Directory of Open Access Journals (Sweden)

    Mahin Nadem

    2012-03-01

    Full Text Available Background: In schizophrenic clients, self-care strategies against auditory hallucinations can decrease the disturbances resulting from hallucinations. This study aimed to assess the frequency of self-care strategies against auditory hallucinations in patients with paranoid schizophrenia hospitalized in Shafa Hospital. Materials and Method: This was a descriptive study of 201 patients with paranoid schizophrenia hospitalized in a psychiatric unit in Rasht, selected by convenience sampling. The gathered data consisted of two parts: demographic characteristics and a self-report questionnaire of 38 items on self-care strategies. Results: There were statistically significant relationships between demographic variables and the knowledge and self-care strategies used against auditory hallucinations: sex with the physical domain (p0.07), marital status with the cognitive domain (p>0.07) and living status with the behavioural domain (p>0.01). Of the reported auditory hallucinations, 53.2% were command hallucinations; furthermore, the most effective self-care strategies against auditory hallucinations were from the physical domain, with substance abuse (82.1%) being the most effective strategy in this domain. Conclusion: Clients with paranoid schizophrenia used strategies from the physical domain more than other domains against auditory hallucinations, and this result highlights their need for appropriate nursing intervention. Instruction and guidance about selecting effective self-care strategies against auditory hallucinations …

  7. Translation and adaptation of functional auditory performance indicators (FAPI)

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention since the expectation and goal of therapy has expanded to language development and subsequent language learning. Many clinical tests were developed for evaluation of speech sound perception in young children in response to the need for accurate assessment of hearing skills that developed from the use of individual hearing aids or cochlear implants. These tests also allow the evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a child's functional auditory skills profile, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories, including sound awareness, meaningful sound, auditory feedback, sound source localizing, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing profile performance, determine the target for increasing the hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation was done following the steps of translation and back translation, and reproducibility was evaluated. Four translated versions (two originals and two back-translated) were compared, and revisions were done to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies about the application of the translated FAPI are necessary to make the test practicable in Brazilian clinical use.

  8. Auditory-perceptual learning improves speech motor adaptation in children.

    Science.gov (United States)

    Shiller, Douglas M; Rochon, Marie-Lyne

    2014-08-01

    Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.

  9. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing.

  10. A corollary discharge maintains auditory sensitivity during sound production.

    Science.gov (United States)

    Poulet, James F A; Hedwig, Berthold

    2002-08-22

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  11. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  12. Cochlear Responses and Auditory Brainstem Response Functions in Adults with Auditory Neuropathy/Dys-Synchrony and Individuals with Normal Hearing

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2007-06-01

    Full Text Available Background and Aim: Physiologic measures of cochlear and auditory nerve function may be of assistance in distinguishing between hearing disorders due primarily to auditory nerve impairment and those due primarily to cochlear hair cell dysfunction. The goal of the present study was to measure cochlear responses (otoacoustic emissions and cochlear microphonics) and auditory brainstem responses in adults with auditory neuropathy/dys-synchrony and in subjects with normal hearing. Materials and Methods: Patients were 16 adults (32 ears) aged 14-30 years with auditory neuropathy/dys-synchrony and 16 individuals aged 16-30 years, from both sexes. The results of transient otoacoustic emissions, cochlear microphonics and auditory brainstem response measures were compared in both groups, and the effects of age, sex, ear and degree of hearing loss were studied. Results: The pure-tone average was 48.1 dB HL in the auditory neuropathy/dys-synchrony group, and low-tone loss and flat audiograms were more frequent than other audiogram shapes. Transient otoacoustic emissions were present in all auditory neuropathy/dys-synchrony patients except two cases, and their average was similar in both groups. The latency and amplitude of the largest reversed cochlear microphonic response were significantly higher in auditory neuropathy/dys-synchrony patients than in controls. The correlation between cochlear microphonic amplitude and degree of hearing loss was not significant, and age had a significant effect on some cochlear microphonic measures. Auditory brainstem responses were absent in auditory neuropathy/dys-synchrony patients even at low stimulus rates. Conclusion: In adults whose speech understanding is worse than predicted from the degree of hearing loss and who are suspected of auditory neuropathy/dys-synchrony, low-tone loss and flat audiograms are more frequent. Usually the auditory brainstem response is absent in …

  13. Diffusion tensor imaging and MR morphometry of the central auditory pathway and auditory cortex in aging.

    Science.gov (United States)

    Profant, O; Škoch, A; Balogová, Z; Tintěra, J; Hlinka, J; Syka, J

    2014-02-28

    Age-related hearing loss (presbycusis) is caused mainly by the hypofunction of the inner ear, but recent findings point also toward a central component of presbycusis. We used MR morphometry and diffusion tensor imaging (DTI) with a 3T MR system with the aim of studying the state of the central auditory system in a group of elderly subjects (>65 years) with mild presbycusis, in a group of elderly subjects with expressed presbycusis and in young controls. Cortical reconstruction, volumetric segmentation and auditory pathway tractography were performed. Three parameters were evaluated by morphometry: the volume of the gray matter, the surface area of the gyrus and the thickness of the cortex. In all experimental groups the surface area and gray matter volume were larger on the left side in Heschl's gyrus and planum temporale and slightly larger in the gyrus frontalis superior, whereas they were larger on the right side in the primary visual cortex. Almost all of the measured parameters were significantly smaller in the elderly subjects in Heschl's gyrus, planum temporale and gyrus frontalis superior. Aging did not change the side asymmetry (laterality) of the gyri. In the central part of the auditory pathway above the inferior colliculus, a trend toward an effect of aging was present in the axial diffusion vector (L1) of DTI, with increased values observed in elderly subjects. A trend toward a decrease of L1 on the left side, which was more pronounced in the elderly groups, was observed. The effect of hearing loss was present in subjects with expressed presbycusis as a trend toward an increase of the radial vectors (L2L3) in the white matter under Heschl's gyrus. These results suggest that in addition to peripheral changes, changes in the central part of the auditory system are also present in elderly subjects; however, the extent of hearing loss does not play a significant role in the central changes.
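
    For readers unfamiliar with the DTI notation above, L1 denotes the axial (principal) diffusivity and L2L3 the radial diffusivity of the diffusion tensor. A compact LaTeX statement of the standard textbook definitions (not reproduced from the paper itself) is:

        % axial diffusivity, radial diffusivity and mean diffusivity
        % in terms of the diffusion-tensor eigenvalues \lambda_1 \ge \lambda_2 \ge \lambda_3
        \lambda_{\parallel} = \lambda_1, \qquad
        \lambda_{\perp}     = \frac{\lambda_2 + \lambda_3}{2}, \qquad
        \mathrm{MD}         = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}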

  14. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials

    OpenAIRE

    Calderón-Garcidueñas, Lilian; D’Angiulli, Amedeo; Kulesza, Randy J.; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M.; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-01-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3 ± 8.5 months from a highly polluted city (n=34) versus a low-pollution city (n=17). The brainstems of nine children with accidental deaths were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p …

  15. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids …

  16. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    Full Text Available People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performance when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between the visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.

  17. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

    Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively; we studied this here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  18. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
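
    The phase tracking reported here is commonly quantified with a measure such as inter-trial phase coherence. The sketch below computes that generic measure on simulated theta-band trials; it illustrates the concept only, is not the authors' MEG analysis, and all signal parameters are assumptions.

```python
# Hedged sketch of band-limited phase tracking: inter-trial phase coherence
# (ITC) of simulated trials after theta-band filtering. Parameters are
# illustrative assumptions, not the study's stimulus or analysis settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs, n_trials, dur = 500, 40, 2.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

# Trials share a 5-Hz (theta) component with a fixed phase plus noise,
# so their phases should align and ITC should be high in the theta band.
trials = np.sin(2 * np.pi * 5 * t) + 0.8 * rng.standard_normal((n_trials, t.size))

sos = butter(4, [4, 8], btype="band", fs=fs, output="sos")
phases = np.angle(hilbert(sosfiltfilt(sos, trials, axis=1), axis=1))

itc = np.abs(np.exp(1j * phases).mean(axis=0))   # 1 = perfect phase locking
print("mean theta-band ITC:", itc.mean().round(2))
```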

  19. Tiapride for the treatment of auditory hallucinations in schizophrenia

    Directory of Open Access Journals (Sweden)

    Sagar Karia

    2013-01-01

    Full Text Available Hallucinations are considered core symptoms of psychosis by both the International Classification of Diseases-10 (ICD-10) and the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR). The most common type of hallucination in patients with schizophrenia is auditory, followed by visual hallucinations. A few patients with schizophrenia have persistent auditory hallucinations despite improvement in all other features of the illness. Here, we report two cases where tiapride was useful as an add-on drug for treating persistent auditory hallucinations.

  20. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Full Text Available Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation, and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien;

    2016-01-01

    whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings...... decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between...

  2. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... environments (VAEs) with an array of loudspeakers. The LoRA system combines state-of-the-art acoustic room models with sound-field reproduction techniques. Limitations of these two techniques were taken into consideration together with the limitations of the human auditory system to localize sounds...

  3. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine...... the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using...... in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis...

  4. A computer model of auditory stream segregation.

    Science.gov (United States)

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
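
    The passage above describes the model at a level that a small sketch can make concrete: a bandpass filter bank with noisy outputs, leaky integration of channel excitation, and an attentional mechanism that selects the most active channel. The sketch below follows that outline only; the filter settings, noise level and leak time constant are illustrative assumptions, not the published parameters of the Beauvois and Meddis model.

```python
# Hedged, simplified sketch of a filter-bank + leaky-integration streaming
# model applied to a classic alternating A-B tone sequence.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                                    # sample rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
tone = lambda f, on: np.sin(2 * np.pi * f * t) * on
gate = (np.floor(t / 0.125) % 2 == 0)         # 125-ms A/B alternation
signal = tone(500, gate) + tone(1000, ~gate)

centres = [500, 1000]                         # one channel per tone frequency
rng = np.random.default_rng(0)
excitation = np.zeros((len(centres), len(t)))
leak = np.exp(-1 / (fs * 0.05))               # 50-ms leaky integrator (assumed)
for i, cf in enumerate(centres):
    sos = butter(2, [cf * 0.8, cf * 1.2], btype="band", fs=fs, output="sos")
    # "Noisy" channel output: rectified filter response plus internal noise.
    drive = np.abs(sosfilt(sos, signal)) + 0.05 * rng.random(len(t))
    # Leaky integration: excitation accumulates and dissipates over time.
    for n in range(1, len(t)):
        excitation[i, n] = leak * excitation[i, n - 1] + (1 - leak) * drive[n]

# Attentional mechanism: the channel with the greatest activity "wins".
dominant = excitation.argmax(axis=0)
print("proportion of time channel 0 dominates:", (dominant == 0).mean())
```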

  5. Hearing Restoration with Auditory Brainstem Implant

    Science.gov (United States)

    NAKATOMI, Hirofumi; MIYAWAKI, Satoru; KIN, Taichi; SAITO, Nobuhito

    2016-01-01

    Auditory brainstem implant (ABI) technology attempts to restore hearing in patients deafened by bilateral cochlear nerve injury through direct stimulation of the brainstem, but many aspects of the related mechanisms remain unknown. The unresolved issues can be grouped into three topics: which patients are the best candidates; which type of electrode should be used; and how to improve restored hearing. We evaluated our experience with 11 cases of ABI placement. We found that if at least seven of eleven electrodes of the MED-EL ABI are effectively placed in a patient with no deformation of the fourth ventricle, open set sentence recognition of approximately 20% and closed set word recognition of approximately 65% can be achieved with the ABI alone. Appropriate selection of patients for ABI placement can lead to good outcomes. Further investigation is required regarding patient selection criteria and methods of surgery for effective ABI placement. PMID:27464470

  6. Changes of brainstem auditory and somatosensory evoked potentials

    Institute of Scientific and Technical Information of China (English)

    Yang Jian

    2000-01-01

    Objective: To investigate the characteristics and clinical value of evoked potentials in the late infantile form of metachromatic leukodystrophy. Methods: Brainstem auditory and somatosensory evoked potentials were recorded in 6 patients and compared with the results of CT scans. Results: All of the 6 patients had abnormal BAEP and MNSEP results. The main abnormal BAEP parameters were latency prolongation of wave I and inter-peak latency prolongation of I-III and I-V. The abnormal features of the MNSEP were low amplitude and absence of wave N9 and inter-peak latency prolongation of N9-N13 and N13-N20, but no significant change of N20 amplitude. The results also revealed that the abnormal changes in the BAEP and MNSEP appeared earlier than those on CT. Conclusion: The detection of BAEP and MNSEP in the late infantile form of metachromatic leukodystrophy may reveal abnormalities of conductive function in the nervous system at an early stage and might be a useful diagnostic method.

  7. Discrimination of auditory stimuli during isoflurane anesthesia.

    Science.gov (United States)

    Rojas, Manuel J; Navas, Jinna A; Greene, Stephen A; Rector, David M

    2008-10-01

    Deep isoflurane anesthesia initiates a burst suppression pattern in which high-amplitude bursts are preceded by periods of nearly silent electroencephalogram. The burst suppression ratio (BSR) is the percentage of suppression (silent electroencephalogram) during the burst suppression pattern and is one parameter used to assess anesthesia depth. We investigated cortical burst activity in rats in response to different auditory stimuli presented during the burst suppression state. We noted a rapid appearance of bursts and a significant decrease in the BSR during stimulation. The BSR changes were distinctive for the different stimuli applied, and the BSR decreased significantly more when stimulated with a voice familiar to the rat as compared with an unfamiliar voice. These results show that the cortex can show differential sensory responses during deep isoflurane anesthesia.
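
    For readers unfamiliar with the measure, the burst suppression ratio can be estimated as the fraction of the record spent in sufficiently long low-amplitude ("silent") stretches. The sketch below shows one simple thresholding approach; the amplitude threshold and minimum suppression duration are illustrative assumptions, not the criteria used in the study.

```python
# Hedged sketch: estimate a burst suppression ratio (BSR) as the percentage of
# time the EEG stays below an amplitude threshold for a minimum duration.
import numpy as np

def burst_suppression_ratio(eeg, fs, thresh_uv=5.0, min_suppr_s=0.5):
    """Return the percentage of the record classified as suppression."""
    quiet = np.abs(eeg) < thresh_uv            # candidate suppression samples
    min_len = int(min_suppr_s * fs)
    suppressed = np.zeros_like(quiet)
    run = 0
    for i, q in enumerate(quiet):
        run = run + 1 if q else 0
        if run >= min_len:                     # mark runs that last long enough
            suppressed[i - run + 1:i + 1] = True
    return 100.0 * suppressed.mean()

# Example: 10 s of toy EEG with a low-amplitude middle section.
fs = 250
eeg = np.random.randn(10 * fs) * 20.0
eeg[3 * fs:6 * fs] = np.random.randn(3 * fs) * 1.0
print(f"BSR = {burst_suppression_ratio(eeg, fs):.1f}%")
```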

  8. Low power adder based auditory filter architecture.

    Science.gov (United States)

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are powered by batteries and should have a long working life to avoid replacement at regular intervals. Hence, devices with low power consumption are required. Cochlear devices contain numerous filters, each responsible for a different frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics Model-Sim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to a TSMC 65 nm technological node. The standard ASIC design methodology was followed to carry out the power analysis. The proposed FIR filter architecture reduces leakage power by 15% and increases performance by 2.76%.
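
    A "multiplierless lookup table (LUT) based filter" usually refers to distributed arithmetic, in which precomputed sums of coefficients are read from a LUT and combined with shifts and adds only. The sketch below illustrates that general idea in Python for unsigned inputs; the coefficients, word length and structure are illustrative assumptions rather than the paper's Verilog architecture.

```python
# Hedged sketch of a multiplierless, LUT-based FIR filter (distributed
# arithmetic): the LUT stores every possible sum of coefficients selected by a
# bit pattern of the delayed inputs, so filtering needs only lookups, shifts
# and adds. Verified against a direct convolution for the same taps.
import numpy as np

coeffs = [3, -2, 5, 1]           # integer FIR taps (illustrative)
NBITS = 8                        # unsigned 8-bit input samples, for simplicity
K = len(coeffs)

# Precompute LUT: entry `addr` holds the sum of coefficients whose bit is set.
lut = [sum(c for k, c in enumerate(coeffs) if addr >> k & 1)
       for addr in range(1 << K)]

def fir_da(samples):
    """Filter unsigned NBITS-wide samples using lookups, shifts and adds only."""
    delay = [0] * K
    out = []
    for s in samples:
        delay = [s] + delay[:-1]               # tapped delay line
        acc = 0
        for b in range(NBITS):                 # process one bit-plane at a time
            addr = sum(((delay[k] >> b) & 1) << k for k in range(K))
            acc += lut[addr] << b              # shift-and-add accumulate
        out.append(acc)
    return out

x = [10, 0, 255, 128, 7]
ref = np.convolve(x, coeffs)[:len(x)]          # reference with multiplications
print(fir_da(x), list(ref))                    # the two outputs should match
```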

  9. Resting Heart Rate and Auditory Evoked Potential

    Directory of Open Access Journals (Sweden)

    Simone Fiuza Regaçone

    2015-01-01

    Full Text Available The objective of this study was to evaluate the association between resting heart rate (HR) and the components of the auditory event-related potentials (ERPs) at rest in women. We investigated 21 healthy female university students between 18 and 24 years old. We performed a complete audiological evaluation, measured heart rate for 10 minutes at rest (Polar RS800CX heart rate monitor) and performed ERP analysis (discrepancy in frequency and duration). There was a moderate negative correlation of the N1 and P3a components with resting HR and a strong positive correlation of the P2 and N2 components with resting HR. Larger ERP components were associated with higher resting HR.

  10. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    This work develops detailed acoustic engineering models that explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volume computed tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  11. Tonotopic organization of human auditory association cortex.

    Science.gov (United States)

    Cansino, S; Williamson, S J; Karron, D

    1994-11-07

    Neuromagnetic studies of responses in human auditory association cortex for tone burst stimuli provide evidence for a tonotopic organization. The magnetic source image for the 100 ms component evoked by the onset of a tone is qualitatively similar to that of primary cortex, with responses lying deeper beneath the scalp for progressively higher tone frequencies. However, the tonotopic sequence of association cortex in three subjects is found largely within the superior temporal sulcus, although in the right hemisphere of one subject some sources may be closer to the inferior temporal sulcus. The locus of responses for individual subjects suggests a progression across the cortical surface that is approximately proportional to the logarithm of the tone frequency, as observed previously for primary cortex, with the span of 10 mm for each decade in frequency being comparable for the two areas.
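
    As a small worked example of the reported logarithmic progression, the sketch below maps tone frequency to an approximate cortical offset using the quoted span of 10 mm per decade; the reference frequency is an arbitrary assumption.

```python
# Hedged sketch: logarithmic tonotopic progression, ~10 mm per decade of
# frequency, relative to an assumed 100-Hz reference point.
import math

MM_PER_DECADE = 10.0     # reported span per decade of frequency
F_REF = 100.0            # illustrative reference frequency (Hz), an assumption

def tonotopic_offset_mm(freq_hz):
    """Approximate cortical offset of the response source, in millimetres."""
    return MM_PER_DECADE * math.log10(freq_hz / F_REF)

for f in (100, 250, 500, 1000, 2000, 4000):
    print(f"{f:5d} Hz -> {tonotopic_offset_mm(f):4.1f} mm")
```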

  12. Genetics of auditory mechano-electrical transduction.

    Science.gov (United States)

    Michalski, Nicolas; Petit, Christine

    2015-01-01

    The hair bundles of cochlear hair cells play a central role in the auditory mechano-electrical transduction (MET) process. The identification of MET components and of associated molecular complexes by biochemical approaches is impeded by the very small number of hair cells within the cochlea. In contrast, human and mouse genetics have proven to be particularly powerful. The study of inherited forms of deafness led to the discovery of several essential proteins of the MET machinery, which are currently used as entry points to decipher the associated molecular networks. Notably, MET relies not only on the MET machinery but also on several elements ensuring the proper sound-induced oscillation of the hair bundle or the ionic environment necessary to drive the MET current. Here, we review the most significant advances in the molecular bases of the MET process that emerged from the genetics of hearing.

  13. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
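
    A minimal sketch of the central quantity in this work: the Shannon entropy of a predictive distribution over possible continuations. The toy distributions below are hand-made for illustration; the study itself derived its probabilities from a variable-order Markov model, which is not reproduced here.

```python
# Hedged sketch: Shannon entropy as a measure of predictive uncertainty over
# the next event, given a probability distribution from some sequence model.
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete distribution p (zero entries are ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

low_entropy_context = [0.85, 0.05, 0.05, 0.05]    # one continuation dominates
high_entropy_context = [0.25, 0.25, 0.25, 0.25]   # all continuations equally likely
print(shannon_entropy(low_entropy_context))       # ~0.85 bits: low uncertainty
print(shannon_entropy(high_entropy_context))      # 2.00 bits: high uncertainty
```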

  14. Effects of pitch on auditory number comparisons.

    Science.gov (United States)

    Campbell, Jamie I D; Scheepers, Florence

    2015-05-01

    Three experiments investigated interactions between auditory pitch and the numerical quantities represented by spoken English number words. In Experiment 1, participants heard a pair of sequential auditory numbers in the range zero to ten. They pressed a left-side or right-side key to indicate if the second number was lower or higher in numerical value. The vocal pitches of the two numbers either ascended or descended so that pitch change was congruent or incongruent with number change. The error rate was higher when pitch and number were incongruent relative to congruent trials. The distance effect on RT (i.e., slower responses for numerically near than far number pairs) occurred with pitch ascending but not descending. In Experiment 2, to determine if these effects depended on the left/right spatial mapping of responses, participants responded "yes" if the second number was higher and "no" if it was lower. Again, participants made more number comparison errors when number and pitch were incongruent, but there was no distance × pitch order effect. To pursue the latter, in Experiment 3, participants were tested with response buttons assigned left-smaller and right-larger ("normal" spatial mapping) or the reverse mapping. Participants who received normal mapping first presented a distance effect with pitch ascending but not descending as in Experiment 1, whereas participants who received reverse mapping first presented a distance effect with pitch descending but not ascending. We propose that the number and pitch dimensions of stimuli both activated spatial representations and that strategy shifts from quantity comparison to order processing were induced by spatial incongruities.

  15. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally...... homogenous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader “peripheral” filters or reduced compression...
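
    A rough sketch of what "time-averaged statistics measured from early auditory representations" can look like in practice: marginal moments of subband envelopes. The band edges, filter order and the white-noise "texture" below are illustrative assumptions, not the statistic set or auditory model used in this work.

```python
# Hedged sketch: time-averaged envelope statistics of a sound "texture" in a
# few bandpass channels (loosely in the spirit of texture-statistics models).
import numpy as np
from scipy.signal import butter, sosfilt, hilbert
from scipy.stats import skew

BANDS = ((100, 400), (400, 1600), (1600, 6400))    # assumed band edges (Hz)

def texture_stats(x, fs, bands=BANDS):
    stats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))      # subband envelope
        stats.append((env.mean(), env.std(), skew(env)))  # time-averaged moments
    return stats

fs = 16000
rain_like = np.random.randn(fs * 2)                 # toy stand-in for a texture
for (lo, hi), (m, s, sk) in zip(BANDS, texture_stats(rain_like, fs)):
    print(f"{lo}-{hi} Hz: mean={m:.3f} sd={s:.3f} skew={sk:.3f}")
```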

  16. Oscillatory Cortical Network Involved in Auditory Verbal Hallucinations in Schizophrenia

    NARCIS (Netherlands)

    van Lutterveld, Remko; Hillebrand, Arjan; Diederen, Kelly M. J.; Daalman, Kirstin; Kahn, Rene S.; Stam, Cornelis J.; Sommer, Iris E. C.

    2012-01-01

    Background: Auditory verbal hallucinations (AVH), a prominent symptom of schizophrenia, are often highly distressing for patients. Better understanding of the pathogenesis of hallucinations could increase therapeutic options. Magnetoencephalography (MEG) provides direct measures of neuronal activity

  17. Ion channel noise can explain firing correlation in auditory nerves.

    Science.gov (United States)

    Moezzi, Bahar; Iannella, Nicolangelo; McDonnell, Mark D

    2016-10-01

    Neural spike trains are commonly characterized as a Poisson point process. However, the Poisson assumption is a poor model for spiking in auditory nerve fibres because it is known that interspike intervals display positive correlation over long time scales and negative correlation over shorter time scales. We have therefore developed a biophysical model based on the well-known Meddis model of the peripheral auditory system, to produce simulated auditory nerve fibre spiking statistics that more closely match the firing correlations observed in empirical data. We achieve this by introducing biophysically realistic ion channel noise to an inner hair cell membrane potential model that includes fractal fast potassium channels and deterministic slow potassium channels. We succeed in producing simulated spike train statistics that match empirically observed firing correlations. Our model thus replicates macro-scale stochastic spiking statistics in the auditory nerve fibres due to modeling stochasticity at the micro-scale of potassium channels.
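
    As a generic starting point for the kind of channel noise the abstract describes, the sketch below simulates a population of independent two-state potassium channels whose summed open count fluctuates stochastically. This is not the fractal-channel or Meddis-based model of the paper; the rates, time step and channel count are illustrative assumptions.

```python
# Hedged sketch: conductance noise from a population of independent two-state
# (closed/open) channels simulated as Markov switches with fixed rates.
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-4                  # time step (s)
n_channels = 200
alpha, beta = 50.0, 100.0  # opening / closing rates (1/s), illustrative
p_open_step = alpha * dt   # per-step opening probability for a closed channel
p_close_step = beta * dt   # per-step closing probability for an open channel

open_state = np.zeros(n_channels, dtype=bool)
trace = []
for _ in range(int(0.5 / dt)):                 # 500 ms of simulated gating
    u = rng.random(n_channels)
    open_state = np.where(open_state, u >= p_close_step, u < p_open_step)
    trace.append(open_state.sum())

trace = np.array(trace)
print("mean open fraction:", trace.mean() / n_channels)   # ~ alpha/(alpha+beta)
print("open-count fluctuation (sd):", trace.std())
```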

  18. Auditory hallucinations in childhood : associations with adversity and delusional ideation

    NARCIS (Netherlands)

    Bartels-Velthuis, A. A.; van de Willige, G.; Jenner, J. A.; Wiersma, D.; van Os, J.

    2012-01-01

    Background. Previous work suggests that exposure to childhood adversity is associated with the combination of delusions and hallucinations. In the present study, associations between (severity of) auditory vocal hallucinations (AVH) and (i) social adversity [traumatic experiences (TE) and stressful

  19. Modality specific neural correlates of auditory and somatic hallucinations

    Science.gov (United States)

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36-year-old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality. PMID:11606687

  20. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand the way in which some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  1. Auditory short-term memory activation during score reading.

    Directory of Open Access Journals (Sweden)

    Veerle L Simoens

    Full Text Available Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. This study had three parts. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would interfere most with the score-reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  2. Contextual modulation of primary visual cortex by auditory signals

    Science.gov (United States)

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015
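
    The "read out" mentioned above is a decoding analysis. The sketch below shows the generic form of such an analysis, a cross-validated classifier predicting a stimulus category from multivariate response patterns, applied to synthetic data; it is a stand-in for, not a reproduction of, the authors' fMRI pipeline.

```python
# Hedged sketch: cross-validated decoding of a sound category from simulated
# multi-voxel activity patterns. All sizes and effect strengths are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_voxels, n_classes = 30, 120, 3            # assumed sizes
labels = np.repeat(np.arange(n_classes), n_per_class)
class_patterns = 0.4 * rng.standard_normal((n_classes, n_voxels))  # weak signal
data = rng.standard_normal((labels.size, n_voxels)) + class_patterns[labels]

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, data, labels, cv=5).mean()
print(f"decoding accuracy {acc:.2f} vs chance {1 / n_classes:.2f}")
```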

  3. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
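
    To make the idea of penalty-scored scheduling concrete, the sketch below scores candidate orderings of overlapping messages and picks the least conflicting one. The penalty heuristic (overlap duration, weighted more heavily when two messages share a coarse pitch band) and the fixed onset spacing are illustrative assumptions, not the dissertation's actual scoring rules; the message names are hypothetical.

```python
# Hedged sketch: evaluate candidate schedules of overlapping auditory messages
# with a penalty score and keep the least-conflicting ordering.
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Message:
    name: str
    duration: float      # seconds
    pitch_band: int      # coarse register, used here as a crude conflict cue

def penalty(schedule, gap=0.25):
    """Score a schedule: overlapping same-band messages are penalised most."""
    starts, score, t = {}, 0.0, 0.0
    for m in schedule:
        starts[m.name] = t
        t += gap                                   # onsets spaced `gap` apart
    for a in schedule:
        for b in schedule:
            if a.name >= b.name:                   # count each pair once
                continue
            overlap = max(0.0, min(starts[a.name] + a.duration,
                                   starts[b.name] + b.duration)
                               - max(starts[a.name], starts[b.name]))
            score += overlap * (3.0 if a.pitch_band == b.pitch_band else 1.0)
    return score

msgs = [Message("alarm", 1.0, 2), Message("mail", 0.6, 2), Message("status", 0.8, 0)]
best = min(permutations(msgs), key=penalty)
print([m.name for m in best], f"penalty={penalty(best):.2f}")
```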

  4. Visual change detection recruits auditory cortices in early deafness.

    Science.gov (United States)

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early deaf individuals was paired with a reduction of the response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing the visual information and in comparing the upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.

  5. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2

    OpenAIRE

    2016-01-01

    Introduction “Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action” (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution a...

  6. Auditory stream formation affects comodulation masking release retroactively

    DEFF Research Database (Denmark)

    Dau, Torsten; Ewert, Stephan; Oxenham, A. J.

    2009-01-01

    in terms of the sequence of "postcursor" flanking bands forming a perceptual stream with the original flanking bands, resulting in perceptual segregation of the flanking bands from the masker. The results are consistent with the idea that modulation analysis occurs within, not across, auditory objects......, and that across-frequency CMR only occurs if the on-frequency and flanking bands fall within the same auditory object or stream....

  7. Auditory neuropathy spectrum disorder in a child with albinism

    Directory of Open Access Journals (Sweden)

    Mayur Bhat

    2016-01-01

    Full Text Available Albinism is a congenital disorder characterized by complete or partial absence of pigment in the skin, eyes, and hair due to absent or defective melanin production. As a result, disruption can also be seen in the auditory pathways, among other areas. Therefore, the aim of the present study is to highlight the underlying auditory neural deficits seen in albinism and to discuss the role of the audiologist in these cases.

  8. The plastic ear and perceptual relearning in auditory spatial perception.

    Science.gov (United States)

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location which presumably provides the basis for recalibration to changes in the shape of the ear over a life time. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues that result in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis.

  9. Auditory hypersensitivity in children and teenagers with autistic spectrum disorder

    OpenAIRE

    2004-01-01

    OBJECTIVE: To verify whether the clinical behavior of auditory hypersensitivity, reported in interviews with parents/caregivers and therapists/teachers of 46 children and teenagers with autistic spectrum disorder, corresponds to audiological findings. METHOD: The clinical diagnosis of auditory hypersensitivity was investigated by means of an interview. Subsequently, a test of the acoustic stapedial reflex was conducted, and responses to intense acoustic stimuli in open field were observ...

  10. Auditory stream segregation in children with Asperger syndrome

    OpenAIRE

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E. (Eira); Nieminen-von Wendt, T.; Kujala, T. (Tiia)

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically deve...

  11. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  12. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and of proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ± 7.0%); auditory cues, 12.9% (SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and in follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  13. The plastic ear and perceptual relearning in auditory spatial perception.

    Directory of Open Access Journals (Sweden)

    Simon eCarlile

    2014-08-01

    Full Text Available The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location which presumably provides the basis for recalibration to changes in the shape of the ear over a life time. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues that result in significant degradation in localization performance. Following chronic exposure (10-60 days) performance recovers to some extent and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state on auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prosthesis.

  14. A unique cellular scaling rule in the avian auditory system.

    Science.gov (United States)

    Corfield, Jeremy R; Long, Brendan; Krilow, Justin M; Wylie, Douglas R; Iwaniuk, Andrew N

    2016-06-01

    Although it is clear that neural structures scale with body size, the mechanisms of this relationship are not well understood. Several recent studies have shown that the relationship between neuron numbers and brain (or brain region) size is not only different across mammalian orders, but also across auditory and visual regions within the same brains. Among birds, similar cellular scaling rules have not been examined in any detail. Here, we examine the scaling of auditory structures in birds and show that the scaling rules that have been established in the mammalian auditory pathway do not necessarily apply to birds. In galliforms, neuronal densities decrease with increasing brain size, suggesting that auditory brainstem structures increase in size faster than neurons are added; smaller brains have relatively more neurons than larger brains. The cellular scaling rules that apply to auditory brainstem structures in galliforms are, therefore, different from those found in the primate auditory pathway. It is likely that the factors driving this difference are associated with the anatomical specializations required for sound perception in birds, although there is a decoupling of neuron numbers in brain structures and hair cell numbers in the basilar papilla. This study provides significant insight into the allometric scaling of neural structures in birds and improves our understanding of the rules that govern neural scaling across vertebrates.
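
    Scaling analyses of this kind are typically summarized by a log-log (allometric) fit of neuron number against structure size, with an exponent below 1 indicating that neuronal density falls as the structure grows. The sketch below shows such a fit on fabricated illustrative numbers, not data from the study.

```python
# Hedged sketch: allometric (log-log) fit of neuron number vs structure volume.
# Data points are fabricated for illustration only.
import numpy as np

volume = np.array([2.1, 4.8, 9.5, 20.0, 41.0])          # arbitrary units
neurons = np.array([1.2e5, 2.2e5, 3.8e5, 6.5e5, 1.1e6])

slope, intercept = np.polyfit(np.log10(volume), np.log10(neurons), 1)
print(f"scaling exponent ~ {slope:.2f}")   # <1: density falls as size increases
```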

  15. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of the sensory inputs in speech production.

  16. [Analysis of auditory information in the brain of the cetacean].

    Science.gov (United States)

    Popov, V V; Supin, A Ia

    2006-01-01

    A distinctive feature of the cetacean brain is the exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory area, in the cetacean cortex differs essentially from that in other mammals. The EP characteristics indicated the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres were mainly performed using the EP technique. Of the several types of EPs, the short-latency auditory EP was most thoroughly studied. In cetaceans, it is characterised by exceptionally high temporal resolution, with an integration time of about 0.3 ms, which corresponds to a cut-off frequency of 1700 Hz. This far exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of hearing in cetaceans was measured using a number of variants of the masking technique. The acuity of hearing frequency selectivity in cetaceans exceeds that of most terrestrial mammals (excepting the bats). This acute frequency selectivity allows differentiation among the finest spectral patterns of auditory signals.
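
    The quoted correspondence between a ~0.3 ms integration time and a ~1700 Hz cut-off frequency is consistent with the simple reciprocal convention below; taking the cut-off as the inverse of twice the integration time is an assumption about the definition used, not something stated in the record.

```latex
f_{c} \approx \frac{1}{2\,\tau} = \frac{1}{2 \times 0.3\ \mathrm{ms}} \approx 1.7\ \mathrm{kHz}
```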

  17. Auditory Neuropathy: Findings of Behavioral, Physiological and Neurophysiological Tests

    Directory of Open Access Journals (Sweden)

    Mohammad Farhadi

    2006-12-01

    Full Text Available Background and Aim: Auditory neuropathy (AN) can be diagnosed by an abnormal auditory brainstem response (ABR) in the presence of normal cochlear microphonics (CM) and otoacoustic emissions (OAEs). The aim of this study was to investigate the ABR and other electrodiagnostic test results of 6 patients with problems in speech recognition suspected of having AN. Materials and Methods: This cross-sectional study was conducted on 6 AN patients of different ages evaluated by pure tone audiometry, speech discrimination score (SDS), immittance audiometry, electrocochleography, ABR, middle latency response (MLR), late latency response (LLR), and OAEs. Results: Behavioral pure tone audiometric tests showed moderate to profound hearing loss. SDS was poorer than would be expected from the pure tone thresholds. All patients had normal tympanograms but absent acoustic reflexes. CMs and OAEs were within normal limits. There was no contralateral suppression of OAEs. None of the cases had a normal ABR or MLR, although an LLR was recorded in 4. Conclusion: All patients in this study are typical cases of auditory neuropathy. Despite abnormal input, the LLR remained recordable, which indicates differences among auditory evoked potentials in the degree of neural synchrony they require. These findings suggest that the auditory cortex may play a role in regulating the representation of signals that are degraded at earlier stages of the auditory pathways.

  18. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mtDNA11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  19. An anatomical and functional topography of human auditory cortical areas.

    Science.gov (United States)

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that-whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis-the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  20. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  1. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle eMoerel

    2014-07-01

    Full Text Available While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that - whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis - the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  2. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

    Full Text Available While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task in which subjects observed two identical visual objects moving toward each other, overlapping, and then continuing their original motion. Subjects may perceive the objects as either streaming through each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, “auditory stimulation”, “haptic stimulation” or “haptic and auditory stimulation” was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous multimodal presentation alongside visual motion. In the future, a neuroscience approach (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying these findings.

  3. Verrucous Carcinoma in External Auditory Canal – A Rare Case

    Directory of Open Access Journals (Sweden)

    Md Zillur Rahman

    2013-05-01

    Full Text Available Verrucous carcinoma is a variant of squamous cell carcinoma. It is of low-grade malignancy and rarely presents with distant metastasis. The oral cavity is the commonest site of this tumour; other sites are the larynx, oesophagus and genitalia. Verrucous carcinoma of the external auditory canal is extremely rare. We present a 45-year-old woman who came to the ENT & Head Neck Surgery department of Delta Medical College, Dhaka, Bangladesh with a discharging left ear and impairment of hearing on the same side for 7 years. Otoscopic examination showed a mass occupying almost the whole of the external auditory canal; the overlying skin was thickened, papillary and blackish. Cytology of a scraping from the external auditory canal showed hyperkeratosis and parakeratosis. The bone of the external auditory canal was found to be eroded in some parts. Excision of the mass was done under the microscope, and split-thickness skin grafting was done in the external auditory canal. The mass was diagnosed as verrucous carcinoma on histopathological examination. Afterwards she was given radiotherapy. Six months of follow-up showed no recurrence and healthy epithelialization of the external auditory canal.

  4. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.

  5. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia.

    Science.gov (United States)

    Kuga, Hironori; Onitsuka, Toshiaki; Hirano, Yoji; Nakamura, Itta; Oribe, Naoya; Mizuhara, Hiroaki; Kanai, Ryota; Kanba, Shigenobu; Ueno, Takefumi

    2016-10-01

    Recent MRI studies have shown that schizophrenia is characterized by reductions in brain gray matter, which progress in the acute state of the disease. Cortical circuitry abnormalities in gamma oscillations, such as deficits in the auditory steady state response (ASSR) to gamma frequency (>30-Hz) stimulation, have also been reported in schizophrenia patients. In the current study, we investigated neural responses to click stimulation using BOLD signals. We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
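
    For illustration, a minimal sketch of how periodic click trains at the four stimulation rates above (20, 30, 40 and 80 Hz) might be generated; the click duration, sampling rate and unit amplitude are assumptions for this sketch, not the parameters used in the study.

    ```python
    import numpy as np

    def click_train(rate_hz, dur=1.0, fs=44100, click_ms=0.5):
        """Rectangular click train at a given repetition rate (ASSR-style stimulus)."""
        n = int(dur * fs)
        x = np.zeros(n)
        click_len = max(1, int(click_ms * 1e-3 * fs))
        period = int(round(fs / rate_hz))
        for start in range(0, n - click_len, period):
            x[start:start + click_len] = 1.0   # unit-amplitude rectangular click
        return x

    # The four stimulation rates described in the abstract above.
    trains = {rate: click_train(rate) for rate in (20, 30, 40, 80)}
    for rate, x in trains.items():
        onsets = np.flatnonzero(np.diff(np.concatenate(([0.0], x))) > 0)
        print(f"{rate} Hz train: {len(onsets)} clicks in 1 s")
    ```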

  6. Using Facebook to Reach People Who Experience Auditory Hallucinations

    Science.gov (United States)

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted a total sample of N=264 over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as to provide suggestions to others who experience auditory hallucinations.

  7. Auditory place theory and frequency difference limen

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jialu

    2006-01-01

    It has long been objected that the place code is far too coarse a mechanism to account for the finest frequency difference limen, ever since the place theory of hearing was proposed in the 19th century. A place correlation model, which takes the energy distribution of a pure tone across neighboring auditory filter bands into full account, is presented in this paper. The model, based on the place theory and on experimental results from psychophysical tuning curves, can easily explain the finest difference limen for frequency (about 0.02 or 0.3% at 1000 Hz). Using a standard 1/3-octave filter bank, the relationship was established between Δf, the deviation of an input pure tone's frequency from the centre frequency of the K-th filter band, and ΔE, the output intensity difference between the K-th and (K+1)-th filters, in order to show the fine frequency-detection ability of the filter bank. The model can also be used to extract the fundamental frequency of speech and to measure the frequency of pure tones precisely.
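
    As an illustration of the Δf-ΔE relationship described above, the sketch below models adjacent 1/3-octave bands with idealized Gaussian-magnitude filters (an assumption made here for illustration; the filter shapes of the actual model are not specified in the abstract) and prints how the output level difference between the K-th and (K+1)-th bands varies as a pure tone is shifted away from the K-th centre frequency.

    ```python
    import numpy as np

    def third_octave_centres(f_low=250.0, n_bands=8):
        """Centre frequencies of consecutive 1/3-octave bands."""
        return f_low * 2.0 ** (np.arange(n_bands) / 3.0)

    def band_output_db(f_tone, f_c, rel_bw=0.232):
        """Output level (dB) of one band filter for a unit-amplitude pure tone.

        The filter magnitude is modelled as a Gaussian in frequency whose full
        width at half maximum is ~23% of the centre frequency (the 1/3-octave
        bandwidth); this idealized shape is an assumption for illustration only.
        """
        sigma = rel_bw * f_c / 2.355            # convert FWHM to standard deviation
        gain = np.exp(-0.5 * ((f_tone - f_c) / sigma) ** 2)
        return 20.0 * np.log10(gain + 1e-12)

    centres = third_octave_centres()
    k = 3                                       # K-th band (500 Hz here)
    f_k, f_k1 = centres[k], centres[k + 1]

    # delta_E = E_K - E_{K+1}: output level difference between adjacent filters
    # as a function of delta_f, the tone's offset from the K-th centre frequency.
    for delta_f in np.linspace(-20.0, 20.0, 9):
        f_tone = f_k + delta_f
        delta_e = band_output_db(f_tone, f_k) - band_output_db(f_tone, f_k1)
        print(f"delta_f = {delta_f:+6.1f} Hz  ->  delta_E = {delta_e:6.2f} dB")
    ```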

  8. Theory of Auditory Thresholds in Primates

    Science.gov (United States)

    Harrison, Michael J.

    2001-03-01

    The influence of thermal pressure fluctuations at the tympanic membrane has been previously investigated as a possible determinant of the threshold of hearing in humans (L.J. Sivian and S.D. White, J. Acoust. Soc. Am. IV, 4;288(1933).). More recent work has focussed more precisely on the relation between statistical mechanics and sensory signal processing by biological means in creatures' brains (W. Bialek, in ``Physics of Biological Systems: from molecules to species'', H. Flyvberg et al, (Eds), p. 252; Springer 1997.). Clinical data on the frequency dependence of hearing thresholds in humans and other primates (W.C. Stebbins, ``The Acoustic Sense of Animals'', Harvard 1983.) have long been available. I have derived an expression for the frequency dependence of hearing thresholds in primates, including humans, by first calculating the frequency dependence of thermal pressure fluctuations at eardrums from damped normal modes excited in model ear canals of given simple geometry. I then show that most of the features of the clinical data are directly related to the frequency dependence of the ratio of thermal noise pressure arising from without to that arising from within the masking bandwidth which signals must dominate in order to be sensed. The higher intensity of threshold signals in primates smaller than humans, which is clinically observed over much but not all of the human auditory spectrum, is shown to arise from their smaller meatus dimensions.

  9. Elastic modulus of cetacean auditory ossicles.

    Science.gov (United States)

    Tubelli, Andrew A; Zosuls, Aleks; Ketten, Darlene R; Mountain, David C

    2014-05-01

    In order to model the hearing capabilities of marine mammals (cetaceans), it is necessary to understand the mechanical properties, such as elastic modulus, of the middle ear bones in these species. Biologically realistic models can be used to investigate the biomechanics of hearing in cetaceans, much of which is currently unknown. In the present study, the elastic moduli of the auditory ossicles (malleus, incus, and stapes) of eight species of cetacean, two baleen whales (mysticete) and six toothed whales (odontocete), were measured using nanoindentation. The two groups of mysticete ossicles overall had lower average elastic moduli (35.2 ± 13.3 GPa and 31.6 ± 6.5 GPa) than the groups of odontocete ossicles (53.3 ± 7.2 GPa to 62.3 ± 4.7 GPa). Interior bone generally had a higher modulus than cortical bone by up to 36%. The effects of freezing and formalin-fixation on elastic modulus were also investigated, although samples were few and no clear trend could be discerned. The high elastic modulus of the ossicles and the differences in the elastic moduli between mysticetes and odontocetes are likely specializations in the bone for underwater hearing.

  10. Structured Counseling for Auditory Dynamic Range Expansion.

    Science.gov (United States)

    Gold, Susan L; Formby, Craig

    2017-02-01

    A structured counseling protocol is described that, when combined with low-level broadband sound therapy from bilateral sound generators, offers audiologists a new tool for facilitating the expansion of the auditory dynamic range (DR) for loudness. The protocol and its content are specifically designed to address and treat problems that impact hearing-impaired persons who, due to their reduced DRs, may be limited in the use and benefit of amplified sound from hearing aids. The reduced DRs may result from elevated audiometric thresholds and/or reduced sound tolerance as documented by lower-than-normal loudness discomfort levels (LDLs). Accordingly, the counseling protocol is appropriate for challenging and difficult-to-fit persons with sensorineural hearing losses who experience loudness recruitment or hyperacusis. Positive treatment outcomes for individuals with the former and latter conditions are highlighted in this issue by incremental shifts (improvements) in LDL and/or categorical loudness judgments, associated reduced complaints of sound intolerance, and functional improvements in daily communication, speech understanding, and quality of life leading to improved hearing aid benefit, satisfaction, and aided sound quality, posttreatment.

  11. Auditory free classification of nonnative speech

    Science.gov (United States)

    Atagi, Eriko; Bent, Tessa

    2013-01-01

    Through experience with speech variability, listeners build categories of indexical speech characteristics including categories for talker, gender, and dialect. The auditory free classification task—a task in which listeners freely group talkers based on audio samples—has been a useful tool for examining listeners’ representations of some of these characteristics including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers’ native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented—such as the listeners’ attention to the talkers’ native language and the variability of stimulus intelligibility—can influence listeners’ perceptual organization of nonnative speech. PMID:24363470

  12. Cholesteatoma invasion into the internal auditory canal.

    Science.gov (United States)

    Migirov, Lela; Bendet, Erez; Kronenberg, Jona

    2009-05-01

    Cholesteatoma invasion into the internal auditory canal (IAC) is rare and usually results in irreversible, complete hearing loss and facial paralysis on the affected side. This retrospective study examines the clinical characteristics of seven patients with cholesteatoma invading the IAC, analyzes possible routes of the cholesteatoma's extension and describes the surgical approaches used and patient outcome. Extension to the IAC was via the supralabyrinthine route in most patients. A subtotal petrosectomy, a translabyrinthine approach or a middle cranial fossa approach combined with radical mastoidectomy was required for the complete removal of the cholesteatoma. All seven patients presented with some preoperative facial nerve palsy. The facial nerve was decompressed in four patients and facial nerve repair was performed in three others, two by hypoglossal-facial anastomosis and one by greater auricular nerve interposition grafting. All patients ended up with total deafness in the operated ear. At 1 year following surgery, facial nerve function was House-Brackmann grade III in six cases and grade II in one. In conclusion, cholesteatoma invading the IAC is a separate entity with characteristic clinical presentations; it requires a unique surgical approach and results in significant morbidity, such as total deafness in the operated ear and impaired facial movement.

  13. Expectation and attention in hierarchical auditory prediction.

    Science.gov (United States)

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M; Bekinschtein, Tristan A

    2013-07-03

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

  14. Theta oscillations accompanying concurrent auditory stream segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Urbán, Gábor; Winkler, István

    2016-08-01

    The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones, which could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds through inharmonicity, onset asynchrony, and/or a perceived source location difference of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening) or reported for each tone whether they perceived one or two concurrent sounds (active listening). In two time windows, the amplitude of theta oscillation was modulated by the presence vs. absence of the cues: 60-350 ms/6-8 Hz (early) and 350-450 ms/4-8 Hz (late). The early response appeared both in the passive and the active listening conditions; it did not closely match the task performance; and it had a fronto-central scalp distribution. The late response was only elicited in the active listening condition; it closely matched the task performance; and it had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in the processing of concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.
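
    A generic filter-Hilbert sketch for estimating theta-band amplitude in the two time windows reported above; the sampling rate, filter settings and synthetic single-trial data are illustrative assumptions and do not reproduce the study's analysis pipeline.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 500.0                                   # sampling rate (Hz), assumed
    t = np.arange(-0.2, 0.8, 1.0 / fs)           # epoch around tone onset (s)
    rng = np.random.default_rng(0)

    # Synthetic single-trial EEG: background noise plus a transient 6 Hz burst.
    eeg = 5.0 * rng.standard_normal(t.size)
    burst = (t > 0.05) & (t < 0.40)
    eeg[burst] += 8.0 * np.sin(2 * np.pi * 6.0 * t[burst])

    # Band-pass 4-8 Hz (theta) and take the analytic amplitude envelope.
    sos = butter(4, [4.0, 8.0], btype="bandpass", fs=fs, output="sos")
    theta_amp = np.abs(hilbert(sosfiltfilt(sos, eeg)))

    def mean_amp(t0, t1):
        sel = (t >= t0) & (t < t1)
        return theta_amp[sel].mean()

    # The two windows mentioned in the abstract (early 60-350 ms, late 350-450 ms).
    print("early window (60-350 ms):", round(mean_amp(0.060, 0.350), 2))
    print("late window (350-450 ms):", round(mean_amp(0.350, 0.450), 2))
    ```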

  15. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast spiking, parvalbumin-positive cells and somatostatin-positive, Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  16. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of the linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing the acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  17. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  18. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
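
    A crude sketch of the sparsification idea described in the two records above: keep only the largest time-frequency elements of an ordinary STFT spectrogram and resynthesize. The frame size, the global top-K selection (a simplification of the peak-picking step) and the feature budget are assumptions; the auditory-model spectrogram used in the study is not implemented here.

    ```python
    import numpy as np
    from scipy.signal import stft, istft

    def sparse_sketch(x, fs, feats_per_sec=10):
        """Keep only the largest time-frequency magnitudes of a signal ('sketch')."""
        f, t, Z = stft(x, fs=fs, nperseg=512)
        mag = np.abs(Z)
        n_keep = max(1, int(feats_per_sec * len(x) / fs))
        flat = np.argsort(mag, axis=None)[-n_keep:]     # indices of the biggest peaks
        mask = np.zeros_like(mag, dtype=bool)
        mask[np.unravel_index(flat, mag.shape)] = True
        _, x_sketch = istft(np.where(mask, Z, 0.0), fs=fs, nperseg=512)
        return x_sketch[: len(x)]

    # Toy input: 1 s of a 440 Hz tone in noise, reduced to ~10 features per second.
    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.default_rng(1).standard_normal(fs)
    sketch = sparse_sketch(x, fs, feats_per_sec=10)
    print("energy fraction kept:", round(float(np.sum(sketch ** 2) / np.sum(x ** 2)), 4))
    ```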

  19. CT findings of the osteoma of the external auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ha Young; Song, Chang Joon; Yoon, Chung Dae; Park, Mi Hyun; Shin, Byung Seok [Chungnam National University, School of Medicine, Daejeon (Korea, Republic of)

    2006-07-15

    We report the CT imaging findings of osteoma of the external auditory canal. Temporal bone CT scanning was performed on eight patients (4 males and 4 females, aged between 8 and 41 years) with pathologically proven osteoma of the external auditory canal after operation, and the CT findings were retrospectively reviewed. We analyzed not only the size, shape, distribution and location of the osteomas, but also the relationship between the lesion and the tympanosquamous or tympanomastoid suture line, and the changes seen on follow-up CT images for the patients who were able to undergo follow-up. All of the osteomas of the external auditory canal were unilateral, solitary, pedunculated bony masses. In five patients the osteoma occurred on the left side, and in the other three patients on the right side. The average size of the osteomas was 0.6 cm, with the smallest being 0.5 cm and the largest being 1.2 cm. Each lesion was located at the osteochondral junction in the terminal part of the osseous external ear canal. The stalk of the osteoma arose from the anteroinferior wall in five cases (63%), the anterosuperior wall (the tympanosquamous suture line) in two cases (25%), and the anterior wall in one case. The osteoma was of the compact form in five cases and of the cancellous form in three cases. One case of the cancellous form changed into the compact form 35 months later due to advanced ossification. Osteomas of the external auditory canal developed in a unilateral and solitary fashion, and the characteristic imaging finding is attachment to the external auditory canal by a stalk. Contrary to common understanding of its site of occurrence, the osteomas mostly arose from the tympanic wall, regardless of the tympanosquamous or tympanomastoid suture line.

  20. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale-invariance across overall spectro-temporal structure of the sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale-invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale-invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study are particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the statistics of the spectro-temporal statistical structure of the stimulus.

  1. The Effect of Neonatal Hyperbilirubinemia on the Auditory System

    Directory of Open Access Journals (Sweden)

    Dr. Zahra Jafari

    2008-12-01

    Full Text Available Background and Aim: Hyperbilirubinemia during the neonatal period is known to be an important risk factor for auditory impairment, and it may result in permanent brain damage if no proper therapeutic intervention is undertaken. In the present study, electroacoustic and electrophysiologic tests were used to evaluate the function of the auditory system in a group of children with severe neonatal jaundice. Materials and Methods: Forty-five children with a mean age of 16.1 ± 14.81 months and bilirubin levels of 17 mg/dl or higher were studied with the transient evoked otoacoustic emission, acoustic reflex, auditory brainstem response and auditory steady-state response tests. Results: The mean bilirubin level was 29.37 ± 8.95 mg/dl. It was lower than 20 mg/dl in 22.2%, between 20-30 mg/dl in 24.4% and more than 30 mg/dl in 48.0% of the children. No therapeutic intervention was reported in 26.7%, phototherapy in 44.4%, and blood exchange in 28.9% of the children. A history of hypoxia was present in 48.9% and of preterm birth in 26.6%. TEOAEs were recordable in 71.1% of cases, whereas normal results on the acoustic reflex, ABR and ASSR tests were seen in only 11.1% of cases. Clinical signs of auditory neuropathy were revealed in 57.7% of the children. Conclusion: Auditory tests sensitive to the site of hyperbilirubinemia-related injury are necessary to determine the functional effect and severity of the disorder. Because auditory neuropathy/dys-synchrony is common in neonates with hyperbilirubinemia, OAEs and ABR are the minimum essential tests to identify this disorder.

  2. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity.

    Science.gov (United States)

    Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J

    2017-03-01

    Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how
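
    One plausible reading of the stimulus construction above: a pure-tone carrier multiplied by an envelope built from the sum of 37 and 81 Hz modulators, normalized so that 100% depth just reaches zero. The normalization and sampling rate are assumptions; the authors' exact construction may differ.

    ```python
    import numpy as np

    def seap_stimulus(carrier_hz=500.0, fs=44100, dur=1.0,
                      mod_hz=(37.0, 81.0), depth=1.0):
        """Pure-tone carrier amplitude-modulated by the sum of two modulators.

        depth=1.0 is treated as 100% modulation (envelope reaching zero); this
        normalization is an illustrative assumption.
        """
        t = np.arange(int(fs * dur)) / fs
        mod = sum(np.sin(2 * np.pi * f * t) for f in mod_hz)
        mod /= np.max(np.abs(mod))                  # scale modulator to [-1, 1]
        envelope = 1.0 + depth * mod                # at 100% depth the envelope touches 0
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    # Standard (500 Hz) and deviant (600 Hz) carriers for an oddball-style sequence.
    standard = seap_stimulus(500.0)
    deviant = seap_stimulus(600.0)
    print(standard.shape, deviant.shape, float(standard.min()), float(standard.max()))
    ```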

  3. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Full Text Available Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.

  4. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking.

    Science.gov (United States)

    Chang, An-Chieh; Lutfi, Robert; Lee, Jungmee; Heo, Inseok

    2016-09-18

    Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
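
    A worked example of the divergence measure for the simple case in which a single tone parameter per sequence (e.g. log-frequency) is modelled as a univariate Gaussian; the stimuli in the experiments, and the exact form of DKL used there, may be more general.

    ```python
    import numpy as np

    def dkl_gaussian(mu0, sigma0, mu1, sigma1):
        """Kullback-Leibler divergence D(N0 || N1) between univariate Gaussians."""
        return (np.log(sigma1 / sigma0)
                + (sigma0 ** 2 + (mu0 - mu1) ** 2) / (2.0 * sigma1 ** 2)
                - 0.5)

    # Target and masker tone parameters drawn from Gaussians; increasing the mean
    # separation raises DKL, which on the account above should predict both
    # stronger streaming and better masked change detection.
    for delta in (0.0, 0.1, 0.2, 0.4):
        d = dkl_gaussian(mu0=0.0, sigma0=0.1, mu1=delta, sigma1=0.1)
        print(f"mean separation {delta:.1f} -> DKL = {d:.2f} nats")
    ```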

  5. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.
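
    A minimal sketch of a two-tone-driven oscillator in Hopf normal form, a simplified relative of the canonical model referenced above; all parameters are arbitrary and chosen only to show qualitatively how a nonlinear oscillator can produce energy at combination frequencies absent from the stimulus.

    ```python
    import numpy as np

    def driven_oscillator(f0=1.0, f1=2.0, f2=3.0, alpha=-0.5, beta=-2.0,
                          force=2.0, fs=2000, dur=30.0):
        """Integrate dz/dt = z*(alpha + i*2*pi*f0 + beta*|z|^2) + x(t) by forward Euler.

        x(t) is a real two-tone stimulus; the oscillator is a simplified stand-in
        for the canonical mode-locking model, for illustration only.
        """
        dt = 1.0 / fs
        n = int(dur * fs)
        t = np.arange(n) * dt
        x = force * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))
        z = np.zeros(n, dtype=complex)
        z[0] = 0.01 + 0.0j
        for k in range(n - 1):
            dz = z[k] * (alpha + 1j * 2 * np.pi * f0 + beta * abs(z[k]) ** 2) + x[k]
            z[k + 1] = z[k] + dt * dz          # Euler step; adequate at these low rates
        return t, z

    t, z = driven_oscillator()
    # Spectrum of the steady-state response (last 20 s). Besides the stimulus
    # frequencies f1 and f2, the cubic nonlinearity can introduce combination
    # components such as f2 - f1, echoing the nonlinear resonances discussed above.
    seg = np.real(z[-int(20 * 2000):])
    spec = np.abs(np.fft.rfft(seg)) / len(seg)
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / 2000)
    for f in (1.0, 2.0, 3.0, 4.0):
        idx = int(round(f * 20))               # 0.05 Hz resolution -> bin = f / 0.05
        print(f"{f:.1f} Hz component magnitude: {spec[idx]:.4f}")
    ```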

  6. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  7. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    Science.gov (United States)

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  8. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with

  9. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. NFB using an auditory stimulus as reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.
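
    A schematic of the reinforcement rule described above: estimate theta (4-8 Hz) and alpha (8-12 Hz) band power, form their ratio, and deliver the reinforcer when the ratio falls below a threshold. The band edges, threshold, window length and synthetic data are assumptions, not the protocol's actual settings.

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, lo, hi):
        """Mean power spectral density of x in the [lo, hi) Hz band."""
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        sel = (f >= lo) & (f < hi)
        return pxx[sel].mean()

    def neurofeedback_step(eeg_window, fs, threshold, reinforce):
        """Deliver the reinforcer when the theta/alpha ratio drops below threshold."""
        ratio = band_power(eeg_window, fs, 4.0, 8.0) / band_power(eeg_window, fs, 8.0, 12.0)
        if ratio < threshold:
            reinforce()        # e.g. play a tone or show a visual stimulus
        return ratio

    # Toy run on synthetic EEG (alpha-dominated, so the reinforcer should fire).
    fs = 250
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(2)
    eeg = (1.0 * np.sin(2 * np.pi * 6 * t)       # theta component
           + 2.0 * np.sin(2 * np.pi * 10 * t)    # stronger alpha component
           + 0.5 * rng.standard_normal(t.size))
    ratio = neurofeedback_step(eeg, fs, threshold=1.0, reinforce=lambda: print("reinforce!"))
    print("theta/alpha ratio:", round(float(ratio), 2))
    ```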

  10. The use of visual stimuli during auditory assessment.

    Science.gov (United States)

    Pearlman, R C; Cunningham, D R; Williamson, D G; Amerman, J D

    1975-01-01

    Two groups of male subjects beyond 50 years of age were given audiometric tasks with and without visual stimulation to determine if visual stimuli changed auditory perception. The first group consisted of 10 subjects with normal auditory acuity; the second, 10 with sensorineural hearing losses greater than 30 decibels. The rate of presentation of the visual stimuli, consisting of photographic slides of various subjects, was determined in experiment I of the study. The subjects, while viewing the slides at their own rate, took an auditory speech discrimination test; they were advised to change the slides at a speed which they felt facilitated attention while performing the auditory task. The mean rate of slide-changing behavior was used as the "optimum" visual stimulation rate in experiment II, which was designed to explore the interaction of the bisensory presentation of stimuli. Bekesy tracings and Rush Hughes recordings were administered without and with visual stimuli, the latter presented at the mean rate of slide changes found in experiment I. Analysis of data indicated that (1) no statistically significant difference exists between visual and nonvisual conditions during speech discrimination and Bekesy testing; and (2) subjects did not believe that visual stimuli as presented in this study helped them to listen more effectively. The experimenter concluded that the various auditory stimuli encountered in the auditory test situation may actually be a deterrent to boredom because of the variety of tasks required in a testing situation.

  11. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them-the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.

  12. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
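
    One common way to compute the three metrics named above from a target trajectory and a head trajectory; the study's exact metric definitions may differ, and the trajectories below are synthetic.

    ```python
    import numpy as np

    def tracking_metrics(t, target_deg, head_deg, onset_threshold=2.0):
        """Onset latency, RMS error and gain for a head-tracking response (generic definitions)."""
        # Onset: first time the head has moved more than onset_threshold degrees.
        moved = np.abs(head_deg - head_deg[0]) > onset_threshold
        onset = t[np.argmax(moved)] if moved.any() else np.nan
        # RMS error between head and target position over the trial.
        rms = np.sqrt(np.mean((head_deg - target_deg) ** 2))
        # Gain: least-squares slope of head velocity against target velocity.
        v_target = np.gradient(target_deg, t)
        v_head = np.gradient(head_deg, t)
        gain = np.dot(v_head, v_target) / np.dot(v_target, v_target)
        return onset, rms, gain

    # Toy trial: target sweeps 100 degrees at 50 deg/s; the head starts 150 ms late
    # and moves slightly slower (45 deg/s).
    fs = 100.0
    t = np.arange(0, 2.0, 1 / fs)
    target = -50.0 + np.clip(50.0 * t, 0.0, 100.0)
    head = -50.0 + np.clip(45.0 * (t - 0.15), 0.0, 100.0)
    onset, rms, gain = tracking_metrics(t, target, head)
    print(f"onset = {onset:.2f} s, RMS error = {rms:.1f} deg, gain = {gain:.2f}")
    ```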

  13. Brainstem auditory evoked potentials in children with lead exposure

    Directory of Open Access Journals (Sweden)

    Katia de Freitas Alvarenga

    2015-02-01

    Full Text Available Introduction: Earlier studies have demonstrated an auditory effect of lead exposure in children, but information on the effects of low chronic exposures needs to be further elucidated. Objective: To investigate the effect of low chronic exposures on the auditory system in children with a history of low blood lead levels, using an auditory electrophysiological test. Methods: Contemporary cross-sectional cohort. Study participants underwent tympanometry, pure tone and speech audiometry, transient evoked otoacoustic emissions, and brainstem auditory evoked potentials, with blood lead monitoring over a period of 35.5 months. The study included 130 children, with ages ranging from 18 months to 14 years, 5 months (mean age 6 years, 8 months ± 3 years, 2 months). Results: The mean time-integrated cumulative blood lead index was 12 µg/dL (SD ± 5.7, range: 2.4-33). All participants had hearing thresholds equal to or below 20 dBHL and normal amplitudes of transient evoked otoacoustic emissions. No association was found between the absolute latencies of waves I, III, and V, the interpeak latencies I-III, III-V, and I-V, and the cumulative lead values. Conclusion: No evidence of toxic effects from chronic low lead exposures was observed on the auditory function of children living in a lead contaminated area.

  14. Enhanced representation of spectral contrasts in the primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Nicolas eCatz

    2013-06-01

    Full Text Available The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e. regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness, and width. Spectral edges are maximally enhanced for sharp contrasts and large depths. Cortical activity was also suppressed at frequencies within the suppressed region. Notably, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
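
    The sketch below shows one way such a multi-tone-pip stimulus with a suppressed spectral region could be synthesized in Python; all parameter values (pip duration, frequency grid, notch edges, attenuation) are illustrative assumptions rather than the stimulus specification used in the study.

        import numpy as np

        def multitone_pip_stimulus(fs=44100, dur=1.0, pip_dur=0.05, n_pips=200,
                                   notch=(4000.0, 8000.0), notch_atten_db=20.0, rng=None):
            """Random tone pips whose time-averaged spectrum has a suppressed region."""
            rng = np.random.default_rng() if rng is None else rng
            freqs = np.logspace(np.log10(500), np.log10(16000), 40)   # candidate pip frequencies
            n, pip_n = int(fs * dur), int(fs * pip_dur)
            t_pip = np.arange(pip_n) / fs
            ramp = np.hanning(pip_n)                                  # smooth each pip on and off
            stim = np.zeros(n)
            for _ in range(n_pips):
                f = rng.choice(freqs)
                # Pips inside the notch are attenuated, carving out the spectral contrast.
                amp = 10 ** (-notch_atten_db / 20) if notch[0] <= f <= notch[1] else 1.0
                start = rng.integers(0, n - pip_n)
                stim[start:start + pip_n] += amp * ramp * np.sin(2 * np.pi * f * t_pip)
            return stim / np.max(np.abs(stim))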

  15. Prevalence of auditory changes in newborns in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Guimarães, Valeriana de Castro

    2012-01-01

    Full Text Available Introduction: Early diagnosis of and intervention for deafness are fundamentally important for child development; hearing loss is more prevalent than other disorders detected at birth. Objective: To estimate the prevalence of auditory alterations in newborns at a teaching hospital. Method: Prospective cross-sectional study that evaluated 226 newborns delivered in a public hospital between May 2008 and May 2009. Results: Of the 226 infants screened, 46 (20.4%) showed absent otoacoustic emissions and were referred for a second screening. Of the 26 (56.5%) children who returned for the retest, 8 (30.8%) still showed absent emissions and were referred to an otolaryngologist. Five (55.5%) attended and were examined by the physician; of these, 3 (75.0%) had normal otoscopy and were referred for brainstem auditory evoked potential (PEATE) assessment. Of all the children studied, 198 (87.6%) showed emissions in at least one of the tests, and 2 (0.9%) were diagnosed with deafness. Conclusion: The prevalence of auditory alterations in the studied population was 0.9%. The study offers valuable epidemiological data and presents the first report on the subject, providing preliminary results for the future implementation and development of a neonatal hearing screening program.

  16. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory–motor practice during musical training, would show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians at the behavioral level.

  17. The auditory attention status in Iranian bilingual and monolingual people

    Directory of Open Access Journals (Sweden)

    Nayiere Mansoori

    2013-05-01

    Full Text Available Background and Aim: Bilingualism, one of the much-discussed issues in psychology and linguistics, can influence speech processing. Among the several tests for assessing auditory processing, the dichotic digit test was designed to study divided auditory attention. Our study was performed to compare auditory attention between Iranian bilingual and monolingual young adults. Methods: This cross-sectional study was conducted on 60 students, comprising 30 Turkish-Persian bilinguals and 30 Persian monolinguals of both genders, aged 18 to 30 years. The dichotic digit test was performed on young individuals with normal peripheral hearing and right-hand preference. Results: No significant difference was found between the dichotic digit test results of monolinguals and bilinguals (p=0.195), nor between the right- and left-ear results within the monolingual (p=0.460) and bilingual (p=0.054) groups. The mean score of women was significantly higher than that of men (p=0.031). Conclusion: There was no significant difference between bilinguals and monolinguals in divided auditory attention; it seems that acquisition of a second language at an early age has no noticeable effect on this type of auditory attention.

  18. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
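
    A minimal sketch of the GAM-based analysis described above is given below in Python using the pygam package on simulated spike counts; the stimulus dimensions, the Poisson link, and the tensor-product interaction term are illustrative assumptions, not the authors' exact model.

        import numpy as np
        from pygam import PoissonGAM, s, te

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 2))                 # two stimulus dimensions (e.g. level, AM depth)
        rate = np.exp(1.0 + np.sin(3 * X[:, 0]) + X[:, 1] + 0.8 * X[:, 0] * X[:, 1])
        y = rng.poisson(rate)                          # simulated spike counts

        # Smooth terms for each dimension plus a tensor-product term; a significant
        # te() term is the signature of integration across stimulus dimensions.
        gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
        gam.summary()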

  19. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai eHeimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  20. Spatial organization of tettigoniid auditory receptors: insights from neuronal tracing.

    Science.gov (United States)

    Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard

    2012-11-01

    The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. As the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigate the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In six tettigoniid species studied, distal receptor neurons are consistently arranged in double-rows of somata rather than a linear sequence. This arrangement of neurons is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the anterio-posterior and dorso-ventral axis was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of auditory receptors can deviate from a strictly straight organization into a more complex morphology. The linear organization of dendrites is not a morphological criterion that allows hearing organs to be distinguished from nonhearing sense organs serially homologous to ears in all species. Both the crowded arrangement of receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints of auditory morphogenesis in postembryonic development.

  1. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Frederic Venail

    2015-01-01

    Full Text Available The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using the auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.

  2. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants.

    Science.gov (United States)

    Venail, Frederic; Mura, Thibault; Akkari, Mohamed; Mathiolon, Caroline; Menjot de Champfleur, Sophie; Piron, Jean Pierre; Sicard, Marielle; Sterkers-Artieres, Françoise; Mondain, Michel; Uziel, Alain

    2015-01-01

    The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (nucleus device). The electrical response, measured using auto-NRT (neural responses telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = -0.11 ± 0.02, P < 0.01), the scalar placement of the electrodes (β = -8.50 ± 1.97, P < 0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.
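
    The sketch below shows, in Python, the general shape of a multivariate regression with cubic splines like the one described above, using a patsy cubic-regression-spline term in statsmodels on simulated data; the variable names, coding, and simulated values are assumptions, not the study's dataset or exact model.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 120
        df = pd.DataFrame({
            "cnf": rng.uniform(0.25, 8.0, n),           # characteristic frequency at insertion depth (kHz)
            "impedance_sq": rng.uniform(20, 120, n),    # squared electrode impedance (arbitrary units)
            "scalar_placement": rng.integers(0, 2, n),  # hypothetical coding: 0 = scala tympani, 1 = vestibuli
        })
        df["nrt_threshold"] = (180 - 0.11 * df["impedance_sq"] - 8.5 * df["scalar_placement"]
                               + 5 * np.log(df["cnf"]) + rng.normal(0, 5, n))

        # Cubic regression spline on CNF, linear terms for the other predictors.
        model = smf.ols("nrt_threshold ~ cr(cnf, df=4) + impedance_sq + scalar_placement",
                        data=df).fit()
        print(model.summary())
        residues = model.resid   # residue distribution vs. CNF as a proxy for neural survival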

  3. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  4. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn eLeung

    2016-01-01

    Full Text Available The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured – onset, RMS and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  5. Training-induced plasticity of auditory localization in adult mammals.

    Directory of Open Access Journals (Sweden)

    Oliver Kacelnik

    2006-04-01

    Full Text Available Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.

  6. Speech identification and cortical potentials in individuals with auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Vanaja CS

    2008-03-01

    Full Text Available Abstract Background The present study investigated the relationship between speech identification scores in quiet and parameters of cortical potentials (latency of P1, N1, and P2; and amplitude of N1/P2) in individuals with auditory neuropathy. Methods Ten individuals with auditory neuropathy (five males and five females) and ten individuals with normal hearing in the age range of 12 to 39 yr participated in the study. Speech identification ability was assessed for bi-syllabic words and cortical potentials were recorded for click stimuli. Results Results revealed that in individuals with auditory neuropathy, speech identification scores were significantly poorer than those of individuals with normal hearing. Individuals with auditory neuropathy were further classified into two groups, Good Performers and Poor Performers, based on their speech identification scores. It was observed that the mean amplitude of N1/P2 of Poor Performers was significantly lower than that of Good Performers and those with normal hearing. There was no significant effect of group on the latency of the peaks. Speech identification scores showed a good correlation with the amplitude of cortical potentials (N1/P2 complex) but did not show a significant correlation with the latency of cortical potentials. Conclusion The results of the present study suggest that measuring cortical potentials may offer a means of predicting perceptual skills in individuals with auditory neuropathy.

  7. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    Science.gov (United States)

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  8. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive functions such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been conducted. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed under three conditions: silence, positive music, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive or null) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  9. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal eAmitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
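
    To make the internal-noise argument concrete, the Monte Carlo sketch below simulates two-interval frequency discrimination with Gaussian internal noise: the frequency difference needed for a criterion level of performance scales directly with the noise, so reducing the noise lowers the threshold. This is a generic signal-detection illustration, not the authors' phase-locking model.

        import numpy as np

        def percent_correct(delta_f, sigma, n_trials=20000, rng=None):
            """Observer picks the interval with the higher noisy frequency estimate."""
            rng = np.random.default_rng() if rng is None else rng
            standard = rng.normal(0.0, sigma, n_trials)
            target = rng.normal(delta_f, sigma, n_trials)
            return np.mean(target > standard)

        # With delta_f equal to the internal noise sigma (d' = 1), performance sits
        # near 76% correct; halving sigma halves the delta_f needed for that level.
        for sigma in (4.0, 2.0):
            print(sigma, percent_correct(delta_f=sigma, sigma=sigma))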

  10. Auditory hair cell innervational patterns in lizards.

    Science.gov (United States)

    Miller, M R; Beck, J

    1988-05-22

    The pattern of afferent and efferent innervation of two to four unidirectional (UHC) and two to nine bidirectional (BHC) hair cells of five different types of lizard auditory papillae was determined by reconstruction of serial TEM sections. The species studied were Crotaphytus wislizeni (iguanid), Podarcis (Lacerta) sicula and P. muralis (lacertids), Ameiva ameiva (teiid), Coleonyx variegatus (gekkonid), and Mabuya multifasciata (scincid). The main objective was to determine in which species and in which hair cell types the nerve fibers were innervating only one (exclusive innervation), or two or more hair cells (nonexclusive innervation); how many nerve fibers were supplying each hair cell; how many synapses were made by the innervating fibers; and the total number of synapses on each hair cell. In the species studied, efferent innervation was limited to the UHC, and except for the iguanid, C. wislizeni, it was nonexclusive, each fiber supplying two or more hair cells. Afferent innervation varied both with the species and the hair cell types. In Crotaphytus, both the UHC and the BHC were exclusively innervated. In Podarcis and Ameiva, the UHC were innervated exclusively by some fibers but nonexclusively by others (mixed pattern). In Coleonyx, the UHC were exclusively innervated but the BHC were nonexclusively innervated. In Mabuya, both the UHC and BHC were nonexclusively innervated. The number of afferent nerve fibers and the number of afferent synapses were always larger in the UHC than in the BHC. In Ameiva, Podarcis, and Mabuya, groups of bidirectionally oriented hair cells occur in regions of cytologically distinct UHC, and in Ameiva, unidirectionally oriented hair cells occur in cytologically distinct BHC regions.

  11. Preliminary Studies on Differential Expression of Auditory Functional Genes in the Brain After Repeated Blast Exposures

    Science.gov (United States)

    2012-01-01

    Army Medical Research and Materiel Command, Fort Detrick, MD. Abstract—The mechanisms of central auditory processing involved in auditory/vestibular ... transducers in auditory neurons [22–23,45–48]. The frontal cortex and midbrain of blast-exposed mice showed a significant increase in the expression of ... auditory neurons [26]. Other types of molecules involved in calcium regulation, such as calreticulin and calmodulin-dependent protein kinase expression

  12. Tuning Shifts of the Auditory System By Corticocortical and Corticofugal Projections and Conditioning

    OpenAIRE

    Suga, Nobuo

    2011-01-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and non-lemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascendin...

  13. Auditory Memory deficit in Elderly People with Hearing Loss

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2013-06-01

    Full Text Available Introduction: Hearing loss is one of the most common problems in elderly people, and its functional side effects are varied. Because hearing loss is such a common impairment in the elderly, the importance of its possible effects on auditory memory is undeniable. This study focuses on the effects of hearing loss on auditory memory. Materials and Methods: The Dichotic Auditory Memory Test (DVMT) was performed on 47 elderly people of both genders, aged 60 to 80, divided into two groups: the first consisted of 24 elderly people with normal hearing, and the second of 23 elderly people with bilateral, symmetrical, mild-to-moderate high-frequency sensorineural hearing loss due to aging. Results: A significant difference in DVMT performance was observed between elderly people with normal hearing and those with hearing loss (P

  14. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte;

    2013-01-01

    Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG...

  15. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model simulating its function, and speech features are extracted at different levels of each stage. Further processing of the primary auditory spectrum, based on lateral inhibition, is proposed to extract more robust speech features. All these features can be regarded as internal representations of speech stimulation in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations for speech signals.

  16. Temporal resolution in the hearing system and auditory evoked potentials

    DEFF Research Database (Denmark)

    Miller, Lee; Beedholm, Kristian

    2008-01-01

    3pAB5. Temporal resolution in the hearing system and auditory evoked potentials. Kristian Beedholm (beedholm@mail.dk) and Lee A. Miller (lee@biology.sdu.dk), Institute of Biology, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark. A popular type of investigation with auditory evoked potentials (AEP) consists of mapping the dependency of the envelope following response on the AM frequency. This results in what is called the modulation rate transfer function (MRTF). The physiological interpretation of the MRTF is not straightforward, but it is often used as a measure of the ability of the auditory system to encode temporal changes. It is, however, shown here that the MRTF must depend on the waveform of the click-evoked AEP (ceAEP), which does not relate directly to temporal resolution. The theoretical...
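
    The dependence of the MRTF on the click-evoked waveform can be illustrated by linear superposition: convolving an assumed ceAEP waveform with periodic click trains and reading off the response amplitude at each modulation rate. The sketch below does this in Python with a toy damped-oscillation ceAEP; the waveform and rates are assumptions, not the abstract's data.

        import numpy as np

        fs = 10000.0                                   # Hz, assumed sampling rate
        t = np.arange(0, 0.03, 1 / fs)
        ce_aep = np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.004)   # toy click-evoked AEP

        def mrtf(ce_aep, fs, am_rates, dur=1.0):
            """Envelope-following amplitude at each AM rate under linear superposition."""
            n = int(fs * dur)
            out = []
            for fm in am_rates:
                clicks = np.zeros(n)
                clicks[(np.arange(0, dur, 1 / fm) * fs).astype(int)] = 1.0  # one click per cycle
                resp = np.convolve(clicks, ce_aep)[:n]
                spectrum = np.abs(np.fft.rfft(resp)) / n
                freqs = np.fft.rfftfreq(n, 1 / fs)
                out.append(2 * spectrum[np.argmin(np.abs(freqs - fm))])
            return np.array(out)

        print(mrtf(ce_aep, fs, [20, 40, 80, 160, 320]))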

  17. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements help many medical areas. Audiometric exams involving auditory evoked potentials allow better diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. This stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts, and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, time with stimulation (Time ON), and time without stimulation (Time OFF). Future work will implement the remaining parts of the system, including electroencephalogram acquisition and the signal processing needed to estimate and analyze the evoked potential.
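
    For illustration, the Python/NumPy sketch below generates the waveform families named above (sine, amplitude-modulated tone, pip, and burst train with Time ON/Time OFF) on a host computer; parameter names and defaults are assumptions, and the sketch is unrelated to the DSP firmware or C++ interface described in the paper.

        import numpy as np

        def sine(fs, dur, f, amp=1.0):
            t = np.arange(int(fs * dur)) / fs
            return amp * np.sin(2 * np.pi * f * t)

        def am_tone(fs, dur, fc, fm, depth=1.0):
            """Carrier fc with sinusoidal amplitude modulation at rate fm."""
            t = np.arange(int(fs * dur)) / fs
            return (1 + depth * np.sin(2 * np.pi * fm * t)) / (1 + depth) * np.sin(2 * np.pi * fc * t)

        def tone_pip(fs, dur, f, ramp_dur=0.005):
            """Short tone with raised-cosine onset/offset ramps."""
            y = sine(fs, dur, f)
            n_ramp = int(fs * ramp_dur)
            ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
            y[:n_ramp] *= ramp
            y[-n_ramp:] *= ramp[::-1]
            return y

        def burst_train(fs, dur, f, time_on, time_off):
            """Repeated tone bursts separated by silent gaps (Time ON / Time OFF)."""
            cycle = np.concatenate([tone_pip(fs, time_on, f), np.zeros(int(fs * time_off))])
            reps = int(np.ceil(dur * fs / len(cycle)))
            return np.tile(cycle, reps)[: int(fs * dur)]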

  18. Auditory aura in frontal opercular epilepsy: sounds from afar.

    Science.gov (United States)

    Thompson, Stephen A; Alexopoulos, Andreas; Bingaman, William; Gonzalez-Martinez, Jorge; Bulacio, Juan; Nair, Dileep; So, Norman K

    2015-06-01

    Auditory auras are typically considered to localize to the temporal neocortex. Herein, we present two cases of frontal operculum/perisylvian epilepsy with auditory auras. Following a non-invasive evaluation, including ictal SPECT and magnetoencephalography, implicating the frontal operculum, these cases were evaluated with invasive monitoring, using stereoelectroencephalography and subdural (plus depth) electrodes, respectively. Spontaneous and electrically-induced seizures showed an ictal onset involving the frontal operculum in both cases. A typical auditory aura was triggered by stimulation of the frontal operculum in one. Resection of the frontal operculum and subjacent insula rendered one case seizure- (and aura-) free. From a hodological (network) perspective, we discuss these findings with consideration of the perisylvian and insular network(s) interconnecting the frontal and temporal lobes, and revisit the non-invasive data, specifically that of ictal SPECT.

  19. Spontaneous synchronized tapping to an auditory rhythm in a chimpanzee.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2013-01-01

    Humans actively use behavioral synchrony such as dancing and singing when they intend to form affiliative relationships. Such advanced synchronous movement occurs even unconsciously when we hear rhythmically complex music. A foundation for this tendency may be an evolutionary adaptation for group living, but the evolutionary origins of human synchronous activity are unclear. Here we show the first evidence that a member of our closest living relatives, a chimpanzee, spontaneously synchronizes her movement with an auditory rhythm: after training to tap illuminated keys on an electric keyboard, one chimpanzee spontaneously aligned her tapping with the sound when she heard an isochronous distractor sound. This result indicates that sensitivity to, and a tendency toward, synchronous movement with an auditory rhythm exist in chimpanzees, although humans may have expanded it into unique forms of auditory and visual communication during the course of human evolution.

  20. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.

  1. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002) while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
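
    The Bayesian (maximum-likelihood) predictions referred to above follow from inverse-variance weighting of the two cues; the short sketch below computes the predicted cue weights and bimodal threshold from unimodal thresholds. The numeric values are made up for illustration.

        import numpy as np

        def optimal_integration(sigma_a, sigma_v):
            """Ernst & Banks (2002)-style cue combination from unimodal noise levels."""
            w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)    # weight given to audition
            w_v = 1.0 - w_a                                 # weight given to vision
            sigma_av = np.sqrt(sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2))
            return w_a, w_v, sigma_av                       # bimodal threshold < either unimodal

        print(optimal_integration(sigma_a=20.0, sigma_v=60.0))  # temporal task: audition dominates
        print(optimal_integration(sigma_a=8.0, sigma_v=2.0))    # spatial task: vision dominates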

  2. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  3. Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs

    OpenAIRE

    Badcock, Nicholas A.; Petroula Mousikou; Yatin Mahajan; Peter de Lissa; Johnson Thie; Genevieve McArthur

    2013-01-01

    Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system ha...

  4. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion.

    Science.gov (United States)

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information.

  5. Relationship between Selected Auditory and Visual Receptive Skills and Academic Achievement.

    Science.gov (United States)

    Bryant, Lynda Carol

    To observe the relationship of auditory and visual receptive skills to achievement in reading, 80 eight-year-old children were given a diagnostic test battery which examined three receptive skills--attention to stimuli, discrimination, and memory--within three sensory modalities--auditory, visual, and auditory-visual. The control group consisted…

  6. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  7. Plasticity in tinnitus patients : a role for the efferent auditory system?

    NARCIS (Netherlands)

    Geven, Leontien I.; Koeppl, Christine; de Kleine, Emile; van Dijk, Pim

    2014-01-01

    Hypothesis: The role of the corticofugal efferent auditory system in the origin or maintenance of tinnitus is currently mostly overlooked. Changes in the balance between excitation and inhibition after an auditory trauma are likely to play a role in the origin of tinnitus. The efferent auditory syst

  8. Bilateral Mandibular Condylar Fractures with Associated External Auditory Canal Fractures and Otorrhagia

    OpenAIRE

    Dang, David

    2016-01-01

    A rare case of bilateral mandibular condylar fractures associated with bilateral external auditory canal fractures and otorrhagia is reported. The more severe external auditory canal fracture was present on the side of high condylar fracture, and the less severe external auditory canal fracture was ipsilateral to the condylar neck fracture. A mechanism of injury is proposed to account for such findings.

  9. Bilateral Mandibular Condylar Fractures with Associated External Auditory Canal Fractures and Otorrhagia.

    Science.gov (United States)

    Dang, David

    2007-01-01

    A rare case of bilateral mandibular condylar fractures associated with bilateral external auditory canal fractures and otorrhagia is reported. The more severe external auditory canal fracture was present on the side of high condylar fracture, and the less severe external auditory canal fracture was ipsilateral to the condylar neck fracture. A mechanism of injury is proposed to account for such findings.

  10. Older adults' recognition of bodily and auditory expressions of emotion.

    Science.gov (United States)

    Ruffman, Ted; Sullivan, Susan; Dittrich, Winand

    2009-09-01

    This study compared young and older adults' ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions and anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions).

  11. Inversion of Auditory Spectrograms, Traditional Spectrograms, and Other Envelope Representations

    DEFF Research Database (Denmark)

    Decorsière, Remi Julien Blaise; Søndergaard, Peter Lempel; MacDonald, Ewen

    2015-01-01

    Two implementations of this framework are presented for auditory spectrograms, where the filterbank is based on the behavior of the basilar membrane and envelope extraction is modeled on the response of inner hair cells. One implementation is direct while the other is a two-stage approach that is computationally simpler. While both can accurately invert an auditory spectrogram, the two-stage approach performs better on time-domain metrics. The same framework is applied to traditional spectrograms based on the magnitude of the short-time Fourier transform. Inspired by human perception of loudness, a modification...

  12. Auditory streaming of tones of uncertain frequency, level, and duration.

    Science.gov (United States)

    Chang, An-Chieh; Lutfi, Robert A; Lee, Jungmee

    2015-12-01

    Stimulus uncertainty is known to critically affect auditory masking, but its influence on auditory streaming has been largely ignored. Standard ABA-ABA tone sequences were made increasingly uncertain by increasing the sigma of normal distributions from which the frequency, level, or duration of tones were randomly drawn. Consistent with predictions based on a model of masking by Lutfi, Gilbertson, Chang, and Stamas [J. Acoust. Soc. Am. 134, 2160-2170 (2013)], the frequency difference for which A and B tones formed separate streams increased as a linear function of sigma in tone frequency but was much less affected by sigma in tone level or duration.

  13. Formant compensation for auditory feedback with English vowels

    DEFF Research Database (Denmark)

    Mitsuya, Takashi; MacDonald, Ewen N; Munhall, Kevin G;

    2015-01-01

    Past studies have shown that speakers spontaneously adjust their speech acoustics in response to their auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/ as in the word...... to differences in the degree of lingual contact or jaw openness. This may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the current study tested to see...

  14. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent work has shown that Parkinson's disease (PD) patients can benefit greatly from performing rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, all the experiments have been based on the use of a metronome as the auditory stimulus. In this work, Human-Computer Interaction methodologies have been used to design new cues that could improve the long-term engagement of PD patients in these repetitive routines. The study has also been extended to commercial music and musical pieces by analyzing features and characteristics that could improve the engagement of PD patients in rehabilitation tasks.

  15. Auditory brainstem responses predict auditory nerve fiber thresholds and frequency selectivity in hearing impaired chinchillas.

    Science.gov (United States)

    Henry, Kenneth S; Kale, Sushrut; Scheidt, Ryan E; Heinz, Michael G

    2011-10-01

    Noninvasive auditory brainstem responses (ABRs) are commonly used to assess cochlear pathology in both clinical and research environments. In the current study, we evaluated the relationship between ABR characteristics and more direct measures of cochlear function. We recorded ABRs and auditory nerve (AN) single-unit responses in seven chinchillas with noise-induced hearing loss. ABRs were recorded for 1-8 kHz tone burst stimuli both before and several weeks after 4 h of exposure to a 115 dB SPL, 50 Hz band of noise with a center frequency of 2 kHz. Shifts in ABR characteristics (threshold, wave I amplitude, and wave I latency) following hearing loss were compared to AN-fiber tuning curve properties (threshold and frequency selectivity) in the same animals. As expected, noise exposure generally resulted in an increase in ABR threshold and decrease in wave I amplitude at equal SPL. Wave I amplitude at equal sensation level (SL), however, was similar before and after noise exposure. In addition, noise exposure resulted in decreases in ABR wave I latency at equal SL and, to a lesser extent, at equal SPL. The shifts in ABR characteristics were significantly related to AN-fiber tuning curve properties in the same animal at the same frequency. Larger shifts in ABR thresholds and ABR wave I amplitude at equal SPL were associated with greater AN threshold elevation. Larger reductions in ABR wave I latency at equal SL, on the other hand, were associated with greater loss of AN frequency selectivity. This result is consistent with linear systems theory, which predicts shorter time delays for broader peripheral frequency tuning. Taken together with other studies, our results affirm that ABR thresholds and wave I amplitude provide useful estimates of cochlear sensitivity. Furthermore, comparisons of ABR wave I latency to normative data at the same SL may prove useful for detecting and characterizing loss of cochlear frequency selectivity.
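
    The linear-systems argument above (broader peripheral tuning implies shorter delays) can be illustrated with a gammatone-like impulse response, whose envelope t**(order-1) * exp(-2*pi*b*t) peaks at (order-1)/(2*pi*b); the sketch below is a generic illustration with assumed bandwidths, not the authors' analysis.

        import numpy as np

        def envelope_peak_latency(bandwidth_hz, fs=20000.0, order=4):
            """Peak latency of a gammatone-like envelope; broader filters peak earlier."""
            t = np.arange(0, 0.05, 1 / fs)
            env = t ** (order - 1) * np.exp(-2 * np.pi * bandwidth_hz * t)
            return t[np.argmax(env)]

        for b in (100.0, 300.0, 900.0):   # normal vs. progressively broadened tuning (assumed)
            print(b, "Hz bandwidth ->", 1000 * envelope_peak_latency(b), "ms peak latency")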

  16. Auditory Rehabilitation in Rhesus Macaque Monkeys (Macaca mulatta) with Auditory Brainstem Implants

    Institute of Scientific and Technical Information of China (English)

    Zhen-Min Wang; Zhi-Jun Yang; Fu Zhao; Bo Wang; Xing-Chao Wang; Pei-Ran Qu; Pi-Nan Liu

    2015-01-01

    Background: Auditory brainstem implants (ABIs) have been used to treat deafness in patients with neurofibromatosis Type 2 and in nontumor patients. The lack of an appropriate animal model has limited the study of improving hearing rehabilitation with the device. This study aimed to establish an animal model of ABI in the adult rhesus macaque monkey (Macaca mulatta). Methods: Six adult rhesus macaque monkeys (M. mulatta) were included. Under general anesthesia, a multichannel ABI was implanted into the lateral recess of the fourth ventricle through a modified suboccipital-retrosigmoid (RS) approach. The electrical auditory brainstem response (EABR) waves were tested to ensure the optimal implant site. After the operation, the EABR and computed tomography (CT) were used to test and verify the effectiveness via electrophysiology and anatomy, respectively. The subjects underwent behavioral observation for 6 months, and the postoperative EABR was tested every two weeks from the 1st month after implant surgery. Results: The implant surgery lasted an average of 5.2 h, and no monkey died or was sacrificed. In the ABR, the average latencies of peaks I, II, and IV were 1.27, 2.34, and 3.98 ms, respectively. A one-peak EABR wave was elicited during the operation, and one- or two-peak waves were elicited during the postoperative period. The EABR wave latencies appeared to be constant under different stimulus intensities; however, the amplitudes increased as the stimulus increased within a certain range. Conclusions: It is feasible and safe to implant ABIs in rhesus macaque monkeys (M. mulatta) through a modified suboccipital RS approach, and EABR and CT are valid tools for establishing the animal model. In addition, this model should be appropriate for electrophysiological and behavioral studies of rhesus macaque monkeys with ABIs.

  17. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
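    As a concrete reminder of how the discrimination index d' used above is computed, the following minimal sketch converts a hit rate and a false-alarm rate into d'; the rates are invented for illustration and are not the study's data.

```python
# Minimal d' (d-prime) sketch for an old/new recognition task.
# Hit and false-alarm rates below are hypothetical, not the study's data.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., sounds first encountered with a congruent image vs. with no image
print(round(d_prime(0.85, 0.20), 2))  # ~1.88
print(round(d_prime(0.75, 0.20), 2))  # ~1.52
```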

  18. Does the whistling thorn acacia (Acacia drepanolobium) use auditory aposematism to deter mammalian herbivores?

    Science.gov (United States)

    Lev-Yadun, Simcha

    2016-08-02

    Auditory signaling, including aposematism, characterizes many terrestrial animals. Auditory aposematism, in which animals use warning sounds to fend off enemies, is well known in rattlesnakes, for instance. Auditory signaling by plants toward animals and other plants is an emerging area of plant biology that still suffers from a limited amount of solid data. Here I propose that auditory aposematism operates in the African whistling thorn acacia (Acacia drepanolobium = Vachellia drepanolobium). In this tree, the large, hollow thorn bases whistle when the wind blows. This type of aposematism complements the well-known aposematism based on conspicuous thorns and mutualistic ants during the day, and it may operate at night, when the conspicuous thorns are invisible.

  19. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.

  20. Can Children with (Central) Auditory Processing Disorders Ignore Irrelevant Sounds?

    Science.gov (United States)

    Elliott, Emily M.; Bhagat, Shaum P.; Lynn, Sharon D.

    2007-01-01

    This study investigated the effects of irrelevant sounds on the serial recall performance of visually presented digits in a sample of children diagnosed with (central) auditory processing disorders [(C)APD] and age- and span-matched control groups. The irrelevant sounds used were samples of tones and speech. Memory performance was significantly…

  1. Effect of stimulus hemifield on free-field auditory saltation.

    Science.gov (United States)

    Ishigami, Yoko; Phillips, Dennis P

    2008-07-01

    Auditory saltation is the orderly misperception of the spatial location of repetitive click stimuli emitted from two successive locations when the inter-click intervals (ICIs) are sufficiently short. The clicks are perceived as originating not only from the actual source locations, but also from locations between them. In two tasks, the present experiment compared free-field auditory saltation for 90 degrees excursions centered in the frontal, rear, left and right acoustic hemifields, by measuring the ICI at which subjects report 50% illusion strength (subjective task) and the ICI at which subjects could not distinguish real motion from saltation (objective task). A comparison of the saltation illusion for excursions spanning the midline (i.e. for frontal or rear hemifields) with that for stimuli in the lateral hemifields (left or right) revealed that the illusion was weaker for the midline-straddling conditions (i.e. the illusion was restricted to shorter ICIs). This may reflect the contribution of two perceptual channels to the task in the midline conditions (as opposed to one in the lateral hemifield conditions), or the fact that the temporal dynamics of localization differ between the midline and lateral hemifield conditions. A subsidiary comparison of saltation supported in the left and right auditory hemifields, and therefore by the right and left auditory forebrains, revealed no difference.
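    The 50% points reported above come from psychometric-function fits. A generic way to obtain such a point is to fit a logistic function to illusion-strength reports as a function of ICI and read off its midpoint; the sketch below does this on made-up group data and is not the authors' analysis.

```python
# Generic psychometric-function sketch: estimate the ICI giving 50% illusion
# strength by fitting a logistic curve to hypothetical (made-up) data.
import numpy as np
from scipy.optimize import curve_fit

ici_ms = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])    # inter-click intervals
p_illusion = np.array([0.95, 0.85, 0.60, 0.35, 0.15, 0.05])  # proportion "saltation" reports

def logistic(x, x50, slope):
    # Decreasing logistic: the illusion weakens as the ICI grows.
    return 1.0 / (1.0 + np.exp((x - x50) / slope))

(x50, slope), _ = curve_fit(logistic, ici_ms, p_illusion, p0=(70.0, 15.0))
print(f"estimated 50%-illusion ICI: {x50:.1f} ms")
```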

  2. Increased Auditory Startle Reflex in Children with Functional Abdominal Pain

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; Benninga, Marc A.; Koelman, Johannes H. T. M.; Tijssen, Marina A. J.

    2010-01-01

    Objective To test the hypothesis that children with abdominal pain-related functional gastrointestinal disorders have a general hypersensitivity for sensory stimuli. Study design Auditory startle reflexes were assessed in 20 children classified according to Rome III classifications of abdominal pain

  3. Speech Compensation for Time-Scale-Modified Auditory Feedback

    Science.gov (United States)

    Ogane, Rintaro; Honda, Masaaki

    2014-01-01

    Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…

  4. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.

  5. Exploring Auditory Saltation Using the "Reduced-Rabbit" Paradigm

    Science.gov (United States)

    Getzmann, Stephan

    2009-01-01

    Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal…

  6. Unilateral Auditory Neuropathy Caused by Cochlear Nerve Deficiency

    Directory of Open Access Journals (Sweden)

    Cheng Liu

    2012-01-01

    Full Text Available Objective. To explore a possible correlation between cochlear nerve deficiency (CND) and unilateral auditory neuropathy (AN). Methods. From a database of 85 patients with unilateral profound sensorineural hearing loss, eight who presented with evoked otoacoustic emissions (EOAEs) or cochlear microphonics (CM) in the affected ear were diagnosed with unilateral AN. Audiological and radiological records of these eight patients with unilateral AN were retrospectively reviewed. Results. Eight cases were diagnosed as having unilateral AN caused by CND. Seven had a type “A” tympanogram with normal EOAEs in both ears. The other patient had a unilateral type “B” tympanogram and absent OAEs but recordable CM, consistent with middle ear effusion in the affected ear. For all the ears involved in the study, auditory brainstem responses (ABRs) were either absent or present only at the maximum output, and no neural response from the cochlea was revealed on oblique sagittal MRI of the internal auditory canal. Conclusion. Cochlear nerve deficiency can be detected by electrophysiological evidence and may be a significant cause of unilateral AN. Oblique sagittal MRI of the internal auditory canal is recommended for the diagnosis of this disorder.

  7. Merging functional and structural properties of the monkey auditory cortex

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-07-01

    Full Text Available Recent neuroimaging studies in primates aim to define the functional properties of auditory cortical areas, especially areas beyond A1, in order to further our understanding of auditory cortical organization. Precise mapping of functional magnetic resonance imaging (fMRI) results and interpretation of their localization among all the small auditory subfields remain challenging. To facilitate this mapping, we combined information from cortical folding, micro-anatomy, a surface-based atlas, and tonotopic mapping. We used, for the first time, a phase-encoded fMRI design for mapping the monkey's tonotopic organization. From posterior to anterior, we found a high-low-high progression of frequency preference on the superior temporal plane. We show a faithful representation of the fMRI results on a locally flattened surface of the superior temporal plane. In a tentative scheme to delineate core versus belt regions, which share similar tonotopic organizations, we used the ratio of T1-weighted to T2-weighted MR images as a measure of cortical myelination. Our results, presented along a co-registered surface-based atlas, can be interpreted in terms of a current model of the monkey auditory cortex.

  8. The auditory startle response in post-traumatic stress disorder

    NARCIS (Netherlands)

    Siegelaar, S. E.; Olff, M.; Bour, L. J.; Veelo, D.; Zwinderman, A. H.; van Bruggen, G.; de Vries, G. J.; Raabe, S.; Cupido, C.; Koelman, J. H. T. M.; Tijssen, M. A. J.

    2006-01-01

    Post-traumatic stress disorder (PTSD) patients are considered to have excessive EMG responses in the orbicularis oculi (OO) muscle and excessive autonomic responses to startling stimuli. The aim of the present study was to gain more insight into the pattern of the generalized auditory startle reflex

  9. Music Genre Classification using an Auditory Memory Model

    DEFF Research Database (Denmark)

    2011-01-01

    Audio feature estimation is potentially improved by including higher- level models. One such model is the Auditory Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual...

  10. Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.

    Science.gov (United States)

    Wetherby, Amy Miller; And Others

    1981-01-01

    The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)

  11. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.

    Science.gov (United States)

    Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P

    2005-05-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1 the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.

  12. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker's speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
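    For readers unfamiliar with the LTAS measure referred to above, it is simply the spectrum of a signal averaged over its whole duration. A minimal sketch, using Welch's method on two synthetic noise "talkers" rather than the study's stimuli, is given below.

```python
# Sketch: long-term average spectrum (LTAS) of two synthetic signals via
# Welch's method. Signals and parameters are illustrative only.
import numpy as np
from scipy.signal import welch

fs = 16000
t = np.arange(0, 5.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Two noise "talkers" whose energy is concentrated in different bands.
talker_low = np.convolve(rng.standard_normal(t.size), np.ones(32) / 32, mode="same")
talker_high = rng.standard_normal(t.size) - talker_low

for name, sig in [("low-frequency context", talker_low), ("high-frequency context", talker_high)]:
    f, pxx = welch(sig, fs=fs, nperseg=1024)     # pxx approximates the LTAS
    centroid = np.sum(f * pxx) / np.sum(pxx)     # crude one-number summary
    print(f"{name}: spectral centroid ~ {centroid:.0f} Hz")
```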

  13. Synchronization and phonological skills: precise auditory timing hypothesis (PATH

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  14. Persistent fluctuations in stride intervals under fractal auditory stimulation

    NARCIS (Netherlands)

    Marmelat, V.C.M.; Torre, K.; Beek, P.J.; Daffertshofer, A.

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence
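    Persistent versus anti-persistent fluctuations of this kind are commonly quantified with detrended fluctuation analysis (DFA), in which a scaling exponent above 0.5 indicates persistence and below 0.5 anti-persistence. The abstract does not state the authors' exact estimator, so the sketch below is only a generic DFA illustration on synthetic data.

```python
# Generic detrended fluctuation analysis (DFA) sketch, a common way to
# quantify persistent vs. anti-persistent stride-interval fluctuations.
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)        # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Scaling exponent alpha: slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(1000)                 # uncorrelated series: alpha ~ 0.5
print(f"white-noise alpha ~ {dfa_alpha(white):.2f}")
```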

  15. Brainstem auditory evoked potential abnormalities in type 2 diabetes mellitus

    Directory of Open Access Journals (Sweden)

    Sharat Gupta

    2013-01-01

    Full Text Available Background: Diabetes mellitus represents a syndrome complex in which multiple organ systems, including the central nervous system, are affected. Aim: The study was conducted to determine the changes in brainstem auditory evoked potentials in type 2 diabetes mellitus. Materials and Methods: A cross-sectional study was conducted on 126 diabetic males, aged 35-50 years, and 106 age-matched, healthy male volunteers. Brainstem auditory evoked potentials were recorded and the results were analyzed statistically using Student's unpaired t-test. The data consisted of wave latencies I, II, III, IV, V and interpeak latencies I-III, III-V and I-V, separately for both ears. Results: The latency of wave IV was significantly delayed only in the right ear, while the latency of waves III, V and interpeak latencies III-V, I-V showed a significant delay bilaterally in diabetic males. However, no significant difference was found between diabetic and control subjects with regard to the latency of wave IV unilaterally in the left ear and the latencies of waves I, II and interpeak latency I-III bilaterally. Conclusion: Diabetes patients have an early involvement of the central auditory pathway, which can be detected with fair accuracy with auditory evoked potential studies.

  16. Active stream segregation specifically involves the left human auditory cortex.

    Science.gov (United States)

    Deike, Susann; Scheich, Henning; Brechmann, André

    2010-06-14

    An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition in which only one non-separable stream was presented the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: Whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, are only involved in stream segregation based on pitch.

  17. Biological Impact of Music and Software-Based Auditory Training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals--both young and old--encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in…

  18. Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?

    Science.gov (United States)

    Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno

    2012-01-01

    The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…

  19. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.

  20. Changes in otoacoustic emissions during selective auditory and visual attention.

    Science.gov (United States)

    Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis

    2015-05-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing: the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.
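    The within-subject effect sizes mentioned above are typically expressed as Cohen's d for paired differences. A minimal sketch with invented nSFOAE magnitudes, not the study's data, is shown below.

```python
# Within-subject effect size (Cohen's dz for paired differences) of the kind
# that could summarize attention vs. inattention nSFOAE magnitudes.
# All values below are invented for illustration.
import numpy as np

inattention_db = np.array([12.1, 10.4, 11.8, 13.0, 9.7, 12.5])   # hypothetical
attention_db   = np.array([ 9.8,  8.1,  9.5, 10.2, 7.9, 10.1])   # hypothetical

diff = inattention_db - attention_db
d_z = np.mean(diff) / np.std(diff, ddof=1)     # paired (dz) effect size
print(f"mean difference = {np.mean(diff):.2f} dB, Cohen's dz = {d_z:.2f}")
```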

  1. Sprint starts and the minimum auditory reaction time.

    Science.gov (United States)

    Pain, Matthew T G; Hibbs, Angela

    2007-01-01

    The simple auditory reaction time is one of the fastest reaction times and is thought to be rarely less than 100 ms. The current false start criterion in a sprint used by the International Association of Athletics Federations is based on this assumed auditory reaction time of 100 ms. However, there is evidence, both anecdotal and from reflex research, that simple auditory reaction times of less than 100 ms can be achieved. Reaction time in nine athletes performing sprint starts in four conditions was measured using starting blocks instrumented with piezoelectric force transducers in each footplate that were synchronized with the starting signal. Only three conditions were used to calculate reaction times. The pre-motor and pseudo-motor time for two athletes were also measured across 13 muscles using surface electromyography (EMG) synchronized with the rest of the system. Five of the athletes had mean reaction times of less than 100 ms in at least one condition and 20% of all starts in the first two conditions had a reaction time of less than 100 ms. The results demonstrate that the neuromuscular-physiological component of simple auditory reaction times can be under 85 ms and that EMG latencies can be under 60 ms.
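    Reaction time in this kind of setup is essentially an onset-detection problem on the force trace. The sketch below illustrates one simple criterion, a fixed number of baseline standard deviations above the pre-signal force, applied to a synthetic trace; the sampling rate, criterion, and signal are all assumptions rather than the study's instrumentation.

```python
# Sketch of extracting a reaction time from a footplate force trace: time from
# the start signal to the first sample exceeding baseline by 5 baseline SDs.
# Signal and criterion are illustrative, not the instrumented-block algorithm.
import numpy as np

fs = 1000                                      # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)                # start signal at t = 0
rng = np.random.default_rng(2)
force = 5.0 + 0.2 * rng.standard_normal(t.size)        # baseline force (N)
force[t >= 0.120] += 400.0 * (t[t >= 0.120] - 0.120)   # ramp beginning at 120 ms

baseline = force[t < 0.050]
threshold = baseline.mean() + 5.0 * baseline.std()
onset_idx = np.argmax(force > threshold)       # first threshold crossing
print(f"estimated reaction time: {1e3 * t[onset_idx]:.0f} ms")
```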

  2. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  3. Influence of Syllable Structure on L2 Auditory Word Learning

    Science.gov (United States)

    Hamada, Megumi; Goya, Hideki

    2015-01-01

    This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…

  4. Characteristics of Auditory Processing Disorders : A Systematic Review

    NARCIS (Netherlands)

    de Wit, Ellen; Visser-Bochane, Margot I; Steenbergen, Bert; van Dijk, Pim; van der Schans, Cees P; Luinge, Margreet R

    2016-01-01

    Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit

  5. Characteristics of auditory processing disorders: A systematic review

    NARCIS (Netherlands)

    Wit, E. de; Visser-Bochane, M.I.; Steenbergen, B.; Dijk, P. van; Schans, C.P. van der; Luinge, M.R.

    2016-01-01

    Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit

  6. Vestibular receptors contribute to cortical auditory evoked potentials.

    Science.gov (United States)

    Todd, Neil P M; Paillard, Aurore C; Kluk, Karolina; Whittle, Elizabeth; Colebatch, James G

    2014-03-01

    Acoustic sensitivity of the vestibular apparatus is well established, but the contribution of vestibular receptors to the late auditory evoked potentials of cortical origin is unknown. Evoked potentials from 500 Hz tone pips were recorded using 70-channel EEG at several intensities below and above the vestibular acoustic threshold, as determined by vestibular evoked myogenic potentials (VEMPs). In healthy subjects, both mid- and long-latency auditory evoked potentials (AEPs), consisting of Na, Pa, N1, and P2 waves, were observed in the sub-threshold conditions. However, in passing through the vestibular threshold, systematic changes were observed in the morphology of the potentials and in the intensity dependence of their amplitude and latency. These changes were absent in a patient without functioning vestibular receptors. In particular, for the healthy subjects there was a fronto-central negativity, which appeared at about 42 ms, referred to as an N42, prior to the AEP N1. Source analysis of both the N42 and N1 indicated involvement of cingulate cortex, as well as bilateral superior temporal cortex. Our findings are best explained by vestibular receptors contributing to what were hitherto considered purely auditory evoked potentials, and in addition they tentatively identify a new component that appears to be primarily of vestibular origin.

  7. Implications of blast exposure for central auditory function: A review

    Directory of Open Access Journals (Sweden)

    Frederick J. Gallun, PhD

    2012-10-01

    Full Text Available Auditory system functions, from peripheral sensitivity to central processing capacities, are all at risk from a blast event. Accurate encoding of auditory patterns in time, frequency, and space are required for a clear understanding of speech and accurate localization of sound sources in environments with background noise, multiple sound sources, and/or reverberation. Further work is needed to refine the battery of clinical tests sensitive to the sorts of central auditory dysfunction observed in individuals with blast exposure. Treatment options include low-gain hearing aids, remote-microphone technology, and auditory-training regimens, but clinical evidence does not yet exist for recommending one or more of these options. As this population ages, the natural aging process and other potential brain injuries (such as stroke and blunt trauma may combine with blast-related brain changes to produce a population for which the current clinical diagnostic and treatment tools may prove inadequate. It is important to maintain an updated understanding of the scope of the issues present in this population and to continue to identify those solutions that can provide measurable improvements in the lives of Veterans who have been exposed to high-intensity blasts during the course of their military service.

  8. Listener Agreement for Auditory-Perceptual Ratings of Dysarthria

    Science.gov (United States)

    Bunton, Kate; Kent, Raymond D.; Duffy, Joseph R.; Rosenbek, John C.; Kent, Jane F.

    2007-01-01

    Purpose: Darley, Aronson, and Brown (1969a, 1969b) detailed methods and results of auditory-perceptual assessment for speakers with dysarthrias of varying etiology. They reported adequate listener reliability for use of the rating system as a tool for differential diagnosis, but several more recent studies have raised concerns about listener…

  9. Auditory and visual capture during focused visual attention

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.W.; Theeuwes, J.

    2009-01-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have

  10. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  11. The Auditory Verbal Learning Test (Rey AVLT): An Arabic Version

    Science.gov (United States)

    Sharoni, Varda; Natur, Nazeh

    2014-01-01

    The goals of this study were to adapt the Rey Auditory Verbal Learning Test (AVLT) into Arabic, to compare recall functioning among age groups (6:0 to 17:11), and to compare gender differences on various memory dimensions (immediate and delayed recall, learning rate, recognition, proactive interferences, and retroactive interferences). This…

  12. Abnormal connectivity between attentional, language and auditory networks in schizophrenia

    NARCIS (Netherlands)

    Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre

    2012-01-01

    Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This includes not only regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of auditory

  13. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we found a significant decrease in gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AHs. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, we present an integrative model to explain the neurobiological basis of this psychotic symptom.

  14. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  15. Stuttering Inhibition via Altered Auditory Feedback during Scripted Telephone Conversations

    Science.gov (United States)

    Hudock, Daniel; Kalinowski, Joseph

    2014-01-01

    Background: Overt stuttering is inhibited by approximately 80% when people who stutter read aloud as they hear an altered form of their speech feedback to them. However, levels of stuttering inhibition vary from 60% to 100% depending on speaking situation and signal presentation. For example, binaural presentations of delayed auditory feedback…

  16. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  17. Central projections of auditory receptor neurons of crickets.

    Science.gov (United States)

    Imaizumi, Kazuo; Pollack, Gerald S

    2005-12-19

    We describe the central projections of physiologically characterized auditory receptor neurons of crickets as revealed by confocal microscopy. Receptors tuned to ultrasonic frequencies (similar to those produced by echolocating, insectivorous bats), to a mid-range of frequencies, and a subset of those tuned to low, cricket-like frequencies have similar projections, terminating medially within the auditory neuropile. Quantitative analysis shows that despite the general similarity of these projections they are tonotopic, with receptors tuned to lower frequencies terminating more medially. Another subset of cricket-song-tuned receptors projects more laterally and posteriorly than the other types. Double-fills of receptors and identified interneurons show that the three medially projecting receptor types are anatomically well positioned to provide monosynaptic input to interneurons that relay auditory information to the brain and to interneurons that modify this ascending information. The more laterally and posteriorly branching receptor type may not interact directly with this ascending pathway, but is well positioned to provide direct input to an interneuron that carries auditory information to more posterior ganglia. These results suggest that information about cricket song is segregated into functionally different pathways as early as the level of receptor neurons. Ultrasound-tuned and mid-frequency tuned receptors have approximately twice as many varicosities, which are sites of transmitter release, per receptor as either anatomical type of cricket-song-tuned receptor. This may compensate in part for the numerical under-representation of these receptor types.

  18. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M

    2014-01-01

    Fast reaction times and a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration, and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset latency and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles occurred earlier for the SS than for the VS and AS conditions. These findings point to a specific enhancing effect of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training.

  19. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans showed equivocal results regarding the lateralization of processing. Whereas experiments with a moderate background found evidence for right lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left lateralized processing. With the present fMRI study, we compared the task dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with a loud background scanner noise and a fast low-angle shot (FLASH) sequence with a soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurement, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on the activity in the left auditory cortex. This means a massive reduction in background scanner noise still leads to a significant left lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise in contrast to studies with softer background cannot be fully explained by the MRI background noise.
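    The logic of the contralateral noise procedure is to compare, for each hemisphere, activity with and without added contralateral noise; the hemisphere showing the larger noise-related increase is taken to be more strongly engaged by the task. A toy sketch with invented activation values, not the study's data, is given below.

```python
# Toy sketch of the contralateral noise logic: the hemisphere whose activity
# increases more when contralateral noise is added is taken to be more
# strongly task-engaged. All numbers are invented for illustration.
left_ac  = {"no_noise": 0.40, "with_noise": 0.70}   # hypothetical % signal change
right_ac = {"no_noise": 0.42, "with_noise": 0.50}   # hypothetical % signal change

effect_left  = left_ac["with_noise"]  - left_ac["no_noise"]
effect_right = right_ac["with_noise"] - right_ac["no_noise"]
asymmetry = (effect_left - effect_right) / (effect_left + effect_right)

print(f"noise effect L = {effect_left:.2f}, R = {effect_right:.2f}")
print(f"asymmetry index = {asymmetry:.2f}  (> 0 suggests left lateralization)")
```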

  20. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  1. Transcranial direct current stimulation as a treatment for auditory hallucinations

    NARCIS (Netherlands)

    Koops, Sanne; van den Brink, Hilde; Sommer, Iris E C

    2015-01-01

    Auditory hallucinations (AH) are a symptom of several psychiatric disorders, such as schizophrenia. In a significant minority of patients, AH are resistant to antipsychotic medication. Alternative treatment options for this medication resistant group are scarce and most of them focus on coping with

  2. Formation of the avian nucleus magnocellularis from the auditory anlage.

    Science.gov (United States)

    Hendricks, Susan J; Rubel, Edwin W; Nishi, Rae

    2006-10-01

    In the avian auditory system, the neural network for computing the localization of sound in space begins with bilateral innervation of nucleus laminaris (NL) by nucleus magnocellularis (NM) neurons. We used antibodies against the neural specific markers Hu C/D, neurofilament, and SV2 together with retrograde fluorescent dextran labeling from the contralateral hindbrain to identify NM neurons within the anlage and follow their development. NM neurons could be identified by retrograde labeling as early as embryonic day (E) 6. While the auditory anlage organized itself into NM and NL in a rostral-to-caudal fashion between E6 and E8, labeled NM neurons were visible throughout the extent of the anlage at E6. By observing the pattern of neuronal rearrangements together with the pattern of contralaterally projecting NM fibers, we could identify NL in the ventral anlage. Ipsilateral NM fibers contacted the developing NL at E8, well after NM collaterals had projected contralaterally. Furthermore, the formation of ipsilateral connections between NM and NL neurons appeared to coincide with the arrival of VIIIth nerve fibers in NM. By E10, immunoreactivity for SV2 was heavily concentrated in the dorsal and ventral neuropils of NL. Thus, extensive pathfinding and morphological rearrangement of central auditory nuclei occurs well before the arrival of cochlear afferents. Our results suggest that NM neurons may play a central role in formation of tonotopic connections in the auditory system.

  3. The impact of severity of hypertension on auditory brainstem responses

    Directory of Open Access Journals (Sweden)

    Gurdev Lal Goyal

    2014-07-01

    Full Text Available Background: Auditory brainstem response is an objective electrophysiological method for assessing the auditory pathways from the auditory nerve to the brainstem. The aim of this study was to correlate and assess the degree of involvement of peripheral and central regions of the brainstem auditory pathways with increasing severity of hypertension among patients with essential hypertension. Method: This study was conducted on 50 healthy age- and sex-matched controls (Group I) and 50 hypertensive patients (Group II). The latter group was further sub-divided into Group IIa (Grade 1 hypertension), Group IIb (Grade 2 hypertension), and Group IIc (Grade 3 hypertension), as per WHO guidelines. These responses/potentials were recorded using electroencephalogram electrodes on a root-mean-square electromyography EP MARC II (PC-based) machine, and the data were statistically compared between the various groups by one-way ANOVA. The parameters used for analysis were the absolute latencies of Waves I through V, interpeak latencies (IPLs), and the amplitude ratio of Wave V/I. Result: The absolute latency of Wave I was observed to be significantly increased in Group IIa and IIb hypertensives, while Wave V absolute latency was highly significantly prolonged in Group IIb and IIc, as compared to the normal control group. All the hypertensives, that is, Group IIa, IIb, and IIc patients, were found to have highly significantly prolonged III-V IPLs as compared to normal healthy controls. Further, intergroup comparison among hypertensive patients revealed a significant prolongation of Wave V absolute latency and III-V IPL in Group IIb and IIc patients as compared to Group IIa patients. These findings suggest a sensory deficit along with synaptic delays across the auditory pathways in all the hypertensives, the deficit more markedly affecting auditory processing time in the pons-to-midbrain (IPL III-V) region of the auditory pathways among Grade 2 and 3 hypertensives.
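    The between-group comparison described above is a one-way ANOVA across the control and hypertension-grade groups. A minimal sketch with invented Wave V latencies, not the study's data, follows.

```python
# One-way ANOVA sketch comparing, e.g., Wave V absolute latencies across the
# control and hypertension-grade groups. Latency values are made up.
from scipy.stats import f_oneway

control = [5.5, 5.6, 5.4, 5.7, 5.5]   # hypothetical Wave V latencies (ms)
grade_1 = [5.7, 5.8, 5.6, 5.9, 5.7]
grade_2 = [5.9, 6.0, 5.8, 6.1, 6.0]
grade_3 = [6.1, 6.2, 6.0, 6.3, 6.1]

f_stat, p_value = f_oneway(control, grade_1, grade_2, grade_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```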

  4. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise eROCHETTE

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  5. The effect of visual and auditory cues on seat preference in an opera theater.

    Science.gov (United States)

    Jeon, Jin Yong; Kim, Yong Hee; Cabrera, Densil; Bassett, John

    2008-06-01

    Opera performance conveys both visual and auditory information to an audience, and so opera theaters should be evaluated in both domains. This study investigates the effect of static visual and auditory cues on seat preference in an opera theater. Acoustical parameters were measured and visibility was analyzed for nine seats. Subjective assessments of visual-only, auditory-only, and auditory-visual preferences for these seat positions were made through paired-comparison tests. For the visual-only and auditory-only subjective evaluations, preference judgment tests on a rating scale were also employed. Visual stimuli were based on still photographs, and auditory stimuli were based on binaural impulse responses convolved with a solo tenor recording. For the visual-only experiment, preference is predicted well by measurements related to the angle of seats from the theater midline at the center of the stage, the size of the photographed stage view, the visual obstruction, and the distance from the stage. Sound pressure level was the dominant predictor of auditory preference in the auditory-only experiment. In the cross-modal experiments, both auditory and visual preferences were shown to contribute to overall impression, but auditory cues were more influential than the static visual cues. The results show that both positive visual-only and positive auditory-only evaluations contribute positively to assessments of seat quality.
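    Paired-comparison preference data of this kind are often summarized with Thurstone Case V scaling, in which each item's scale value is the mean of the z-transformed proportions of trials on which it was preferred. The sketch below uses an invented 4-seat proportion matrix and is not the authors' analysis.

```python
# Generic Thurstone Case V sketch for paired-comparison preference data:
# scale values are the row means of z-transformed "preferred over" proportions.
# The proportion matrix below is invented (4 seats instead of the study's 9).
import numpy as np
from scipy.stats import norm

# p[i, j] = proportion of trials in which seat i was preferred over seat j.
p = np.array([
    [0.50, 0.70, 0.80, 0.90],
    [0.30, 0.50, 0.65, 0.75],
    [0.20, 0.35, 0.50, 0.60],
    [0.10, 0.25, 0.40, 0.50],
])

z = norm.ppf(np.clip(p, 0.01, 0.99))   # clip to avoid infinite z-scores
scale = z.mean(axis=1)                 # Thurstone Case V scale values
for seat, value in enumerate(scale, start=1):
    print(f"seat {seat}: preference scale value {value:+.2f}")
```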

  6. Auditory distraction transmitted by a cochlear implant alters allocation of attentional resources

    Directory of Open Access Journals (Sweden)

    Mareike eFinke

    2015-03-01

    Full Text Available Cochlear implants (CIs) are auditory prostheses which restore hearing via electrical stimulation of the auditory nerve. The successful adaptation of auditory cognition to the CI input depends to a substantial degree on individual factors. We pursued an electrophysiological approach towards an analysis of cortical responses that reflect perceptual processing stages and higher-level responses to CI input. Performance and event-related potentials on two cross-modal discrimination-following-distraction tasks from CI users and normal-hearing (NH) individuals were compared. The visual-auditory distraction task combined visual distraction with following auditory discrimination performance. Here, we observed similar cortical responses to visual distractors (Novelty-N2) and slowed, less accurate auditory discrimination performance in CI users when compared to NH individuals. Conversely, the auditory-visual distraction task was used to combine auditory distraction with visual discrimination performance. In this task we found attenuated cortical responses to auditory distractors (Novelty-P3), slowed visual discrimination performance, and attenuated cortical P3-responses to visual targets in CI users compared to NH individuals. These results suggest that CI users process auditory distractors differently than NH individuals and that the presence of auditory CI input has an adverse effect on the processing of visual targets and the visual discrimination ability in implanted individuals. We propose that this attenuation of the visual modality occurs through the allocation of neural resources to the CI input.

  7. Toward a neurobiology of auditory object perception: What can we learn from the songbird forebrain?

    Institute of Scientific and Technical Information of China (English)

    Kai LU; David S. VICARIO

    2011-01-01

    In the acoustic world, no sounds occur entirely in isolation; they always reach the ears in combination with other sounds. How any given sound is discriminated and perceived as an independent auditory object is a challenging question in neuroscience. Although our knowledge of neural processing in the auditory pathway has expanded over the years, no good theory exists to explain how perception of auditory objects is achieved. A growing body of evidence suggests that the selectivity of neurons in the auditory forebrain is under dynamic modulation, and this plasticity may contribute to auditory object perception. We propose that stimulus-specific adaptation in the auditory forebrain of the songbird (and perhaps in other systems) may play an important role in modulating sensitivity in a way that aids discrimination, and thus can potentially contribute to auditory object perception [Current Zoology 57 (6): 671-683, 2011].

  8. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    Science.gov (United States)

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  9. Relationship between Auditory and Cognitive Abilities in Older Adults.

    Directory of Open Access Journals (Sweden)

    Stanley Sheft

    Full Text Available The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer's Disease Center, participants were a community-dwelling cohort of older adults (range 63-98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects, with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariate race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability with cognitive status using auditory metrics that avoided the confounding effect of speech materials.
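
    A minimal sketch of the kind of regression analysis reported above: a global cognition composite modeled from spectral-pattern discrimination thresholds plus demographic covariates. The data file and variable names below are hypothetical placeholders, not the study's actual dataset.

        # Hedged sketch: regress a global cognition composite on spectral-pattern
        # discrimination thresholds plus demographic covariates.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("auditory_cognition.csv")   # hypothetical data file

        model = smf.ols(
            "global_cognition ~ static_spectral_thr + dynamic_spectral_thr"
            " + age + years_education + C(gender) + C(race)",
            data=df,
        ).fit()

        print(model.summary())    # coefficients and p-values for each predictor
        print(model.rsquared)     # variance in cognition accounted for by the model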

  10. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze auditory processing test results in children who suffered from otitis media in their first five years of life, considering their age. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public schools without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of life and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment with auditory processing tests. RESULTS: The left ear showed significantly worse performance when compared to the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance when compared to the control group in the dichotic digits and gaps-in-noise tests. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests compared with students from private schools.

  11. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  12. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional
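
    The MTF idea summarized above can be illustrated with a short sketch: treat a log-frequency spectrogram as a two-dimensional image, compute its modulation spectrum, and keep only the slow spectro-temporal modulations (the roughly 16 Hz and 2 cycles/octave cutoffs quoted above). This is a simplified stand-in rather than the full two-stage auditory model; the plain STFT front end and all other parameter choices are assumptions.

        # Sketch of spectro-temporal modulation filtering: 2-D FFT of a
        # log-frequency spectrogram (temporal modulation in Hz x spectral
        # modulation in cycles/octave), lowpass MTF, inverse transform.
        import numpy as np
        from scipy.signal import stft

        def mtf_lowpass(x, fs, frame_hz=100.0, chan_per_oct=12.0,
                        rate_cut=16.0, scale_cut=2.0):
            # Crude auditory-like spectrogram: STFT magnitude on a log-frequency grid.
            f, t, Z = stft(x, fs=fs, nperseg=int(fs / frame_hz) * 2,
                           noverlap=int(fs / frame_hz))
            fmin = 100.0
            log_f = fmin * 2.0 ** (np.arange(0, 6, 1.0 / chan_per_oct))  # 6 octaves
            S = np.array([np.abs(Z[np.argmin(np.abs(f - fc)), :]) for fc in log_f])

            # 2-D modulation spectrum: axis 0 = cyc/oct (scale), axis 1 = Hz (rate).
            M = np.fft.fft2(np.log(S + 1e-9))
            scales = np.fft.fftfreq(S.shape[0], d=1.0 / chan_per_oct)
            rates = np.fft.fftfreq(S.shape[1], d=t[1] - t[0])
            keep = (np.abs(scales)[:, None] <= scale_cut) & \
                   (np.abs(rates)[None, :] <= rate_cut)

            # Keep only the slow spectro-temporal modulations and invert.
            return np.real(np.fft.ifft2(M * keep)), log_f, t

        fs = 16000
        tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1-s test signal
        S_smooth, log_f, t = mtf_lowpass(tone, fs)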

  13. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu Yue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and that these were constrained by sensory modality, the valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  14. Behavioral and EEG evidence for auditory memory suppression

    Directory of Open Access Journals (Sweden)

    Maya Elizabeth Cano

    2016-03-01

    Full Text Available The neural basis of motivated forgetting using the Think/No-Think (TNT) paradigm is receiving increased attention, with a particular focus on the mechanisms that enable memory suppression. However, most TNT studies have been limited to the visual domain. To assess whether and to what extent direct memory suppression extends across sensory modalities, we examined behavioral and electroencephalographic (EEG) effects of auditory Think/No-Think in healthy young adults by adapting the TNT paradigm to the auditory modality. Behaviorally, suppression of memory strength was indexed by prolonged response times during the retrieval of subsequently remembered No-Think words. We examined task-related EEG activity of both attempted memory retrieval and inhibition of a previously learned target word during the presentation of its paired associate. Event-related EEG responses revealed two main findings: 1) a centralized Think > No-Think positivity during auditory word presentation (from approximately 0-500 ms), and 2) a sustained Think positivity over parietal electrodes beginning at approximately 600 ms, reflecting the memory retrieval effect, which was significantly reduced for No-Think words. In addition, word-locked theta (4-8 Hz) power was initially greater for No-Think compared to Think during auditory word presentation over fronto-central electrodes. This was followed by a posterior theta increase indexing successful memory retrieval in the Think condition. The observed event-related potential pattern and theta power analysis are similar to those reported in visual Think/No-Think studies and support a modality non-specific mechanism for memory inhibition. The EEG data also provide evidence supporting differing roles and time courses of frontal and parietal regions in the flexible control of auditory memory.
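
    One common way to obtain the word-locked theta (4-8 Hz) power contrast described above is to bandpass each trial and average the squared Hilbert envelope within condition. The sketch below assumes hypothetical array shapes and sampling rate and is not the authors' actual analysis pipeline.

        # Hedged sketch of a theta band-power contrast (No-Think minus Think).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                   # EEG sampling rate (Hz), assumed
        b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)

        def theta_power(trials):
            """trials: array (n_trials, n_samples) of word-locked EEG, one channel."""
            filtered = filtfilt(b, a, trials, axis=-1)          # isolate theta band
            power = np.abs(hilbert(filtered, axis=-1)) ** 2     # instantaneous power
            return power.mean(axis=0)                           # average across trials

        rng = np.random.default_rng(0)
        think = rng.standard_normal((60, fs))      # placeholder single-trial data
        no_think = rng.standard_normal((60, fs))
        theta_contrast = theta_power(no_think) - theta_power(think)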

  15. Clinical presentation and audiologic findings in pediatric auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Navneet Gupta

    2014-01-01

    Full Text Available Aim: To determine the audiologic findings, related etiologies, and their effects in pediatric patients having hearing deficits that are most likely due to a neuropathy of the eighth nerve. Study Design: Retrospective, based on a neonatal hearing screening programme. Subjects and Methods: Subjects included 30 children aged from 0 to 12 years, who were tested with pure tone audiometry, behavioral observation audiometry, free-field audiometry, speech audiometry, auditory brainstem response, and click-evoked otoacoustic emissions. Results: Pure tone and free-field testing revealed 40 ears (66.67%, n = 60) with sloping sensorineural hearing loss, while 20 ears (33.3%, n = 60) had a flat configuration. Of these, 18 (60%, n = 30) subjects showed a bilaterally similar configuration (either bilateral sloping or flat audiograms), and the remaining 12 (40%, n = 30) subjects showed bilaterally different patterns. Ten (33.3%, n = 30) children demonstrated fair to poor word discrimination scores and another 2 (6.67%, n = 30) had fair to good word discrimination. For the remaining 18 (60%, n = 30) children, speech testing could not be performed because of age limits and poor speech and language development. Out of 30 subjects, 28 (93.3%, n = 30) showed normal distortion product otoacoustic emissions and 2 (6.67%, n = 30) subjects showed absent emissions. Conclusions: All thirty children demonstrated absent or markedly abnormal brainstem auditory evoked potentials, which, together with the largely normal otoacoustic emissions, suggests that cochlear outer hair cell function is normal and that the lesion is located at the eighth nerve or beyond. Auditory neuropathy is generally associated with different etiologies, and it is difficult to diagnose with a single audiological test; a sufficient test battery is required for complete assessment and diagnosis of auditory neuropathy.

  16. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  17. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing.

  18. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  19. Two distinct auditory-motor circuits for monitoring speech production as revealed by content-specific suppression of auditory cortex.

    Science.gov (United States)

    Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet

    2015-06-01

    Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech.

  20. Quantitative map of multiple auditory cortical regions with a stereotaxic fine-scale atlas of the mouse brain

    OpenAIRE

    Hiroaki Tsukano; Masao Horie; Ryuichi Hishida; Kuniyuki Takahashi; Hirohide Takebayashi; Katsuei Shibuki

    2016-01-01

    Optical imaging studies have recently revealed the presence of multiple auditory cortical regions in the mouse brain. We have previously demonstrated, using flavoprotein fluorescence imaging, at least six regions in the mouse auditory cortex, including the anterior auditory field (AAF), primary auditory cortex (AI), the secondary auditory field (AII), dorsoanterior field (DA), dorsomedial field (DM), and dorsoposterior field (DP). While multiple regions in the visual cortex and somatosensory ...

  1. Horseradish peroxidase dye tracing and embryonic statoacoustic ganglion cell transplantation in the rat auditory nerve trunk.

    Science.gov (United States)

    Palmgren, Björn; Jin, Zhe; Jiao, Yu; Kostyszyn, Beata; Olivius, Petri

    2011-03-04

    At present, severe damage to hair cells and sensory neurons in the inner ear results in non-treatable auditory disorders. Cell implantation is a potential treatment for various neurological disorders and has already been used in clinical practice. In the inner ear, delivery of therapeutic substances including neurotrophic factors and stem cells provides strategies that in the future may ameliorate hearing impairment or restore hearing. In order to describe a surgical auditory nerve trunk approach, in the present paper we injected the neuronal tracer horseradish peroxidase (HRP) into the central part of the nerve by an intracranial approach. We further evaluated the applicability of the present approach by implanting statoacoustic ganglion (SAG) cells into the same location of the auditory nerve in normal hearing rats or animals deafened by application of β-bungarotoxin to the round window niche. The HRP results illustrate labeling in the cochlear nucleus in the brain stem as well as peripherally in the spiral ganglion neurons in the cochlea. The transplanted SAGs were observed within the auditory nerve trunk but no more peripherally than the CNS-PNS transitional zone. Interestingly, the auditory nerve injection did not impair auditory function, as evidenced by the auditory brainstem response. The present findings illustrate that an auditory nerve trunk approach may well access the entire auditory nerve and does not compromise auditory function. We suggest that such an approach might constitute a suitable route for cell transplantation into this sensory cranial nerve.

  2. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities to analyze and identify signals. It would therefore be significant to build an auditory model based on the mechanisms of the human auditory system, which may improve the effectiveness of mechanical signal analysis and enrich the methods for extracting features of mechanical faults. However, existing methods are all based on explicit mathematical or physical formulations and have shortcomings in distinguishing different faults, in stability, and in suppressing disturbance noise. To improve the performance of feature extraction, an auditory model, the early auditory (EA) model, is introduced for the first time. This model transforms a time-domain signal into an auditory spectrum via bandpass filtering, nonlinear compression, and lateral inhibition, simulating the principles of the human auditory system. The EA model is developed with a Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameters of the inner hair cell model within the EA model. The performance of the EA model is evaluated through experiments on four rotor faults, including misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, the output of the EA model, can effectively distinguish different faults with satisfactory stability and can suppress disturbance noise. It is therefore feasible to apply the auditory model, as a new and effective method, to feature extraction for mechanical fault diagnosis.
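
    A minimal sketch of an EA-style front end along the lines described above (gammatone bandpass filtering, a compressive nonlinearity, a crude lateral-inhibition stage, and short-term integration). The cube-root compression, channel spacing, and other parameters are illustrative assumptions rather than the values proposed in the paper.

        # Hedged sketch of an early-auditory-style spectrum for a vibration signal.
        import numpy as np
        from scipy.signal import gammatone, lfilter

        def early_auditory_spectrum(x, fs, n_chan=32, fmin=50.0, fmax=3000.0,
                                    frame_len=0.02):
            centers = np.geomspace(fmin, fmax, n_chan)      # filterbank center freqs
            frame = int(frame_len * fs)
            n_frames = len(x) // frame
            spec = np.zeros((n_chan, n_frames))
            for i, fc in enumerate(centers):
                b, a = gammatone(fc, "iir", fs=fs)          # one cochlear channel
                y = lfilter(b, a, x)
                y = np.cbrt(np.abs(y))                      # nonlinear compression
                env = y[: n_frames * frame].reshape(n_frames, frame).mean(axis=1)
                spec[i] = env                               # short-term integration
            # First-order difference across channels + half-wave rectification as a
            # crude stand-in for lateral inhibition.
            spec = np.maximum(np.diff(spec, axis=0, prepend=spec[:1]), 0.0)
            return centers, spec                            # the "auditory spectrum"

        fs = 8000
        t = np.arange(fs) / fs
        rotor = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
        centers, spec = early_auditory_spectrum(rotor, fs)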

  3. Tuning shifts of the auditory system by corticocortical and corticofugal projections and conditioning.

    Science.gov (United States)

    Suga, Nobuo

    2012-02-01

    The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei are different from each other in response properties and neural connectivities. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation - comparable to repetitive tonal stimulation - of the lemniscal system evokes three major types of changes in physiological properties, such as the tuning of cortical and subcortical auditory neurons to specific values of acoustic parameters, through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a "differential" gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning elicit tone-specific and nonspecific plastic changes, respectively. The lemniscal, corticofugal and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulation of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning.

  4. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the adhesion of the participant to the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performances observed in the present study are in line with those reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.

  5. Areas of cat auditory cortex as defined by neurofilament proteins expressing SMI-32.

    Science.gov (United States)

    Mellott, Jeffrey G; Van der Gucht, Estel; Lee, Charles C; Carrasco, Andres; Winer, Jeffery A; Lomber, Stephen G

    2010-08-01

    The monoclonal antibody SMI-32 was used to characterize and distinguish individual areas of cat auditory cortex. SMI-32 labels non-phosphorylated epitopes on the high- and medium-molecular weight subunits of neurofilament proteins in cortical pyramidal cells and dendritic trees with the most robust immunoreactivity in layers III and V. Auditory areas with unique patterns of immunoreactivity included: primary auditory cortex (AI), second auditory cortex (AII), dorsal zone (DZ), posterior auditory field (PAF), ventral posterior auditory field (VPAF), ventral auditory field (VAF), temporal cortex (T), insular cortex (IN), anterior auditory field (AAF), and the auditory field of the anterior ectosylvian sulcus (fAES). Unique patterns of labeling intensity, soma shape, soma size, layers of immunoreactivity, laminar distribution of dendritic arbors, and labeled cell density were identified. Features that were consistent in all areas included: layers I and IV neurons are immunonegative; nearly all immunoreactive cells are pyramidal; and immunoreactive neurons are always present in layer V. To quantify the results, the numbers of labeled cells and dendrites, as well as cell diameter, were collected and used as tools for identifying and differentiating areas. Quantification of the labeling patterns also established profiles for ten auditory areas/layers and their degree of immunoreactivity. Areal borders delineated by SMI-32 were highly correlated with tonotopically-defined areal boundaries. Overall, SMI-32 immunoreactivity can delineate ten areas of cat auditory cortex and demarcate topographic borders. The ability to distinguish auditory areas with SMI-32 is valuable for the identification of auditory cerebral areas in electrophysiological, anatomical, and/or behavioral investigations.

  6. Auditory processing in children: a study of the effects of age, hearing impairment and language impairment on auditory abilities in children

    NARCIS (Netherlands)

    Stollman, Martin Hubertus Petrus

    2003-01-01

    In this thesis we tested the hypotheses that the auditory system of children continues to mature until at least the age of 12 years and that the development of auditory processing in hearing-impaired and language-impaired children is often delayed or even genuinely disturbed. Data from a longitudin

  7. Neural encoding of auditory discrimination in ventral premotor cortex

    Science.gov (United States)

    Lemus, Luis; Hernández, Adrián; Romo, Ranulfo

    2009-01-01

    Monkeys have the capacity to accurately discriminate the difference between two acoustic flutter stimuli. In this task, monkeys must compare information about the second stimulus to the memory trace of the first stimulus, and must postpone the decision report until a sensory cue triggers the beginning of the decision motor report. The neuronal processes associated with the different components of this task have been investigated in the primary auditory cortex (A1), but A1 seems exclusively associated with the sensory and not with the working memory and decision components of this task. Here, we show that ventral premotor cortex (VPC) neurons reflect in their activities the current and remembered acoustic stimulus, their comparison, and the result of the animal's decision report. These results provide evidence that the neural dynamics of VPC is involved in the processing steps that link sensation and decision-making during auditory discrimination. PMID:19667191

  8. How modality specific is processing of auditory and visual rhythms?

    Science.gov (United States)

    Pasinski, Amanda C; McAuley, J Devin; Snyder, Joel S

    2016-02-01

    The present study used ERPs to test the extent to which temporal processing is modality specific or modality general. Participants were presented with auditory and visual temporal patterns that consisted of initial two- or three-event beginning patterns. This delineated a constant standard time interval, followed by a two-event ending pattern delineating a variable test interval. Participants judged whether they perceived the pattern as a whole to be speeding up or slowing down. The contingent negative variation (CNV), a negative potential reflecting temporal expectancy, showed a larger amplitude for the auditory modality compared to the visual modality but a high degree of similarity in scalp voltage patterns across modalities, suggesting that the CNV arises from modality-general processes. A late, memory-dependent positive component (P3) also showed similar patterns across modalities.

  9. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues … that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners … and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones

  10. Probability and Surprisal in Auditory Comprehension of Morphologically Complex Words

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Baayen, R. Harald

    2012-01-01

    Two auditory lexical decision experiments document for morphologically complex words two points at which the probability of a target word given the evidence shifts dramatically. The first point is reached when morphologically unrelated competitors are no longer compatible with the evidence. Adapting terminology from Marslen-Wilson (1984), we refer to this as the word's initial uniqueness point (UP1). The second point is the complex uniqueness point (CUP) introduced by Balling and Baayen (2008), at which morphologically related competitors become incompatible with the input. Later initial … in the course of the word co-determines response latencies. The presence of effects of surprisal, both at the initial uniqueness point of complex words, and cumulatively throughout the word, challenges the Shortlist B model of Norris and McQueen (2008), and suggests that a Bayesian approach to auditory

  11. Auditory Stimulation Dishabituates Olfactory Responses via Noradrenergic Cortical Modulation

    Directory of Open Access Journals (Sweden)

    Jonathan J. Smith

    2009-01-01

    Full Text Available Dishabituation is a return of a habituated response if context or contingency changes. In the mammalian olfactory system, metabotropic glutamate receptor-mediated synaptic depression of cortical afferents underlies short-term habituation to odors. It was hypothesized that a known antagonistic interaction between these receptors and norepinephrine β-receptors provides a mechanism for dishabituation. The results demonstrate that a 108 dB siren induces a two-fold increase in norepinephrine content in the piriform cortex. The same auditory stimulus induces dishabituation of odor-evoked heart rate orienting bradycardia responses in awake rats. Finally, blockade of piriform cortical norepinephrine β-receptors with bilateral intracortical infusions of propranolol (100 μM) disrupts auditory-induced dishabituation of odor-evoked bradycardia responses. These results provide a cortical mechanism for a return of habituated sensory responses following a cross-modal alerting stimulus.

  12. Intensity of guitar playing as a function of auditory feedback.

    Science.gov (United States)

    Johnson, C I; Pick, H L; Garber, S R; Siegel, G M

    1978-06-01

    Subjects played an electric guitar while auditory feedback was attenuated or amplified at seven sidetone levels varying in 10-dB steps around a comfortable listening level. The sidetone signal was presented in quiet (experiment I) and in several levels of white noise (experiment II). Subjects compensated for feedback changes, demonstrating a sidetone-amplification effect as well as a Lombard effect. The similarity of these results to those found previously for speech suggests that guitar playing can be a useful analog for the function of auditory feedback in speech production. Unlike previous findings for speech, the sidetone-amplification effect was not potentiated by masking, consistent with a hypothesis that potentiation in speech is attributable to interference with bone conduction caused by the masking noise.

  13. Evoked response audiometry used in testing auditory organs of miners

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, T.; Klepacki, J.; Wagstyl, R.

    1980-01-01

    The evoked response audiometry method of testing hearing loss is presented and the results of comparative studies using subjective tonal audiometry and evoked response audiometry in tests of 56 healthy men with good hearing are discussed. The men were divided into three groups according to age and place of work: workplace without increased noise; workplace with noise and vibrations (at drilling machines); workplace with noise and shocks (work at excavators in surface coal mines). The ERA-MKII audiometer produced by the Medelec-Amplaid firm was used. Audiometric threshold curves for the three groups of tested men are given. At frequencies of 500, 1000 and 4000 Hz the mean objective auditory threshold was shifted by 4-9.5 dB in comparison to the subjective auditory threshold. (21 refs.) (In Polish)

  14. Peripheral auditory processing and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf

    One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS). Overall, this work provides insights into factors affecting auditory processing in listeners with impaired hearing and may have implications for future models of impaired auditory signal processing as well as advanced compensation strategies.

  15. Rodent Auditory Perception: Critical Band Limitations and Plasticity

    Science.gov (United States)

    King, Julia; Insanally, Michele; Jin, Menghan; Martins, Ana Raquel O.; D'amour, James A.; Froemke, Robert C.

    2015-01-01

    What do animals hear? While it remains challenging to adequately assess sensory perception in animal models, it is important to determine perceptual abilities in model systems to understand how physiological processes and plasticity relate to perception, learning, and cognition. Here we discuss hearing in rodents, reviewing previous and recent behavioral experiments querying acoustic perception in rats and mice, and examining the relation between behavioral data and electrophysiological recordings from the central auditory system. We focus on measurements of critical bands, which are psychoacoustic phenomena that seem to have a neural basis in the functional organization of the cochlea and the inferior colliculus. We then discuss how behavioral training, brain stimulation, and neuropathology impact auditory processing and perception. PMID:25827498

  16. Auditory stream segregation in children with Asperger syndrome.

    Science.gov (United States)

    Lepistö, T; Kuitunen, A; Sussman, E; Saalasti, S; Jansson-Verkasalo, E; Nieminen-von Wendt, T; Kujala, T

    2009-12-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception.

  17. Synaptic plasticity in inhibitory neurons of the auditory brainstem.

    Science.gov (United States)

    Bender, Kevin J; Trussell, Laurence O

    2011-04-01

    There is a growing appreciation of synaptic plasticity in the early levels of auditory processing, and particularly of its role in inhibitory circuits. Synaptic strength in auditory brainstem and midbrain is sensitive to standard protocols for induction of long-term depression, potentiation, and spike-timing-dependent plasticity. Differential forms of plasticity are operative at synapses onto inhibitory versus excitatory neurons within a circuit, and together these could serve to tune circuits involved in sound localization or multisensory integration. Such activity-dependent control of synaptic function in inhibitory neurons may also be expressed after hearing loss and could underlie persistent neuronal activity in patients with tinnitus. This article is part of a Special Issue entitled 'Synaptic Plasticity & Interneurons'.

  18. Classification of Underwater Target Echoes Based on Auditory Perception Characteristics

    Institute of Scientific and Technical Information of China (English)

    Xiukun Li; Xiangxia Meng; Hang Liu; Mingye Liu

    2014-01-01

    In underwater target detection, bottom reverberation shares some properties with the target echo, which has a great impact on detection performance. It is therefore essential to study the differences between the target echo and reverberation. In this paper, motivated by the unique ability of human listening to distinguish objects, the Gammatone filter is taken as the auditory model. In addition, time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. The features extracted from the experimental data cluster tightly within the same class and differ substantially between classes, which shows that this method can effectively distinguish between the target echo and reverberation.
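
    A sketch of the separation step is given below, assuming each sonar segment has already been reduced to auditory-model features (for example, gammatone subband energies from a front end like the one sketched earlier in this list). The support vector machine is one plausible classifier choice, since the abstract does not name one, and the data here are synthetic placeholders.

        # Hedged sketch: echo-vs-reverberation classification from auditory features.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n, n_feat = 200, 33                  # e.g., 32 subband energies + a centroid
        echoes = rng.normal(loc=1.0, scale=0.5, size=(n, n_feat))   # placeholder
        reverb = rng.normal(loc=0.0, scale=0.5, size=(n, n_feat))   # placeholder
        X = np.vstack([echoes, reverb])
        y = np.array([1] * n + [0] * n)      # 1 = target echo, 0 = bottom reverberation

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))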

  19. Auditory lateralization of conspecific and heterospecific vocalizations in cats.

    Science.gov (United States)

    Siniscalchi, Marcello; Laddago, Serena; Quaranta, Angelo

    2016-01-01

    Auditory lateralization in response to both conspecific and heterospecific (dog) vocalizations was observed in 16 tabby cats (Felis catus). Six different vocalizations were used: cat "purring," "meowing" and "growling," and typical dog vocalizations of "disturbance," "isolation" and "play." The head-orienting paradigm showed that cats turned their head with the right ear leading (left hemisphere activation) in response to their own species-typical vocalizations ("meow" and "purring"); on the other hand, a clear bias in the use of the left ear (right hemisphere activation) was observed in response to vocalizations eliciting intense emotion (dog vocalizations of "disturbance" and "isolation"). Overall, these findings suggest that the auditory sensory domain is lateralized in cats as well, stressing the role of the left hemisphere for intraspecific communication and of the right hemisphere in processing threatening and alarming stimuli.

  20. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    Directory of Open Access Journals (Sweden)

    Francisco Jose Alvarez

    Full Text Available Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study is to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brainstem responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1-3 day-old piglets by clamping both carotid arteries for 30 minutes with vascular occluders and lowering the fraction of inspired oxygen. We compared the auditory brainstem responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h following the HI injury. Auditory brainstem responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  1. Automatic hearing loss detection system based on auditory brainstem response

    Energy Technology Data Exchange (ETDEWEB)

    Aldonate, J; Mercuri, C; Reta, J; Biurrun, J; Bonell, C; Gentiletti, G; Escobar, S; Acevedo, R [Laboratorio de Ingenieria en Rehabilitacion e Investigaciones Neuromusculares y Sensoriales (Argentina); Facultad de Ingenieria, Universidad Nacional de Entre Rios, Ruta 11 - Km 10, Oro Verde, Entre Rios (Argentina)

    2007-11-15

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns are presented. Preliminary evaluation in adults was satisfactory.

  2. Measuring the performance of visual to auditory information conversion.

    Directory of Open Access Journals (Sweden)

    Shern Shiou Tan

    Full Text Available BACKGROUND: Visual to auditory conversion systems have been in existence for several decades. Besides being among the front runners in providing visual capabilities to blind users, the auditory cues generated by image sonification systems are still easier to learn and adapt to compared to other similar techniques. Other advantages include low cost, easy customizability, and universality. However, every system developed so far has its own set of strengths and weaknesses. In order to improve these systems further, we propose an automated and quantitative method to measure the performance of such systems. With these quantitative measurements, it is possible to gauge the relative strengths and weaknesses of different systems and rank the systems accordingly. METHODOLOGY: Performance is measured by both the interpretability and the information preservation of visual to auditory conversions. Interpretability is measured by computing the correlation of inter-image distance (IID) and inter-sound distance (ISD), whereas information preservation is computed by applying information theory to measure the entropy of both the visual and the corresponding auditory signals. These measurements provide a basis and some insights on how the systems work. CONCLUSIONS: With an automated interpretability measure as a standard, more image sonification systems can be developed, compared, and then improved. Even though the measure does not test systems as thoroughly as carefully designed psychological experiments, a quantitative measurement like the one proposed here can compare systems to a certain degree without incurring much cost. Underlying this research is the hope that a major breakthrough in image sonification systems will allow blind users to cost-effectively regain enough visual function to allow them to lead secure and productive lives.
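
    The two proposed measures can be illustrated with a short sketch: interpretability as the correlation between pairwise inter-image distances (IID) and inter-sound distances (ISD), and information preservation as histogram entropy of the image and sonified signals. The Euclidean metric, the bin count, and the random placeholder data are assumptions, not choices taken from the paper.

        # Hedged sketch of the IID/ISD correlation and entropy measures.
        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import pearsonr, entropy

        rng = np.random.default_rng(2)
        images = rng.random((20, 32 * 32))       # 20 flattened grayscale images
        sounds = rng.random((20, 8000))          # their sonified audio waveforms

        iid = pdist(images, metric="euclidean")  # all pairwise inter-image distances
        isd = pdist(sounds, metric="euclidean")  # matching inter-sound distances
        r, p = pearsonr(iid, isd)                # interpretability score
        print(f"IID/ISD correlation r={r:.2f} (p={p:.3f})")

        def hist_entropy(signal, bins=64):
            counts, _ = np.histogram(signal, bins=bins)
            return entropy(counts / counts.sum(), base=2)   # bits

        h_img = np.mean([hist_entropy(im) for im in images])
        h_snd = np.mean([hist_entropy(s) for s in sounds])
        print(f"mean image entropy {h_img:.2f} bits, mean sound entropy {h_snd:.2f} bits")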

  3. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  4. Infant Auditory Processing and Event-related Brain Oscillations

    Science.gov (United States)

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition; allowing infants to hone in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions are fundamental to sensory development; determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI) were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net

  5. Sex differences in brain structure in auditory and cingulate regions

    OpenAIRE

    Brun, Caroline C.; Lepore, Natasha; Luders, Eileen; Chou, Yi-Yu; Madsen, Sarah K.; Toga, Arthur W; Thompson, Paul M.

    2009-01-01

    We applied a new method to visualize the three-dimensional profile of sex differences in brain structure based on MRI scans of 100 young adults. We compared 50 men with 50 women, matched for age and other relevant demographics. As predicted, left hemisphere auditory and language-related regions were proportionally expanded in women versus men, suggesting a possible structural basis for the widely replicated sex differences in language processing. In men, primary visual, and visuo-spatial asso...

  6. Transcranial direct current stimulation as a treatment for auditory hallucinations

    OpenAIRE

    Sanne Koops; Hilde van den Brink; Sommer, Iris E C

    2015-01-01

    Auditory hallucinations (AH) are a symptom of several psychiatric disorders, such as schizophrenia. In a significant minority of patients, AH are resistant to antipsychotic medication. Alternative treatment options for this medication-resistant group are scarce and most of them focus on coping with the hallucinations. Finding an alternative treatment that can diminish AH is of great importance. Transcranial direct current stimulation (tDCS) is a safe and non-invasive technique that is able to...

  7. The structure and function of auditory chordotonal organs in insects.

    Science.gov (United States)

    Yack, Jayne E

    2004-04-15

    Insects are capable of detecting a broad range of acoustic signals transmitted through air, water, or solids. Auditory sensory organs are morphologically diverse with respect to their body location, accessory structures, and number of sensilla, but remarkably uniform in that most are innervated by chordotonal organs. Chordotonal organs are structurally complex Type I mechanoreceptors that are distributed throughout the insect body and function to detect a wide range of mechanical stimuli, from gross motor movements to air-borne sounds. At present, little is known about how chordotonal organs in general function to convert mechanical stimuli to nerve impulses, and our limited understanding of this process represents one of the major challenges to the study of insect auditory systems today. This report reviews the literature on chordotonal organs innervating insect ears, with the broad intention of uncovering some common structural specializations of peripheral auditory systems, and identifying new avenues for research. A general overview of chordotonal organ ultrastructure is presented, followed by a summary of the current theories on mechanical coupling and transduction in monodynal, mononematic, Type 1 scolopidia, which characteristically innervate insect ears. Auditory organs of different insect taxa are reviewed, focusing primarily on tympanal organs, and with some consideration to Johnston's and subgenual organs. It is widely accepted that insect hearing organs evolved from pre-existing proprioceptive chordotonal organs. In addition to certain non-neural adaptations for hearing, such as tracheal expansion and cuticular thinning, the chordotonal organs themselves may have intrinsic specializations for sound reception and transduction, and these are discussed. In the future, an integrated approach, using traditional anatomical and physiological techniques in combination with new methodologies in immunohistochemistry, genetics, and biophysics, will assist in

  8. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Directory of Open Access Journals (Sweden)

    Vivien Marmelat

    Full Text Available Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.
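
    A minimal sketch of the kind of fractal cueing sequence discussed above (illustrative parameters, not the authors' stimuli): inter-beat intervals with a chosen scaling exponent are synthesized from a 1/f-shaped spectrum, and the exponent is checked with a basic detrended fluctuation analysis (DFA).

      import numpy as np

      def fractal_intervals(n, alpha, mean=1.0, cv=0.03, seed=0):
          # spectral synthesis of a series whose DFA exponent is approximately `alpha`
          rng = np.random.default_rng(seed)
          beta = 2 * alpha - 1                                  # spectral slope for fractional Gaussian noise
          freqs = np.fft.rfftfreq(n, d=1.0)
          amp = np.zeros_like(freqs)
          amp[1:] = freqs[1:] ** (-beta / 2.0)
          spectrum = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs)))
          x = np.fft.irfft(spectrum, n)
          x = (x - x.mean()) / x.std()
          return mean + cv * mean * x                           # e.g. 1 s beats with 3% variability

      def dfa(x, scales=(8, 16, 32, 64, 128)):
          # detrended fluctuation analysis: slope of log(fluctuation) vs log(scale)
          y = np.cumsum(x - np.mean(x))
          flucts = []
          for s in scales:
              segs = y[:(len(y) // s) * s].reshape(-1, s)
              t = np.arange(s)
              res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
              flucts.append(np.sqrt(np.mean(np.square(res))))
          return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

      beats = fractal_intervals(1024, alpha=0.9)                # persistent, 1/f-like metronome
      print(round(dfa(beats), 2))                               # typically close to the target exponent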

  9. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Science.gov (United States)

    Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.

  10. Sparse representation of sounds in the unanesthetized auditory cortex.

    Directory of Open Access Journals (Sweden)

    Tomás Hromádka

    2008-01-01

    Full Text Available How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
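
    A minimal sketch of the population statistics described above, using simulated lognormal firing rates rather than the recorded data; it only illustrates how the fraction of neurons above 20 spikes/s at any instant, and a lognormal description of the population, can be computed.

      import numpy as np

      rng = np.random.default_rng(1)
      # stand-in population: firing rates (spikes/s) for 200 neurons in 400 time bins
      rates = rng.lognormal(mean=0.0, sigma=1.2, size=(200, 400))

      # per time bin: share of neurons firing above 20 spikes/s
      active_fraction = (rates > 20.0).mean(axis=0)
      print("median active fraction: %.3f" % np.median(active_fraction))

      # lognormal check: log-rates should look roughly Gaussian
      log_rates = np.log(rates[rates > 0.01])
      print("lognormal fit: mu=%.2f sigma=%.2f" % (log_rates.mean(), log_rates.std()))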

  11. Auditory Reality and Self-Assessment of Hearing

    OpenAIRE

    Noble, William

    2008-01-01

    Analyses are made of three problem areas in the realm of hearing disorder and its management, all of which are cogently informed by self-assessment: (a) prosthetic technology and the auditory ecology, (b) dimensions of benefit from amplification, and (c) dimensions of disability. Technology and ecology addresses the matter of “fitness for purpose” of different prosthetic schemes, moderated by people's hearing and listening environments (ecologies) and by what they bring to the task of hearing...

  12. Auditory scene analysis: The sweet music of ambiguity

    Directory of Open Access Journals (Sweden)

    Daniel Pressnitzer

    2011-12-01

    Full Text Available In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis, or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, auditory scene analysis uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and it must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather to express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neuro-physiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of auditory scene analysis and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music.

  13. Biological impact of music and software-based auditory training

    OpenAIRE

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based ...

  14. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
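
    A minimal sketch of the cross-correlation index mentioned above (an illustrative variant, not necessarily the exact index used in the study): a tapper who predicts tempo changes correlates more strongly with the current pacing interval (lag 0) than with the previous one (lag 1).

      import numpy as np

      def lagged_corr(tap_ioi, pace_ioi, lag):
          # correlation between inter-tap intervals and pacing inter-onset intervals at a given lag
          if lag > 0:
              return np.corrcoef(tap_ioi[lag:], pace_ioi[:-lag])[0, 1]
          return np.corrcoef(tap_ioi, pace_ioi)[0, 1]

      def prediction_index(tap_ioi, pace_ioi):
          # ratio of lag-0 (prediction) to lag-1 (tracking) correlation
          c0 = lagged_corr(tap_ioi, pace_ioi, 0)
          c1 = lagged_corr(tap_ioi, pace_ioi, 1)
          return c0 / c1 if c1 != 0 else np.inf

      pace = np.linspace(0.60, 0.45, 60)                   # pacing sequence with a gradual tempo change (s)
      rng = np.random.default_rng(2)
      taps = pace + rng.normal(0, 0.01, 60)                # simulated taps that follow the current interval
      print(round(prediction_index(taps, pace), 2))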

  15. Measuring the dynamics of neural responses in primary auditory cortex

    CERN Document Server

    Depireux, Didier A.; Simon, Jonathan Z.; Shamma, Shihab A.

    1998-01-01

    We review recent developments in the measurement of the dynamics of the response properties of auditory cortical neurons to broadband sounds, which is closely related to the perception of timbre. The emphasis is on a method that characterizes the spectro-temporal properties of single neurons to dynamic, broadband sounds, akin to the drifting gratings used in vision. The method treats the spectral and temporal aspects of the response on an equal footing.
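
    A minimal sketch of the reverse-correlation logic behind such measurements, under the assumption (mine, for illustration) that the spectro-temporal receptive field is approximated by a spike-triggered average of a broadband, ripple-like stimulus spectrogram; shapes and rates are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      n_freq, n_time, lag = 32, 5000, 20                    # frequency channels, stimulus bins, STRF depth
      spec = rng.standard_normal((n_freq, n_time))          # stand-in for a dynamic ripple spectrogram
      spikes = rng.random(n_time) < 0.02                    # stand-in spike train, one value per stimulus bin

      strf = np.zeros((n_freq, lag))
      for t in np.flatnonzero(spikes):
          if t >= lag:
              strf += spec[:, t - lag:t]                    # accumulate the stimulus preceding each spike
      strf /= max(spikes.sum(), 1)
      print(strf.shape)                                     # (32, 20): frequency x time-before-spike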

  16. Brainstem auditory-evoked potentials in two meditative mental states

    Directory of Open Access Journals (Sweden)

    Kumar Sanjay

    2010-01-01

    Full Text Available Context: Practicing mental repetition of "OM" has been shown to cause significant changes in the middle latency auditory-evoked potentials, which suggests that it facilitates the neural activity at the mesencephalic or diencephalic levels. Aims: The aim of the study was to examine the brainstem auditory-evoked potentials (BAEP) in two meditation states based on consciousness, viz. dharana and dhyana. Materials and Methods: Thirty subjects were selected, with ages ranging from 20 to 55 years (M=29.1; SD=6.5 years), who had a minimum of 6 months of experience in meditating on "OM". Each subject was assessed in four sessions, i.e. two meditation and two control sessions. The two control sessions were: (i) ekagrata, i.e. single-topic lecture on meditation, and (ii) cancalata, i.e. non-targeted thinking. The two meditation sessions were: (i) dharana, i.e. focusing on the symbol "OM", and (ii) dhyana, i.e. effortless single-thought state "OM". All four sessions were recorded on four different days and each consisted of three states, i.e. pre, during and post. Results: The present results showed that the wave V peak latency significantly increased in cancalata, ekagrata and dharana, but no change occurred during the dhyana session. Conclusions: These results suggest that information transmission along the auditory pathway is delayed during cancalata, ekagrata and dharana, but there is no change during dhyana. It may be said that auditory information transmission was delayed at the inferior collicular level, as wave V corresponds to the tectum.

  17. Brain stem auditory evoked responses in chronic alcoholics.

    OpenAIRE

    Chan, Y W; McLeod, J G; Tuck, R R; Feary, P A

    1985-01-01

    Brain stem auditory evoked responses (BAERs) were performed on 25 alcoholic patients with Wernicke-Korsakoff syndrome, 56 alcoholic patients without Wernicke-Korsakoff syndrome, 24 of whom had cerebellar ataxia, and 37 control subjects. Abnormal BAERs were found in 48% of patients with Wernicke-Korsakoff syndrome, in 25% of alcoholic patients without Wernicke-Korsakoff syndrome but with cerebellar ataxia, and in 13% of alcoholic patients without Wernicke-Korsakoff syndrome or ataxia. The mean...

  18. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc.

  19. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.
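
    A minimal sketch of the temporal-integration idea, under my own simplifying assumption that "memory" can be approximated by leaky (exponential) integration of a framewise dissonance value; the per-frame dissonance estimator itself is left abstract.

      import numpy as np

      def memory_integrated_dissonance(framewise_dissonance, frame_rate, tau=3.0):
          # leaky integration: recent dissonant events linger in short-term memory, then fade
          decay = np.exp(-1.0 / (tau * frame_rate))          # tau = memory time constant in seconds
          out = np.zeros(len(framewise_dissonance))
          acc = 0.0
          for i, d in enumerate(framewise_dissonance):
              acc = decay * acc + (1.0 - decay) * d
              out[i] = acc
          return out

      d = np.zeros(100)
      d[40:60] = 1.0                                         # a dissonant passage inside a consonant context
      print(memory_integrated_dissonance(d, frame_rate=10.0)[58:64].round(2))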

  20. EEG signatures accompanying auditory figure-ground segregation.

    Science.gov (United States)

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P; Szerafin, Ágnes; Shinn-Cunningham, Barbara G; Winkler, István

    2016-11-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object.
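
    A minimal sketch of the stimulus logic described above (parameters are illustrative, not the study's): a "figure" is created by repeating the same small set of pure-tone components across successive chords, embedded among tones drawn at random on every chord.

      import numpy as np

      def figure_ground_stimulus(n_chords=20, chord_dur=0.05, fs=16000,
                                 coherence=4, figure_len=8, seed=0):
          rng = np.random.default_rng(seed)
          pool = np.geomspace(200.0, 7000.0, 60)            # candidate tone frequencies (Hz)
          figure = rng.choice(pool, coherence, replace=False)
          t = np.arange(int(chord_dur * fs)) / fs
          start = (n_chords - figure_len) // 2              # figure occupies the middle chords
          chords = []
          for i in range(n_chords):
              freqs = list(rng.choice(pool, 10, replace=False))   # random background tones
              if start <= i < start + figure_len:
                  freqs += list(figure)                     # repeated components form the figure
              chord = sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)
              chords.append(chord)
          return np.concatenate(chords)

      print(figure_ground_stimulus().shape)                 # one stochastic figure-ground stimulus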

  1. Automatic hearing loss detection system based on auditory brainstem response

    Science.gov (United States)

    Aldonate, J.; Mercuri, C.; Reta, J.; Biurrun, J.; Bonell, C.; Gentiletti, G.; Escobar, S.; Acevedo, R.

    2007-11-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  2. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    Lyon RF

    2007-01-01

    Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlea implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review on the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlea transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems which render them as excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and which can also serve as "design curves" for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a "missing link" between physiological, electrical, and mechanical models for auditory filtering.
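
    A minimal sketch of the cascade idea highlighted above (my own illustrative digital approximation, not the paper's analog design): a gammatone-like response built by passing the signal through N identical two-pole resonators.

      import numpy as np
      from scipy import signal

      def two_pole(fc, q, fs):
          # digital two-pole resonator: pole radius set from the bandwidth fc/q
          w0 = 2.0 * np.pi * fc / fs
          r = np.exp(-w0 / (2.0 * q))
          b = [1.0 - r]                                     # crude gain scaling; exact normalisation omitted
          a = [1.0, -2.0 * r * np.cos(w0), r * r]
          return b, a

      def cascade_gammatone(x, fc=1000.0, q=9.26, n_stages=4, fs=16000):
          y = np.asarray(x, dtype=float)
          b, a = two_pole(fc, q, fs)
          for _ in range(n_stages):                         # N identical stages -> gammatone-like shape
              y = signal.lfilter(b, a, y)
          return y

      impulse = np.zeros(512)
      impulse[0] = 1.0
      ir = cascade_gammatone(impulse)
      print(np.argmax(np.abs(ir)))                          # envelope peaks after onset, as for a gammatone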

  3. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    R. F. Lyon

    2007-12-01

    Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlea implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review on the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlea transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems which render them as excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and which can also serve as “design curves” for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a “missing link” between physiological, electrical, and mechanical models for auditory filtering.

  4. Music training alters the course of adolescent auditory development.

    Science.gov (United States)

    Tierney, Adam T; Krizman, Jennifer; Kraus, Nina

    2015-08-11

    Fundamental changes in brain structure and function during adolescence are well-characterized, but the extent to which experience modulates adolescent neurodevelopment is not. Musical experience provides an ideal case for examining this question because the influence of music training begun early in life is well-known. We investigated the effects of in-school music training, previously shown to enhance auditory skills, versus another in-school training program that did not focus on development of auditory skills (active control). We tested adolescents on neural responses to sound and language skills before they entered high school (pretraining) and again 3 y later. Here, we show that in-school music training begun in high school prolongs the stability of subcortical sound processing and accelerates maturation of cortical auditory responses. Although phonological processing improved in both the music training and active control groups, the enhancement was greater in adolescents who underwent music training. Thus, music training initiated as late as adolescence can enhance neural processing of sound and confer benefits for language skills. These results establish the potential for experience-driven brain plasticity during adolescence and demonstrate that in-school programs can engender these changes.

  5. Cochlear injury and adaptive plasticity of the auditory cortex

    Directory of Open Access Journals (Sweden)

    Anna R. Fetoni

    2015-02-01

    Full Text Available Growing evidence suggests that cochlear stressors such as noise exposure and aging can induce homeostatic/maladaptive changes in the central auditory system from the brainstem to the cortex. Studies centered on such changes have revealed several mechanisms that operate in the context of sensory disruption after insult (noise trauma, drug- or age-related injury). Oxidative stress is central to current theories of induced sensory neural hearing loss and aging, and interventions to attenuate the hearing loss are based on antioxidant agents. The present review addresses the recent literature on the alterations in hair cells and spiral ganglion neurons due to noise-induced oxidative stress in the cochlea, as well as on the impact of cochlear damage on the auditory cortex neurons. The emerging picture emphasizes that noise-induced deafferentation and upward spread of cochlear damage are associated with the altered dendritic architecture of auditory pyramidal neurons. The cortical modifications may be reversed by treatment with antioxidants counteracting the cochlear redox imbalance. These findings open new therapeutic approaches to treat the functional consequences of the cortical reorganization following cochlear damage.

  6. Feedback delays eliminate auditory-motor learning in speech production.

    Science.gov (United States)

    Max, Ludo; Maffett, Derek G

    2015-03-30

    Neurologically healthy individuals use sensory feedback to alter future movements by updating internal models of the effector system and environment. For example, when visual feedback about limb movements or auditory feedback about speech movements is experimentally perturbed, the planning of subsequent movements is adjusted - i.e., sensorimotor adaptation occurs. A separate line of studies has demonstrated that experimentally delaying the sensory consequences of limb movements causes the sensory input to be attributed to external sources rather than to one's own actions. Yet similar feedback delays have remarkably little effect on visuo-motor adaptation (although the rate of learning varies, the amount of adaptation is only moderately affected with delays of 100-200ms, and adaptation still occurs even with a delay as long as 5000ms). Thus, limb motor learning remains largely intact even in conditions where error assignment favors external factors. Here, we show a fundamentally different result for sensorimotor control of speech articulation: auditory-motor adaptation to formant-shifted feedback is completely eliminated with delays of 100ms or more. Thus, for speech motor learning, real-time auditory feedback is critical. This novel finding informs theoretical models of human motor control in general and speech motor control in particular, and it has direct implications for the application of motor learning principles in the habilitation and rehabilitation of individuals with various sensorimotor speech disorders.

  7. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    Science.gov (United States)

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.

  8. A comparison of auditory brainstem responses across diving bird species

    Science.gov (United States)

    Crowell, Sara E.; Berlin, Alicia; Carr, Catherine E; Olsen, Glenn H.; Therrien, Ronald E; Yannuzzi, Sally E; Ketten, Darlene R

    2015-01-01

    There is little biological data available for diving birds because many live in hard-to-study, remote habitats. Only one species of diving bird, the black-footed penguin (Spheniscus demersus), has been studied with respect to auditory capabilities (Wever et al., Proc Natl Acad Sci USA 63:676–680, 1969). We, therefore, measured in-air auditory thresholds in ten species of diving birds, using the auditory brainstem response (ABR). The average audiogram obtained for each species followed the U-shape typical of birds and many other animals. All species tested shared a common region of greatest sensitivity, from 1000 to 3000 Hz, although audiograms differed significantly across species. Thresholds of all duck species tested were more similar to each other than to the two non-duck species tested. The red-throated loon (Gavia stellata) and northern gannet (Morus bassanus) exhibited the highest thresholds, while the lowest thresholds belonged to the duck species, specifically the lesser scaup (Aythya affinis) and ruddy duck (Oxyura jamaicensis). Vocalization parameters were also measured for each species, and showed that, with the exception of the common eider (Somateria mollissima), the peak frequency, i.e., the frequency at the greatest intensity, of all species' vocalizations measured here fell between 1000 and 3000 Hz, matching the bandwidth of the most sensitive hearing range.

  9. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

  10. Hand proximity facilitates spatial discrimination of auditory tones

    Directory of Open Access Journals (Sweden)

    Philip Tseng

    2014-06-01

    Full Text Available The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination (Exp 1: left or right side), pitch discrimination (Exp 2: high, med, or low tone), and spatial-plus-pitch (Exp 3: left or right; high, med, or low) discrimination task. In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand’s reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.

  11. Stability of Auditory Discrimination and Novelty Processing in Physiological Aging

    Directory of Open Access Journals (Sweden)

    Alberto Raggi

    2013-01-01

    Full Text Available Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, Mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for the establishment of normative data for the detection of subtle preclinical changes due to abnormal brain aging.
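
    A minimal sketch of how an MMN-like difference wave is conventionally obtained (simulated epochs, illustrative dimensions; not the study's data): average the deviant epochs, average the standard epochs, and subtract.

      import numpy as np

      def mismatch_negativity(standard_epochs, deviant_epochs):
          # epochs: (n_trials, n_samples) at one electrode; returns the deviant-minus-standard wave
          return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

      rng = np.random.default_rng(4)
      std = rng.normal(0.0, 5.0, (400, 300))               # 400 standards, 600 ms at 500 Hz
      dev = rng.normal(0.0, 5.0, (80, 300))                # 80 deviants
      dev[:, 100:150] -= 2.0                               # simulated negativity ~200-300 ms post-stimulus
      mmn = mismatch_negativity(std, dev)
      print(mmn[100:150].mean() < 0)                       # True for this simulated deviance response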

  12. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, Mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for the establishment of normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  13. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  14. Neural effects of cognitive control load on auditory selective attention.

    Science.gov (United States)

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali

    2014-08-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention.

  15. Attentional modulation of auditory steady-state responses.

    Directory of Open Access Journals (Sweden)

    Yatin Mahajan

    Full Text Available Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of the ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of the contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of the inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
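
    A minimal sketch of the frequency-tagging readout described above (simulated EEG, illustrative parameters): the ASSR to a stream amplitude-modulated at f_mod shows up as spectral power at exactly f_mod in the epoch-averaged response.

      import numpy as np

      def assr_power(eeg, fs, f_mod):
          # power at the tagged modulation frequency, computed from the mean across epochs
          spec = np.fft.rfft(eeg.mean(axis=0))
          freqs = np.fft.rfftfreq(eeg.shape[1], 1.0 / fs)
          return np.abs(spec[np.argmin(np.abs(freqs - f_mod))]) ** 2

      fs, dur, n_epochs = 256, 2.0, 60
      t = np.arange(int(fs * dur)) / fs
      rng = np.random.default_rng(5)
      eeg = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 5, (n_epochs, len(t)))   # weak 40 Hz ASSR in noise
      print(assr_power(eeg, fs, 40) > assr_power(eeg, fs, 23.5))                      # tagged frequency stands out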

  16. Acoustic trauma-induced auditory cortex enhancement and tinnitus

    Institute of Scientific and Technical Information of China (English)

    Erin Laundrie; Wei Sun

    2014-01-01

    Growing evidence suggests that noise-induced cochlear damage may lead to hyperexcitability in the central auditory system (CAS), which may give rise to tinnitus. However, the correlation between the onset of the neurophysiological changes in the CAS and the onset of tinnitus has not been well studied. To investigate this relationship, chronic electrodes were implanted into the auditory cortex (AC) and sound-evoked activity was measured from awake rats before and after noise exposure. The auditory brainstem response (ABR) was used to assess the degree of noise-induced hearing loss. Tinnitus was evaluated by measuring gap-induced prepulse inhibition (gap-PPI). Rats were exposed monaurally to a high-intensity narrowband noise centered at 12 kHz at a level of 120 dB SPL for 1 h. After the noise exposure, all the rats developed either permanent (>2 weeks) or temporary (<3 days) hearing loss in the exposed ear(s). The AC amplitudes increased significantly 4 h after the noise exposure. Most of the exposed rats also showed decreased gap-PPI. The post-exposure AC enhancement showed a positive correlation with the amount of hearing loss. The onset of tinnitus-like behavior occurred after the onset of AC enhancement.

  17. The effects of auditory contrast tuning upon speech intelligibility

    Directory of Open Access Journals (Sweden)

    Nathaniel J Killian

    2016-08-01

    Full Text Available We have previously identified neurons tuned to spectral contrast of wideband sounds in auditory cortex of awake marmoset monkeys. Because additive noise alters the spectral contrast of speech, contrast-tuned neurons, if present in human auditory cortex, may aid in extracting speech from noise. Given that this cortical function may be underdeveloped in individuals with sensorineural hearing loss, incorporating biologically-inspired algorithms into external signal processing devices could provide speech enhancement benefits to cochlear implantees. In this study we first constructed a computational signal processing algorithm to mimic auditory cortex contrast tuning. We then manipulated the shape of contrast channels and evaluated the intelligibility of reconstructed noisy speech using a metric to predict cochlear implant user perception. Candidate speech enhancement strategies were then tested in cochlear implantees with a hearing-in-noise test. Accentuation of intermediate contrast values or all contrast values improved computed intelligibility. Cochlear implant subjects showed significant improvement in noisy speech intelligibility with a contrast shaping procedure.
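
    A minimal sketch using my own simplified definitions (not the authors' algorithm): spectral contrast taken as the peak-to-valley range of a frame's log-magnitude spectrum, and "contrast shaping" implemented as a stretch of the log spectrum about its mean before resynthesis.

      import numpy as np

      def frame_contrast(frame, n_fft=512):
          # peak-to-valley range (dB) of the windowed log-magnitude spectrum
          mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))
          log_mag = 20.0 * np.log10(mag + 1e-9)
          return log_mag.max() - np.percentile(log_mag, 10)

      def shape_contrast(frame, gain=1.5, n_fft=512):
          # exaggerate deviations of the log spectrum from its frame mean, keep the phase
          spec = np.fft.rfft(frame, n_fft)
          log_mag = 20.0 * np.log10(np.abs(spec) + 1e-9)
          shaped = log_mag.mean() + gain * (log_mag - log_mag.mean())
          new_mag = 10.0 ** (shaped / 20.0)
          return np.fft.irfft(new_mag * np.exp(1j * np.angle(spec)), n_fft)[:len(frame)]

      rng = np.random.default_rng(6)
      frame = rng.normal(0.0, 1.0, 400)                    # stand-in for a 25 ms noisy-speech frame at 16 kHz
      print(round(frame_contrast(frame), 1), round(frame_contrast(shape_contrast(frame)), 1))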

  18. Stimulation of the human auditory nerve with optical radiation

    Science.gov (United States)

    Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter

    2009-02-01

    A novel, spatially selective method to stimulate cranial nerves has been proposed: contact-free stimulation with optical radiation. The radiation source is an infrared pulsed laser. This case report is the first to show that optical stimulation of the auditory nerve is possible in the human. The ethical approach to conducting any measurements or tests in humans requires efficacy and safety studies in animals, which have been conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum WHO I by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is possible in the human inner ear. This finding is an important step in translating results from animal experiments to humans and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.

  19. Development of auditory-vocal perceptual skills in songbirds.

    Directory of Open Access Journals (Sweden)

    Vanessa C Miller-Sims

    Full Text Available Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  20. Amelioration of Auditory Response by DA9801 in Diabetic Mouse

    Directory of Open Access Journals (Sweden)

    Yeong Ro Lee

    2015-01-01

    Full Text Available Diabetes mellitus (DM) is a metabolic disease that involves disorders such as diabetic retinopathy, diabetic neuropathy, and diabetic hearing loss. Recently, neurotrophin has become a treatment target and has been shown to be an attractive alternative for recovering auditory function altered by DM. The aim of this study was to evaluate the effect of DA9801, a mixture of Dioscorea nipponica and Dioscorea japonica extracts, on the auditory function damage produced in an STZ-induced diabetic model and to provide evidence of the mechanisms involved in these protective effects. We found a potential application of DA9801 for hearing impairment in the STZ-induced diabetic model, demonstrated by a reduction of the DM-induced deterioration in the ABR threshold in response to clicks and normalization of wave I–IV latencies and Pa latencies in the AMLR. We also show evidence that these effects might be elicited by induction of NGF, related to Nr3c1 and Akt. Therefore, this result suggests that the neuroprotective effects of DA9801 on the auditory damage produced by DM may be mediated by an NGF increase resulting from Nr3c1 via Akt transformation.

  1. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Full Text Available Hearing loss is one of the commonest congenital anomalies to affect children world-over. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads not only to deafness but also to language deprivation. However, hearing loss is the only truly remediable handicap, due to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integration of an external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-profound hearing loss, unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits, which include improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened, and these expanded indications for implantation are related to age, additional handicaps, residual hearing, and special etiologies of deafness. Combined electric and acoustic stimulation (EAS) / hybrid device is designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged

  2. Development of auditory-vocal perceptual skills in songbirds.

    Science.gov (United States)

    Miller-Sims, Vanessa C; Bottjer, Sarah W

    2012-01-01

    Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  3. Task engagement selectively modulates neural correlations in primary auditory cortex.

    Science.gov (United States)

    Downer, Joshua D; Niwa, Mamiko; Sutter, Mitchell L

    2015-05-13

    Noise correlations (r(noise)) between neurons can affect a neural population's discrimination capacity, even without changes in mean firing rates of neurons. r(noise), the degree to which the response variability of a pair of neurons is correlated, has been shown to change with attention with most reports showing a reduction in r(noise). However, the effect of reducing r(noise) on sensory discrimination depends on many factors, including the tuning similarity, or tuning correlation (r(tuning)), between the pair. Theoretically, reducing r(noise) should enhance sensory discrimination when the pair exhibits similar tuning, but should impair discrimination when tuning is dissimilar. We recorded from pairs of neurons in primary auditory cortex (A1) under two conditions: while rhesus macaque monkeys (Macaca mulatta) actively performed a threshold amplitude modulation (AM) detection task and while they sat passively awake. We report that, for pairs with similar AM tuning, average r(noise) in A1 decreases when the animal performs the AM detection task compared with when sitting passively. For pairs with dissimilar tuning, the average r(noise) did not significantly change between conditions. This suggests that attention-related modulation can target selective subcircuits to decorrelate noise. These results demonstrate that engagement in an auditory task enhances population coding in primary auditory cortex by selectively reducing deleterious r(noise) and leaving beneficial r(noise) intact.
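
    A minimal sketch of the two quantities at issue (simulated spike counts with invented shapes): r_tuning is the correlation of the two neurons' trial-averaged tuning curves, and r_noise is the correlation of their trial-to-trial residuals after each stimulus's mean response is removed.

      import numpy as np

      def tuning_and_noise_corr(resp_a, resp_b):
          # resp_*: (n_stimuli, n_trials) spike counts for two simultaneously recorded neurons
          r_tuning = np.corrcoef(resp_a.mean(axis=1), resp_b.mean(axis=1))[0, 1]
          za = resp_a - resp_a.mean(axis=1, keepdims=True)  # residuals around each stimulus mean
          zb = resp_b - resp_b.mean(axis=1, keepdims=True)
          r_noise = np.corrcoef(za.ravel(), zb.ravel())[0, 1]
          return r_tuning, r_noise

      rng = np.random.default_rng(7)
      tuning = np.linspace(5, 25, 8)                        # shared tuning across 8 modulation depths
      shared = rng.normal(0, 1, (8, 40)).clip(-1, 1)        # trial-by-trial fluctuations common to both cells
      a = rng.poisson(tuning[:, None] + 2 * shared + 3)
      b = rng.poisson(tuning[:, None] + 2 * shared + 3)
      print(tuning_and_noise_corr(a, b))                    # both values positive for this toy pair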

  4. Multiscale mapping of frequency sweep rate in mouse auditory cortex.

    Science.gov (United States)

    Issa, John B; Haeffele, Benjamin D; Young, Eric D; Yue, David T

    2017-02-01

    Functional organization is a key feature of the neocortex that often guides studies of sensory processing, development, and plasticity. Tonotopy, which arises from the transduction properties of the cochlea, is the most widely studied organizational feature in auditory cortex; however, in order to process complex sounds, cortical regions are likely specialized for higher order features. Here, motivated by the prevalence of frequency modulations in mouse ultrasonic vocalizations and aided by the use of a multiscale imaging approach, we uncover a functional organization across the extent of auditory cortex for the rate of frequency modulated (FM) sweeps. In particular, using two-photon Ca(2+) imaging of layer 2/3 neurons, we identify a tone-insensitive region at the border of AI and AAF. This central sweep region behaves fundamentally differently from nearby neurons in AI and AII, responding preferentially to fast FM sweeps but not to tones or bandlimited noise. Together these findings define a second dimension of organization in the mouse auditory cortex for sweep rate complementary to that of tone frequency.
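
    "Sweep rate" here refers to how quickly the instantaneous frequency of an FM sound moves, often expressed in octaves per second. Purely as an illustration of how such stimuli are parameterized (the frequencies, durations and rates below are arbitrary and not the study's stimuli), a short SciPy sketch:

      # Illustrative generation of FM sweeps differing in rate (octaves per second).
      # Frequencies and durations are arbitrary, not the study's stimuli.
      import numpy as np
      from scipy.signal import chirp

      def fm_sweep(fs, f_start, octaves, rate_oct_per_s):
          dur = octaves / rate_oct_per_s
          t = np.arange(0, dur, 1 / fs)
          return chirp(t, f0=f_start, t1=dur, f1=f_start * 2**octaves, method="logarithmic")

      fs = 192000                                         # high rate for ultrasonic frequencies
      slow = fm_sweep(fs, 40000, 1, rate_oct_per_s=10)    # 1 octave in 100 ms
      fast = fm_sweep(fs, 40000, 1, rate_oct_per_s=100)   # 1 octave in 10 ms
      print(len(slow), len(fast))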

  5. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

    Full Text Available Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated as the difference between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulation. Simultaneous presentation of visual and acoustic stimuli resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.
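
    The interaction measure defined in this abstract (the combined response minus the sum of the unimodal responses) is simple enough to write out directly. In the sketch below the response traces are hypothetical Gaussian-shaped time courses, used only to show the computation.

      # Interaction measure described above: response to combined AV stimulation
      # minus the sum of the unimodal responses. Traces here are hypothetical.
      import numpy as np

      def av_interaction(r_av, r_v, r_a):
          """Positive values: supra-additive interaction; negative: sub-additive."""
          return r_av - (r_v + r_a)

      t = np.linspace(0, 0.3, 300)                     # 300 ms, arbitrary sampling
      r_v = 0.8 * np.exp(-((t - 0.10) / 0.03) ** 2)    # made-up visual response
      r_a = 0.2 * np.exp(-((t - 0.08) / 0.03) ** 2)    # made-up auditory response
      r_av = 1.2 * np.exp(-((t - 0.09) / 0.03) ** 2)   # made-up combined response
      print(f"peak interaction: {av_interaction(r_av, r_v, r_a).max():.3f}")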

  6. Effects of localized auditory information on visual target detection performance using a helmet-mounted display.

    Science.gov (United States)

    Nelson, W T; Hettinger, L J; Cunningham, J A; Brickman, B J; Haas, M W; McKinley, R L

    1998-09-01

    An experiment was conducted to evaluate the effects of localized auditory information on visual target detection performance. Visual targets were presented on either a wide field-of-view dome display or a helmet-mounted display and were accompanied by either localized, nonlocalized, or no auditory information. The addition of localized auditory information resulted in significant increases in target detection performance and significant reductions in workload ratings as compared with conditions in which auditory information was either nonlocalized or absent. Qualitative and quantitative analyses of participants' head motions revealed that the addition of localized auditory information resulted in extremely efficient and consistent search strategies. Implications for the development and design of multisensory virtual environments are discussed. Actual or potential applications of this research include the use of spatial auditory displays to augment visual information presented in helmet-mounted displays, thereby leading to increases in performance efficiency, reductions in physical and mental workload, and enhanced spatial awareness of objects in the environment.

  7. Acoustic Noise of MRI Scans of the Internal Auditory Canal and Potential for Intracochlear Physiological Changes

    CERN Document Server

    Busada, M A; Ibrahim, G; Huckans, J H

    2012-01-01

    Magnetic resonance imaging (MRI) is a widely used medical imaging technique to assess the health of the auditory (vestibulocochlear) nerve. A well-known problem with MRI machines is that the acoustic noise they generate during a scan can cause auditory temporary threshold shifts (TTS) in humans. In addition, studies have shown that excessive noise in general can cause rapid physiological changes in constituents of the auditory nerve within the cochlea. Here, we report in-situ measurements of the acoustic noise from a 1.5 Tesla MRI machine (GE Signa) during scans specific to auditory nerve assessment. The measured average and maximum noise levels corroborate earlier investigations in which TTS occurred. We briefly discuss the potential for physiological changes to the intracochlear branches of the auditory nerve, as well as iatrogenic misdiagnoses of intralabyrinthine and intracochlear schwannomas due to hypertrophy of the auditory nerve within the cochlea during MRI assessment.
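
    The record reports an "average" noise level without specifying how it was computed. If it is an equivalent continuous level (Leq), the average is an energy average of the sound pressure levels rather than an arithmetic mean of decibel values; the sketch below shows the conventional formula applied to made-up readings, not the reported measurements.

      # Energy (Leq-style) average of sound pressure levels, the conventional way
      # to average dB values; sample levels below are made up, not the measurements
      # reported in this record.
      import numpy as np

      def leq(levels_db):
          return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

      samples = [98, 102, 105, 99, 110]   # hypothetical dB SPL readings
      print(f"arithmetic mean: {np.mean(samples):.1f} dB, energy average: {leq(samples):.1f} dB")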

  8. Electrophysiological evidence for a general auditory prediction deficit in adults who stutter.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2015-11-01

    We previously found that stuttering individuals do not show the typical auditory modulation observed during speech planning in nonstuttering individuals. In this follow-up study, we further elucidate this difference by investigating whether stuttering speakers' atypical auditory modulation is observed only when sensory predictions are based on movement planning or also when predictable auditory input is not a consequence of one's own actions. We recorded 10 stuttering and 10 nonstuttering adults' auditory evoked potentials in response to random probe tones delivered while anticipating either speaking aloud or hearing one's own speech played back and in a control condition without auditory input (besides probe tones). N1 amplitude of nonstuttering speakers was reduced prior to both speaking and hearing versus the control condition. Stuttering speakers, however, showed no N1 amplitude reduction in either the speaking or hearing condition as compared with control. Thus, findings suggest that stuttering speakers have general auditory prediction difficulties.

  9. Metabolic emergent auditory effects by means of physical particle modeling : the example of musical sand

    OpenAIRE

    Luciani, Annie; Castagné, Nicolas; Tixier, Nicolas

    2003-01-01

    In the context of Computer Music, physical modeling is usually dedicated to the modeling of sound sources or physical instruments. This paper presents an innovative use of physical modeling in order to model and synthesize complex auditory effects such as collective acoustic phenomena producing metabolic emergent auditory organizations. As a case study, we chose the "dune effect", which in open nature leads both to visual and auditory effects. The article introduces tw...

  10. Different Auditory Feedback Control for Echolocation and Communication in Horseshoe Bats

    OpenAIRE

    Ying Liu; Jiang Feng; Walter Metzner

    2013-01-01

    Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback also seems necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs between echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echoloca...

  11. Musical experience, auditory perception and reading-related skills in children.

    Directory of Open Access Journals (Sweden)

    Karen Banai

    Full Text Available BACKGROUND: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading-related skills differs between children with different amounts of musical experience. METHODOLOGY/PRINCIPAL FINDINGS: Third-grade children with various degrees of musical experience were tested on a battery of auditory processing and reading-related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading-related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading-related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. CONCLUSIONS/SIGNIFICANCE: Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading-related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance the reading-related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and

  12. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we presented the visual stimulus (CS) coincident with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in the perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able

  13. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general.
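
    The "spectral amplitude of higher harmonic components" is typically the Fourier amplitude of the brainstem response at integer multiples of the stimulus fundamental. A minimal sketch of that measurement on a synthetic response follows; the 100 Hz fundamental, sampling rate and noise level are illustrative assumptions, not the parameters of this study.

      # Sketch of harmonic spectral-amplitude measurement on a synthetic brainstem
      # response. F0, fs and noise level are illustrative assumptions.
      import numpy as np

      fs, f0, dur = 10000, 100, 0.2
      t = np.arange(0, dur, 1 / fs)
      rng = np.random.default_rng(1)
      # Fake frequency-following response: F0 plus weaker harmonics, plus noise.
      abr = sum(a * np.sin(2 * np.pi * f0 * k * t) for k, a in enumerate([1.0, 0.4, 0.25, 0.15], 1))
      abr += 0.3 * rng.standard_normal(t.size)

      spectrum = np.abs(np.fft.rfft(abr)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for k in range(1, 5):
          amp = spectrum[np.argmin(np.abs(freqs - k * f0))]
          print(f"H{k} ({k * f0} Hz): amplitude ~ {amp:.3f}")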

  14. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
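
    The abstract does not specify how the inner-hair-cell and auditory-nerve stages were modeled. As a generic, hedged illustration of the rectify-filter-saturate-spike idea that such stages often use, a small sketch follows; every constant in it (cutoff, rates, gain) is an assumption, not part of the model described in the record.

      # Generic rectify -> low-pass -> saturate -> Poisson-spike sketch of an
      # auditory-nerve-like stage. All constants are assumptions, not the model
      # described in this record.
      import numpy as np
      from scipy.signal import butter, sosfilt

      def fiber_rate(band_signal, fs, max_rate=250.0, spont_rate=60.0):
          rectified = np.maximum(band_signal, 0.0)                   # half-wave rectification
          sos = butter(2, 1000.0, btype="low", fs=fs, output="sos")  # crude IHC low-pass
          drive = sosfilt(sos, rectified)
          return spont_rate + (max_rate - spont_rate) * np.tanh(3.0 * drive)  # saturation

      def poisson_spikes(rate_hz, fs, seed=0):
          rng = np.random.default_rng(seed)
          return rng.random(rate_hz.size) < rate_hz / fs             # spike per sample bin

      fs = 20000
      t = np.arange(0, 0.1, 1 / fs)
      tone = 0.5 * np.sin(2 * np.pi * 2000 * t)
      spikes = poisson_spikes(fiber_rate(tone, fs), fs)
      print(f"spike count in 100 ms: {spikes.sum()}")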

  15. Clinical Observation on Treatment of Auditory Hallucinosis by Electroacupuncture--A Report of 30 Cases

    Institute of Scientific and Technical Information of China (English)

    Lin Hong; Li Cheng

    2005-01-01

    Auditory hallucinosis, a kind of hallucination in sensory disturbance, is very common in the psychiatric clinic. Patients with this disorder hear sounds of various kinds and natures in the absence of any appropriate external stimulus. This is especially true in patients with schizophrenia, organic psychonosema, and alcoholic psychonosema. At present, neuroleptic agents are often used to relieve auditory hallucinosis during treatment of the underlying mental disease, and there is as yet no therapy that is reliably effective in treating auditory hallucinosis. With electro-acupuncture, the authors have treated 30 cases of auditory hallucinosis with satisfactory results. A report follows.

  16. Multimodal morphometry and functional magnetic resonance imaging in schizophrenia and auditory hallucinations

    OpenAIRE

    García-Martí, Gracián; Aguilar, Eduardo Jesús; Martí-Bonmatí, Luis; Escartí, M José; Sanjuán, Julio

    2012-01-01

    AIM: To validate a multimodal [structural and functional magnetic resonance (MR)] approach, as coincident brain clusters are hypothesized to correlate with the clinical severity of auditory hallucinations.

  17. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance, especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under focused auditory attention, by means of magnetoencephalography (MEG). Results Identical auditory stimuli were used in both conditions, but presented in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing condition than in the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.
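
    Band-eliminated ("notched") noise of the kind used here can be produced by zeroing a frequency band in the spectrum of broadband noise. The sketch below is only meant to make the stimulus construction concrete; the centre frequency and stop-band widths are assumptions, not the study's values.

      # Sketch: generate noise with a spectral notch ("band-eliminated noise")
      # around a probe-tone frequency. Centre frequency and notch widths are
      # illustrative assumptions.
      import numpy as np

      def band_eliminated_noise(fs, dur, f_center, notch_width, seed=0):
          rng = np.random.default_rng(seed)
          n = int(fs * dur)
          spec = np.fft.rfft(rng.standard_normal(n))
          freqs = np.fft.rfftfreq(n, 1 / fs)
          stop = np.abs(freqs - f_center) < notch_width / 2
          spec[stop] = 0.0                      # eliminate the band around f_center
          noise = np.fft.irfft(spec, n)
          return noise / np.max(np.abs(noise))  # normalize peak amplitude

      fs = 44100
      narrow = band_eliminated_noise(fs, 1.0, f_center=1000, notch_width=200)
      wide = band_eliminated_noise(fs, 1.0, f_center=1000, notch_width=800)
      print(narrow.shape, wide.shape)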

  18. Similar structural dimensions in bushcricket auditory organs in spite of different foreleg size: consequences for auditory tuning.

    Science.gov (United States)

    Rössler, W; Kalmring, K

    1994-11-01

    The bushcricket species Decticus albifrons, Decticus verrucivorus and Pholidoptera griseoaptera (Tettigoniidae) belong to the same subfamily (Decticinae) but differ significantly in body size. In spite of the great differences in the dimensions of the forelegs, where the auditory organs are located, the most sensitive range of the hearing threshold lies between 6 and 25 kHz in each case. Significant differences are present only in the frequency range from 2 to 5 kHz and above 25 kHz. The anatomy of the auditory receptor organs was compared quantitatively, using the techniques of semi-thin sectioning and computer-guided morphometry. The overall number of scolopidia and the length of the crista acustica differ in the three species, but the relative distribution of scolopidia along the crista acustica is very similar. Additionally, the scolopidia and their attachment structures (tectorial membrane, dorsal tracheal wall, cap cells) are of equal size at equivalent relative positions along the crista acustica. The results indicate that the constant relations and dimensions of corresponding structures within the cristae acusticae of the three species are responsible for the similarities in the tuning of the auditory thresholds.

  19. Auditory responses and stimulus-specific adaptation in rat auditory cortex are preserved across NREM and REM sleep.

    Science.gov (United States)

    Nir, Yuval; Vyazovskiy, Vladyslav V; Cirelli, Chiara; Banks, Matthew I; Tononi, Giulio

    2015-05-01

    Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic "gate," which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, non-rapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences were small). We further compared responses during sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13-20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas.
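
    SSA strength is commonly summarized by an index contrasting the response to a tone when it is rare (deviant) against the response to the same tone when it is common (standard). The sketch below computes that standard index on made-up spike counts; it may or may not be the exact measure behind the 13-20% figure quoted in the record.

      # Common SSA index: (deviant - standard) / (deviant + standard).
      # Spike counts below are made up; this is not the study's data.
      import numpy as np

      def ssa_index(resp_deviant, resp_standard):
          d, s = np.mean(resp_deviant), np.mean(resp_standard)
          return (d - s) / (d + s)

      deviant_counts = np.array([12, 9, 11, 13, 10])    # hypothetical spikes per trial
      standard_counts = np.array([8, 7, 9, 8, 7])
      print(f"SSA index: {ssa_index(deviant_counts, standard_counts):.2f}")  # ~0.17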

  20. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise.
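
    The "auditory alpha power" measure refers to power in roughly the 8-12 Hz band of the auditory-cortex signal. As a concrete but purely illustrative sketch (synthetic signal, Welch's method, assumed band limits; not the authors' MEG pipeline):

      # Sketch: estimate 8-12 Hz (alpha) band power from a time series using
      # Welch's method. The signal below is synthetic, not MEG data.
      import numpy as np
      from scipy.signal import welch

      def alpha_power(x, fs, band=(8.0, 12.0)):
          freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return psd[mask].sum() * (freqs[1] - freqs[0])   # integrated band power

      fs = 250
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(2)
      x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)  # 10 Hz rhythm + noise
      print(f"alpha power: {alpha_power(x, fs):.3f}")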