WorldWideScience

Sample records for non-verbal auditory cognition

  1. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, to evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention and in identification and short-term memorization of environmental sounds, while showing no impairment in basic acoustic processing compared to healthy subjects. The deficits observed before and after ATL are therefore most likely related to epileptic neuropathological processes. In patients with drug-resistant TLE, ATL thus seems to significantly improve seizure control without producing additional auditory deficits.

  2. On the embedded cognition of non-verbal narratives

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio; Baceviciute, Sarune

    2014-01-01

    Acknowledging that narratives are an important resource in human communication and cognition, the focus of this article is on the cognitive aspects of involvement with visual and auditory non-verbal narratives, particularly in relation to the newest immersive media and digital interactive...... representational technologies. We consider three relevant trends in narrative studies that have emerged in the 60 years of cognitive and digital revolution. The issue at hand could have implications for developmental psychology, pedagogics, cognitive science, cognitive psychology, ethology and evolutionary studies...... of language. In particular, it is of great importance for narratology in relation to interactive media and new representational technologies. Therefore we outline a research agenda for a bio-cognitive semiotic interdisciplinary investigation on how people understand, react to, and interact with narratives...

  3. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    Science.gov (United States)

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds, were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).

  4. Non-verbal numerical cognition: from reals to integers.

    Science.gov (United States)

    Gallistel; Gelman

    2000-02-01

    Data on numerical processing by verbal (human) and non-verbal (animal and human) subjects are integrated by the hypothesis that a non-verbal counting process represents discrete (countable) quantities by means of magnitudes with scalar variability. These appear to be identical to the magnitudes that represent continuous (uncountable) quantities such as duration. The magnitudes representing countable quantity are generated by a discrete incrementing process, which defines next magnitudes and yields a discrete ordering. In the case of continuous quantities, the continuous accumulation process does not define next magnitudes, so the ordering is also continuous ('dense'). The magnitudes representing both countable and uncountable quantity are arithmetically combined in, for example, the computation of the income to be expected from a foraging patch. Thus, on the hypothesis presented here, the primitive machinery for arithmetic processing works with real numbers (magnitudes).
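
    The scalar-variability hypothesis summarized above lends itself to a short simulation. The sketch below (Python, with an illustrative Weber fraction w chosen purely for demonstration) draws noisy mental magnitudes whose spread grows with the represented count, so that discriminability depends on the ratio of two counts rather than on their difference.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_magnitude(n, w=0.15, size=10_000):
        """Mental magnitude for count n with scalar variability:
        the standard deviation grows in proportion to the mean (Weber fraction w)."""
        return rng.normal(loc=n, scale=w * n, size=size)

    # Discriminating 8 vs 10 is about as hard as 16 vs 20: accuracy tracks the
    # ratio of the two counts, not their absolute difference.
    for a, b in [(8, 10), (16, 20), (8, 16)]:
        p_larger = np.mean(noisy_magnitude(b) > noisy_magnitude(a))
        print(f"{a} vs {b}: P(judge {b} larger) = {p_larger:.2f}")
    ```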

  5. Judging the urgency of non-verbal auditory alarms: a case study.

    Science.gov (United States)

    Arrabito, G Robert; Mondor, Todd; Kent, Kimberley

    2004-06-22

    When designed correctly, non-verbal auditory alarms can convey different levels of urgency to the aircrew, and thereby permit the operator to establish the appropriate level of priority to address the alarmed condition. The conveyed level of urgency of five non-verbal auditory alarms presently used in the Canadian Forces CH-146 Griffon helicopter was investigated. Pilots of the CH-146 Griffon helicopter and non-pilots rated the perceived urgency of the signals using a rating scale. The pilots also ranked the urgency of the alarms in a post-experiment questionnaire to reflect their assessment of the actual situation that triggers each alarm. The results of this investigation revealed that participants' ratings of perceived urgency appear to be based on the acoustic properties of the alarms, which are known to affect the listener's perceived level of urgency. Although for 28% of the pilots the mapping between perceived urgency and the judged urgency of the triggering situation was statistically significant for three of the five alarms, the overall data suggest that the triggering situations are not adequately conveyed by the acoustic parameters inherent in the alarms. The pilots' judgement of the triggering situation was intended as a means of evaluating the reliability of the alerting system. These data are discussed with respect to proposed enhancements in alerting systems as they relate to the problem of phase of flight. These results call for more serious consideration of incorporating situational awareness into the design and assignment of auditory alarms in aircraft.

  6. Multi-level prediction of short-term outcome of depression : non-verbal interpersonal processes, cognitions and personality traits

    NARCIS (Netherlands)

    Geerts, E; Bouhuys, N

    1998-01-01

    It was hypothesized that personality factors determine the short-term outcome of depression, and that they may do this via non-verbal interpersonal interactions and via cognitive interpretations of non-verbal behaviour. Twenty-six hospitalized depressed patients entered the study. Personality

  7. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19–30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  8. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

    In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  9. Linking social cognition with social interaction: Non-verbal expressivity, social competence and "mentalising" in patients with schizophrenia spectrum disorders

    Directory of Open Access Journals (Sweden)

    Lehmkämper Caroline

    2009-01-01

    Full Text Available Background: Research has shown that patients with schizophrenia spectrum disorders (SSD) can be distinguished from controls on the basis of their non-verbal expression. For example, patients with SSD use facial expressions less than healthy controls to invite and sustain social interaction. Here, we sought to examine whether non-verbal expressivity in patients corresponds with their impoverished social competence and neurocognition. Method: Fifty patients with SSD were videotaped during interviews. Non-verbal expressivity was evaluated using the Ethological Coding System for Interviews (ECSI). Social competence was measured using the Social Behaviour Scale, and psychopathology was rated using the Positive and Negative Syndrome Scale. Neurocognitive variables included measures of IQ, executive functioning, and two mentalising tasks, which tapped into the ability to appreciate the mental states of story characters. Results: Non-verbal expressivity was reduced in patients relative to controls. Lack of "prosocial" non-verbal signals was associated with poor social competence and, partially, with impaired understanding of others' minds, but not with non-social cognition or medication. Conclusion: This is the first study to link deficits in non-verbal expressivity to levels of social skills and awareness of others' thoughts and intentions in patients with SSD.

  10. Frontal brain deactivation during a non-verbal cognitive judgement bias test in sheep.

    Science.gov (United States)

    Guldimann, Kathrin; Vögeli, Sabine; Wolf, Martin; Wechsler, Beat; Gygax, Lorenz

    2015-02-01

    Animal welfare concerns have raised an interest in animal affective states. These states also play an important role in the proximate control of behaviour. Due to their potential to modulate short-term emotional reactions, one specific focus is on long-term affective states, that is, mood. These states can be assessed by using non-verbal cognitive judgement bias paradigms. Here, we conducted a spatial variant of such a test on 24 focal animals that were kept under either unpredictable, stimulus-poor or predictable, stimulus-rich housing conditions to induce differential mood states. Based on functional near-infrared spectroscopy, we measured haemodynamic frontal brain reactions during 10 s in which the sheep could observe the configuration of the cognitive judgement bias trial before indicating their assessment based on the go/no-go reaction. We used (generalised) mixed-effects models to evaluate the data. Sheep from the unpredictable, stimulus-poor housing conditions took longer and were less likely to reach the learning criterion and reacted slightly more optimistically in the cognitive judgement bias test than sheep from the predictable, stimulus-rich housing conditions. A frontal cortical increase in deoxy-haemoglobin [HHb] and a decrease in oxy-haemoglobin [O2Hb] were observed during the visual assessment of the test situation by the sheep, indicating a frontal cortical brain deactivation. This deactivation was more pronounced with the negativity of the test situation, which was reflected by the provenance of the sheep from the unpredictable, stimulus-poor housing conditions, the proximity of the cue to the negatively reinforced cue location, or the absence of a go reaction in the trial. It seems that (1) sheep from the unpredictable, stimulus-poor in comparison to sheep from the predictable, stimulus-rich housing conditions dealt less easily with the test conditions rich in stimuli, that (2) long-term housing conditions seemingly did not influence mood
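
    As a rough illustration of the (generalised) mixed-effects approach mentioned above, the following Python sketch fits a random-intercept model of the frontal haemodynamic response by housing condition and cue position. The file name, column names and model structure are assumptions made for illustration; the study's actual models may differ.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format trial data: frontal [O2Hb] change, housing condition,
    # cue position relative to the reinforced locations, and a sheep identifier.
    df = pd.read_csv("sheep_fnirs_trials.csv")

    # A random intercept per animal accounts for repeated trials within each sheep.
    model = smf.mixedlm("o2hb_change ~ housing * cue_position",
                        data=df, groups=df["sheep_id"])
    print(model.fit().summary())
    ```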

  11. Adverse Life Events and Emotional and Behavioral Problems in Adolescence: The Role of Non-Verbal Cognitive Ability and Negative Cognitive Errors

    Science.gov (United States)

    Flouri, Eirini; Panourgia, Constantina

    2011-01-01

    The aim of this study was to test whether negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the moderator effect of non-verbal cognitive ability on the association between adverse life events (life stress) and emotional and behavioral problems in adolescence. The sample consisted of 430…

  12. Mood As Cumulative Expectation Mismatch: A Test of Theory Based on Data from Non-verbal Cognitive Bias Tests

    Directory of Open Access Journals (Sweden)

    Camille M. C. Raoult

    2017-12-01

    Full Text Available Affective states are known to influence behavior and cognitive processes. To assess mood (moderately long-term affective states), the cognitive judgment bias test was developed and has been widely used in various animal species. However, little is known about how mood changes, how mood can be experimentally manipulated, and how mood then feeds back into cognitive judgment. A recent theory argues that mood reflects the cumulative impact of differences between obtained outcomes and expectations. Here, expectations refer to an established context. Situations in which an established context fails to match an outcome are then perceived as mismatches of expectation and outcome. We take advantage of the large number of studies published on non-verbal cognitive bias tests in recent years (95 studies with a total of 162 independent tests) to test whether cumulative mismatch could indeed have led to the observed mood changes. Based on a criteria list, we assessed whether mismatch had occurred with the experimental procedure used to induce mood (mood induction mismatch), or in the context of the non-verbal cognitive bias procedure (testing mismatch). For the mood induction mismatch, we scored the mismatch between the subjects' potential expectations and the manipulations conducted for inducing mood, whereas for the testing mismatch, we scored mismatches that may have occurred during the actual testing. We then investigated whether these two types of mismatch can predict the actual outcome of the cognitive bias study. The present evaluation shows that mood induction mismatch cannot reliably predict the success of a cognitive bias test. On the other hand, testing mismatch can modulate or even invert the expected outcome. We think cognitive bias studies should more specifically aim at creating expectation mismatch while inducing mood states to test the cumulative mismatch theory more properly. Furthermore, testing mismatch should be avoided as much as possible.

  13. Role of Auditory Non-Verbal Working Memory in Sentence Repetition for Bilingual Children with Primary Language Impairment

    Science.gov (United States)

    Ebert, Kerry Danahy

    2014-01-01

    Background: Sentence repetition performance is attracting increasing interest as a valuable clinical marker for primary (or specific) language impairment (LI) in both monolingual and bilingual populations. Multiple aspects of memory appear to contribute to sentence repetition performance, but non-verbal memory has not yet been considered. Aims: To…

  14. Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study.

    Directory of Open Access Journals (Sweden)

    Catherine Y Wan

    Full Text Available Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT, 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

  15. Adults with Asperger Syndrome with and without a Cognitive Profile Associated with "Non-Verbal Learning Disability." A Brief Report

    Science.gov (United States)

    Nyden, Agneta; Niklasson, Lena; Stahlberg, Ola; Anckarsater, Henrik; Dahlgren-Sandberg, Annika; Wentz, Elisabet; Rastam, Maria

    2010-01-01

    Asperger syndrome (AS) and non-verbal learning disability (NLD) are both characterized by impairments in motor coordination, visuo-perceptual abilities, pragmatics and comprehension of language and social understanding. NLD is also defined as a learning disorder affecting functions in the right cerebral hemisphere. The present study investigates…

  16. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    Science.gov (United States)

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning, in children fitted with auditory brainstem implantation (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory channel, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities with associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities that cannot benefit from a CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open-set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere

  17. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  18. Impact of Aging on the Auditory System and Related Cognitive Functions: A Narrative Review

    Directory of Open Access Journals (Sweden)

    Dona M. P. Jayakody

    2018-03-01

    Full Text Available Age-related hearing loss (ARHL), or presbycusis, is a chronic health condition that affects approximately one-third of the world's population. The peripheral and central hearing alterations associated with age-related hearing loss have a profound impact on perception of verbal and non-verbal auditory stimuli. The high prevalence of hearing loss in older adults corresponds to the increased frequency of dementia in this population. Therefore, researchers have focused their attention on age-related central effects that occur independent of peripheral hearing loss, as well as central effects of peripheral hearing loss and its association with cognitive decline and dementia. Here we review the current evidence for age-related changes of the peripheral and central auditory system and the relationship between hearing loss and pathological cognitive decline and dementia. Furthermore, there is a paucity of evidence on the relationship between ARHL and established biomarkers of Alzheimer's disease, the most common cause of dementia. Such studies are critical for considering any causal relationship between dementia and ARHL. While this narrative review examines the pathophysiological alterations in both the peripheral and central auditory system and their clinical implications, the question remains unanswered whether hearing loss causes cognitive impairment or vice versa.

  19. Cognitive mechanisms associated with auditory sensory gating

    Science.gov (United States)

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
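
    The step of "accounting for" fluid intelligence and working memory can be read as a partial correlation. Below is a minimal Python sketch of that idea using ordinary least-squares residualisation; the simulated scores and variable names are purely illustrative and are not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    def partial_corr(x, y, covariates):
        """Correlate x and y after regressing the covariates out of both."""
        Z = np.column_stack([np.ones(len(x)), covariates])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        return stats.pearsonr(rx, ry)

    # Toy data standing in for the cognitive battery (illustrative only).
    rng = np.random.default_rng(1)
    n = 60
    iq = rng.normal(size=n)                        # fluid intelligence
    wm = rng.normal(size=n)                        # working memory
    gating = 0.5 * iq + rng.normal(size=n)         # P50 suppression measure
    latent_inhibition = 0.4 * gating + rng.normal(size=n)
    r, p = partial_corr(latent_inhibition, gating, np.column_stack([iq, wm]))
    print(f"partial r = {r:.2f}, p = {p:.3f}")
    ```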

  20. [Non-verbal communication in Alzheimer's disease].

    Science.gov (United States)

    Schiaratura, Loris Tamara

    2008-09-01

    This review underlines the importance of non-verbal communication in Alzheimer's disease. A social psychological perspective of communication is privileged. Non-verbal behaviors such as looks, head nods, hand gestures, body posture or facial expression provide a lot of information about interpersonal attitudes, behavioral intentions, and emotional experiences. Therefore they play an important role in the regulation of interaction between individuals. Non-verbal communication is effective in Alzheimer's disease even in the late stages. Patients still produce non-verbal signals and are responsive to others. Nevertheless, few studies have been devoted to the social factors influencing the non-verbal exchange. Misidentification and misinterpretation of behaviors may have negative consequences for the patients. Thus, improving the comprehension of and the response to non-verbal behavior would increase first the quality of the interaction, then the physical and psychological well-being of patients and that of caregivers. The role of non-verbal behavior in social interactions should be approached from an integrative and functional point of view.

  1. Anatomical Correlates of Non-Verbal Perception in Dementia Patients

    Directory of Open Access Journals (Sweden)

    Pin-Hsuan Lin

    2016-08-01

    Full Text Available Purpose: Patients with dementia who have dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis for highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer's dementia (AD), 15 with behavioral variant fronto-temporal dementia (bv-FTD) and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptual tasks were included to test the ability to compare pitch changes and scale-violated melodies, and to name environmental sounds and associate them with pictures. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to compare groups and to correlate the volumetric measures with task scores. Results: The SD group scored the lowest among the three groups in the pitch and scale-violated melody tasks. In the environmental sound test, the SD group was also impaired both in naming and in associating sounds with pictures. The AD and bv-FTD groups, compared with the controls, showed no differences in any of the tests. VBM with task score correlation showed that atrophy in the right supra-marginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody or environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits.

  2. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    Science.gov (United States)

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  3. Physical growth and non-verbal intelligence: Associations in Zambia

    Science.gov (United States)

    Hein, Sascha; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2014-01-01

    Objectives: To investigate normative developmental BMI trajectories and associations of physical growth indicators (i.e., height, weight, head circumference [HC], and body mass index [BMI]) with non-verbal intelligence in an understudied population of children from Sub-Saharan Africa. Study design: A sample of 3981 students (50.8% male), grades 3 to 7, with a mean age of 12.75 years, was recruited from 34 rural Zambian schools. Children with low scores on vision and hearing screenings were excluded. Height, weight and HC were measured, and non-verbal intelligence was assessed using UNIT-symbolic memory and KABC-II-triangles. Results: Students in higher grades had a higher BMI over and above the effect of age. Girls showed a marginally higher BMI, although the BMI of both boys and girls was approximately 1 SD below the international CDC and WHO norms. Controlling for the effect of age, non-verbal intelligence showed small but significant positive relationships with HC (r = .17) and BMI (r = .11). HC and BMI accounted for 1.9% of the variance in non-verbal intelligence, over and above the contribution of grade and sex. Conclusions: BMI-for-age growth curves of Zambian children follow observed worldwide developmental trajectories. The positive relationships between BMI and intelligence underscore the importance of providing adequate nutritional and physical growth opportunities for children worldwide and in sub-Saharan Africa in particular. Directions for future studies are discussed with regard to maximizing the cognitive potential of all rural African children. PMID:25217196
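
    The "over and above grade and sex" estimate corresponds to a hierarchical regression, which could be sketched in Python as below. The data file and column names are assumptions made only for illustration.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-child records: non-verbal score, head circumference (hc),
    # BMI, grade and sex (all column names are illustrative).
    df = pd.read_csv("zambia_growth_cognition.csv")

    # Compare variance explained before and after adding the growth indicators.
    base = smf.ols("nonverbal_score ~ C(grade) + C(sex)", data=df).fit()
    full = smf.ols("nonverbal_score ~ C(grade) + C(sex) + hc + bmi", data=df).fit()
    print(f"delta R^2 = {full.rsquared - base.rsquared:.3f}")
    ```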

  4. Auditory and cognitive performance in elderly musicians and nonmusicians.

    Directory of Open Access Journals (Sweden)

    Massimo Grassi

    Full Text Available Musicians represent a model for examining brain and behavioral plasticity in terms of cognitive and auditory profile, but few studies have investigated whether elderly musicians have better auditory and cognitive abilities than nonmusicians. The aim of the present study was to examine whether being a professional musician attenuates the normal age-related changes in hearing and cognition. Elderly musicians still active in their profession were compared with nonmusicians on auditory performance (absolute threshold; frequency, intensity, duration and spectral shape discrimination; gap and sinusoidal amplitude-modulation detection), and on simple (short-term memory) and more complex and higher-order (working memory [WM] and visuospatial abilities) cognitive tasks. The sample consisted of adults at least 65 years of age. The results showed that older musicians had similar absolute thresholds but better supra-threshold discrimination abilities than nonmusicians in four of the six auditory tasks administered. They also had better WM performance and stronger visuospatial abilities than nonmusicians. No differences were found between the two groups' short-term memory. Frequency discrimination and gap detection among the auditory measures, and the WM complex span tasks and one of the visuospatial tasks among the cognitive ones, proved to be very good classifiers of the musicians. These findings suggest that life-long music training may be associated with enhanced auditory and cognitive performance, including complex cognitive skills, in advanced age. However, whether this music training represents a protective factor or not needs further investigation.

  5. Individual differences in non-verbal number acuity correlate with maths achievement.

    Science.gov (United States)

    Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa

    2008-10-02

    Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.
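
    Number-sense acuity in studies of this kind is commonly summarised by a Weber fraction w fitted to choice accuracy. A minimal sketch of the standard psychophysical model (assuming Gaussian representations with scalar variability; the study's exact fitting procedure may differ) follows:

    ```python
    import numpy as np
    from scipy.special import erfc

    def p_correct(n1, n2, w):
        """Probability of correctly judging which of two numerosities is larger,
        given Gaussian representations with scalar variability and Weber fraction w."""
        return 1 - 0.5 * erfc(abs(n1 - n2) / (np.sqrt(2) * w * np.sqrt(n1**2 + n2**2)))

    # A sharper number sense (smaller w) separates close ratios better.
    for w in (0.15, 0.25, 0.35):
        print(f"w = {w:.2f}: P(correct) for 10 vs 12 = {p_correct(10, 12, w):.2f}")
    ```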

  6. Engagement with the auditory processing system during targeted auditory cognitive training mediates changes in cognitive outcomes in individuals with schizophrenia.

    Science.gov (United States)

    Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia

    2016-11-01

    Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed models repeated measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
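
    The subject-specific APS trajectories described above can be visualised with a lowess smoother. The sketch below uses made-up numbers for a single hypothetical participant, simply to show the "steep initial change then plateau" shape; it is not the study's data or its exact analysis.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    # Hypothetical APS (ms) measured after successive amounts of training.
    hours  = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
    aps_ms = np.array([95, 80, 70, 62, 58, 57, 57, 56, 56], dtype=float)

    # Lowess yields the smoothed trajectory: fast improvement up to ~20 h, then a plateau.
    for h, v in lowess(aps_ms, hours, frac=0.6, return_sorted=True):
        print(f"{h:4.0f} h -> {v:5.1f} ms")
    ```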

  7. Auditory verbal hallucinations and cognitive functioning in healthy individuals.

    Science.gov (United States)

    Daalman, Kirstin; van Zandvoort, Martine; Bootsman, Florian; Boks, Marco; Kahn, René; Sommer, Iris

    2011-11-01

    Auditory verbal hallucinations (AVH) are a characteristic symptom in schizophrenia, and also occur in the general, non-clinical population. In schizophrenia patients, several specific cognitive deficits, such as in speech processing, working memory, source memory, attention, inhibition, episodic memory and self-monitoring have been associated with auditory verbal hallucinations. Such associations are interesting, as they may identify specific cognitive traits that constitute a predisposition for AVH. However, it is difficult to disentangle a specific relation with AVH in patients with schizophrenia, as so many other factors can affect the performance on cognitive tests. Examining the cognitive profile of healthy individuals experiencing AVH may reveal a more direct association between AVH and aberrant cognitive functioning in a specific domain. For the current study, performance in executive functioning, memory (both short- and long-term), processing speed, spatial ability, lexical access, abstract reasoning, language and intelligence performance was compared between 101 healthy individuals with AVH and 101 healthy controls, matched for gender, age, handedness and education. Although performance of both groups was within the normal range, not clinically impaired, significant differences between the groups were found in the verbal domain as well as in executive functioning. Performance on all other cognitive domains was similar in both groups. The predisposition to experience AVH is associated with lower performance in executive functioning and aberrant language performance. This association might be related to difficulties in the inhibition of irrelevant verbal information. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Persistent non-verbal memory impairment in remitted major depression - caused by encoding deficits?

    Science.gov (United States)

    Behnken, Andreas; Schöning, Sonja; Gerss, Joachim; Konrad, Carsten; de Jong-Meyer, Renate; Zwanzger, Peter; Arolt, Volker

    2010-04-01

    While neuropsychological impairments are well described in acute phases of major depressive disorders (MDD), little is known about the neuropsychological profile in remission. There is evidence for episodic memory impairments in both acute depressed and remitted patients with MDD. Learning and memory depend on individuals' ability to organize information during learning. This study investigates non-verbal memory functions in remitted MDD and whether nonverbal memory performance is mediated by organizational strategies whilst learning. 30 well-characterized fully remitted individuals with unipolar MDD and 30 healthy controls matching in age, sex and education were investigated. Non-verbal learning and memory were measured by the Rey-Osterrieth-Complex-Figure-Test (RCFT). The RCFT provides measures of planning, organizational skills, perceptual and non-verbal memory functions. For assessing the mediating effects of organizational strategies, we used the Savage Organizational Score. Compared to healthy controls, participants with remitted MDD showed more deficits in their non-verbal memory function. Moreover, participants with remitted MDD demonstrated difficulties in organizing non-verbal information appropriately during learning. In contrast, no impairments regarding visual-spatial functions in remitted MDD were observed. Except for one patient, all the others were taking psychopharmacological medication. The neuropsychological function was solely investigated in the remitted phase of MDD. Individuals with MDD in remission showed persistent non-verbal memory impairments, modulated by a deficient use of organizational strategies during encoding. Therefore, our results strongly argue for additional therapeutic interventions in order to improve these remaining deficits in cognitive function. Copyright 2009 Elsevier B.V. All rights reserved.

  9. A Meta-study of musicians' non-verbal interaction

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer; Marchetti, Emanuela

    2010-01-01

    interruptions. Hence, despite the fact that the skill to engage in a non-verbal interaction is described as tacit knowledge, it is fundamental for both musicians and teachers (Davidson and Good 2002). Typical observed non-verbal cues are for example: physical gestures, modulations of sound, steady eye contact...

  10. Non-Verbal Communication in Children with Visual Impairment

    Science.gov (United States)

    Mallineni, Sharmila; Nutheti, Rishita; Thangadurai, Shanimole; Thangadurai, Puspha

    2006-01-01

    The aim of this study was to determine: (a) whether children with visual and additional impairments show any non-verbal behaviors, and if so, what the common behaviors were; (b) whether two rehabilitation professionals interpreted the non-verbal behaviors similarly; and (c) whether a speech pathologist and a rehabilitation professional interpreted…

  11. Guidelines for Teaching Non-Verbal Communications Through Visual Media

    Science.gov (United States)

    Kundu, Mahima Ranjan

    1976-01-01

    There is a natural unique relationship between non-verbal communication and visual media such as television and film. Visual media will have to be used extensively--almost exclusively--in teaching non-verbal communications, as well as other methods requiring special teaching skills. (Author/ER)

  12. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution...... of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists...... between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle...

  13. Effects of proactive interference on non-verbal working memory.

    Science.gov (United States)

    Cyr, Marilyn; Nee, Derek E; Nelson, Eric; Senger, Thea; Jonides, John; Malapani, Chara

    2017-02-01

    Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using both objects with high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., low-nameability condition). We also found that the addition of distinctive features (color, verbal label) increased performance in rejecting intrusion probes, most likely through an increase in discriminability between content-context bindings in WM.

  14. Changes in regional cerebral blood flow during auditory cognitive tasks

    International Nuclear Information System (INIS)

    Ohyama, Masashi; Kitamura, Shin; Terashi, Akiro; Senda, Michio.

    1993-01-01

    In order to investigate the relation between auditory cognitive function and regional brain activation, we measured the changes in regional cerebral blood flow (CBF) using positron emission tomography (PET) during the 'odd-ball' paradigm in ten normal healthy volunteers. The subjects underwent 3 tasks, twice each, while the evoked potential was recorded. In these tasks, the auditory stimulus was a series of pure tones delivered every 1.5 sec binaurally at 75 dB from earphones. Task A: the stimulus was a series of tones at 1000 Hz only, and the subject was instructed only to listen. Task B: the stimulus was a series of tones at 1000 Hz only, and the subject was instructed to push a button on detecting a tone. Task C: the stimulus was a series of pure tones with a frequency of 1000 Hz (non-target) in 80% and 2000 Hz (target) in 20% of trials, presented at random, and the subject was instructed to push the button on detecting a target tone. The event-related potential (P300) was observed in task C (Pz: 334.3±19.6 msec). For each task, CBF was measured using PET with i.v. injection of 1.5 GBq of O-15 water. The changes in CBF associated with auditory cognition were evaluated by the difference between the CBF images in tasks C and B. Localized increases were observed in the anterior cingulate cortex (in all subjects), the bilateral association auditory cortex, the prefrontal cortex and the parietal cortex. The latter three areas showed large individual variation in the location of foci. These results suggest a role for those cortical areas in auditory cognition. The anterior cingulate was most activated (15.0±2.24% of global CBF). This region was not activated in the task B minus task A comparison. The anterior cingulate is part of Papez's circuit, which is related to memory and other higher cortical functions. These results suggest that this area may play an important role in cognition as well as in attention. (author)
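
    The task C minus task B contrast described here is, in essence, a voxelwise subtraction of globally normalised CBF images. A toy Python sketch (the array sizes and the normalisation to a percentage of global mean CBF are assumptions for illustration) is:

    ```python
    import numpy as np

    def activation_map(cbf_task_c, cbf_task_b):
        """Voxelwise task C minus task B difference, after expressing each scan
        as a percentage of its own global mean CBF."""
        norm_c = 100.0 * cbf_task_c / cbf_task_c.mean()
        norm_b = 100.0 * cbf_task_b / cbf_task_b.mean()
        return norm_c - norm_b

    # Toy volumes standing in for reconstructed PET images.
    rng = np.random.default_rng(2)
    vol_b = rng.normal(50.0, 5.0, size=(64, 64, 32))
    vol_c = vol_b + rng.normal(0.0, 1.0, size=vol_b.shape)
    print("peak % change:", round(float(activation_map(vol_c, vol_b).max()), 2))
    ```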

  15. Non-verbal communication barriers when dealing with Saudi sellers

    Directory of Open Access Journals (Sweden)

    Yosra Missaoui

    2015-12-01

    Full Text Available Communication has a major impact on how customers perceive sellers and their organizations. In particular, non-verbal communication such as body language, appearance, facial expressions, gestures, proximity, posture and eye contact can positively or negatively influence customers' first impressions and their experiences in stores. Salespeople in many countries, especially developing ones, merely talk about their companies' products because they are unaware of the real role of sellers and of the importance of non-verbal communication. In Saudi Arabia, the seller profession was held exclusively by foreign labor until 2006, and the Saudi workforce has only recently entered the retail sector as sellers. The non-verbal communication of those sellers has never been evaluated from the consumer's point of view. Therefore, the aim of this paper is to explore the non-verbal communication barriers that customers face when dealing with Saudi sellers. After discussing the non-verbal communication skills that sellers must have, in the light of previous academic research and in-depth interviews with seven focus groups of Saudi customers, this study found that Saudi customers were not fully satisfied with the current non-verbal communication skills of Saudi sellers. It is therefore strongly recommended to develop the non-verbal communication skills of Saudi sellers through intensive training, to make the appearance of sellers, especially female sellers, more distinctive, and to pay attention to the timing of intervention as well as proximity to customers.

  16. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  17. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise Rochette

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  18. From SOLER to SURETY for effective non-verbal communication.

    Science.gov (United States)

    Stickley, Theodore

    2011-11-01

    This paper critiques the model for non-verbal communication referred to as SOLER (which stands for: "Sit squarely"; "Open posture"; "Lean towards the other"; "Eye contact"; "Relax"). It has been approximately thirty years since Egan (1975) introduced his acronym SOLER as an aid for teaching and learning about non-verbal communication. There is evidence that the SOLER framework has been widely used in nurse education with little published critical appraisal. A new acronym that might be appropriate for non-verbal communication skills training and education is proposed: SURETY (which stands for "Sit at an angle"; "Uncross legs and arms"; "Relax"; "Eye contact"; "Touch"; "Your intuition"). The proposed model advances the SOLER model by including the use of touch and emphasising the importance of individual intuition. The model encourages student nurse educators to also think about therapeutic space when they teach skills of non-verbal communication. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Cognitive Training Enhances Auditory Attention Efficiency in Older Adults

    Directory of Open Access Journals (Sweden)

    Jennifer L. O’Brien

    2017-10-01

    Full Text Available Auditory cognitive training (ACT) improves attention in older adults; however, the underlying neurophysiological mechanisms are still unknown. The present study examined the effects of ACT on the P3b event-related potential, reflecting attention allocation (amplitude) and speed of processing (latency) during stimulus categorization, and on the P1-N1-P2 complex, reflecting perceptual processing (amplitude and latency). Participants completed an auditory oddball task before and after 10 weeks of ACT (n = 9) or a no-contact control period (n = 15). Parietal P3b amplitudes to oddball stimuli decreased at post-test in the trained group as compared to those in the control group, and frontal P3b amplitudes showed a similar trend, potentially reflecting more efficient attentional allocation after ACT. No advantages for the ACT group were evident for auditory perceptual processing or speed of processing in this small sample. Our results provide preliminary evidence that ACT may enhance the efficiency of attention allocation, which may account for the positive impact of ACT on the everyday functioning of older adults.

  20. Evaluating Visual and Auditory Contributions to the Cognitive Restoration Effect

    Directory of Open Access Journals (Sweden)

    Adam G. Emfield

    2014-06-01

    Full Text Available It has been suggested that certain real-world environments can have a restorative effect on an individual, as expressed in changes in cognitive performance and mood. Much of this research builds on Attention Restoration Theory (ART), which suggests that environments with certain characteristics induce cognitive restoration via variations in attentional demands. Specifically, natural environments that require little top-down processing have a positive effect on cognitive performance, while city-like environments show no effect. We characterized the cognitive restoration effect further by examining (1) whether natural visual stimuli, such as blue spaces, were more likely to provide a restorative effect than urban visual stimuli, (2) whether increasing immersion with environment-related sound produces a similar or superior effect, (3) whether this effect extends to other cognitive tasks, such as the functional field of view, and (4) whether we could better understand this effect by providing controls beyond previous works. We had 202 participants complete a cognitive task battery, consisting of a reverse digit span task, the attention network task, and the functional field of view task, prior to and immediately after a restoration period. In the restoration period, participants were assigned to one of seven conditions in which they listened to natural or urban sounds, watched images of natural or urban environments, or a combination of both. Additionally, some participants were in a control group with exposure to neither picture nor sound. While we found some indication of practice effects, there were no differential effects of restoration observed in any of our cognitive tasks, regardless of condition. We did, however, find evidence that our nature images and sounds were more relaxing than their urban counterparts. Overall, our findings suggest that acute exposure to relaxing pictorial and auditory stimulus is insufficient to induce improvements in cognitive

  1. Interactions of cognitive and auditory abilities in congenitally blind individuals.

    Science.gov (United States)

    Rokem, Ariel; Ahissar, Merav

    2009-02-01

    Congenitally blind individuals have been found to show superior performance in perceptual and memory tasks. In the present study, we asked whether superior stimulus encoding could account for performance in memory tasks. We characterized the performance of a group of congenitally blind individuals on a series of auditory, memory and executive cognitive tasks and compared their performance to that of sighted controls matched for age, education and musical training. As expected, we found superior verbal spans among congenitally blind individuals. Moreover, we found superior speech perception, measured by resilience to noise, and superior auditory frequency discrimination. However, when memory span was measured under conditions of equivalent speech perception, by adjusting the signal to noise ratio for each individual to the same level of perceptual difficulty (80% correct), the advantage in memory span was completely eliminated. Moreover, blind individuals did not possess any advantage in cognitive executive functions, such as manipulation of items in memory and math abilities. We propose that the short-term memory advantage of blind individuals results from better stimulus encoding, rather than from superiority at subsequent processing stages.
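
    The abstract states that the signal-to-noise ratio was adjusted per listener to a fixed 80%-correct level, but not how. A common way to equate perceptual difficulty is an adaptive staircase; the sketch below (a 1-up/3-down track converging near 79% correct, with an invented simulated listener) is meant only to illustrate that kind of procedure, not the authors' actual method.

```python
import random

def staircase_snr(run_trial, start_snr=10.0, step=2.0, n_trials=80):
    """1-up/3-down adaptive staircase: SNR drops after three consecutive
    correct trials and rises after any error, converging near 79% correct.
    run_trial(snr) must present one trial and return True if correct."""
    snr, streak, last_dir, reversals = start_snr, 0, 0, []
    for _ in range(n_trials):
        if run_trial(snr):
            streak += 1
            if streak == 3:          # three in a row -> make the task harder
                streak = 0
                if last_dir == +1:
                    reversals.append(snr)
                snr -= step
                last_dir = -1
        else:                        # any error -> make the task easier
            streak = 0
            if last_dir == -1:
                reversals.append(snr)
            snr += step
            last_dir = +1
    tail = reversals[-6:] or [snr]   # estimate from the last reversal points
    return sum(tail) / len(tail)

def simulated_listener(snr, midpoint=0.0, slope=1.0):
    """Toy psychometric function: performance rises with SNR around `midpoint`."""
    p_correct = 0.5 + 0.5 / (1.0 + 2.718 ** (-slope * (snr - midpoint)))
    return random.random() < p_correct

print(round(staircase_snr(simulated_listener), 1))  # individual SNR estimate in dB
```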

  2. Near Real-Time Comprehension Classification with Artificial Neural Networks: Decoding e-Learner Non-Verbal Behavior

    Science.gov (United States)

    Holmes, Mike; Latham, Annabel; Crockett, Keeley; O'Shea, James D.

    2018-01-01

    Comprehension is an important cognitive state for learning. Human tutors recognize comprehension and non-comprehension states by interpreting learner non-verbal behavior (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffold in the context of perceived learner comprehension. Near real-time…

  3. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    Science.gov (United States)

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

    Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, ), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.
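
    The burstiness statistic itself is not defined in the abstract; one widely used summary, the Goh-Barabási coefficient of inter-event intervals, is sketched below purely for illustration. Whether this exact statistic was used in the study is an assumption.

```python
import statistics

def burstiness(event_times):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu) of inter-event
    intervals: B -> -1 for perfectly regular streams, ~0 for Poisson-like
    streams, and +1 for highly bursty streams."""
    intervals = [t2 - t1 for t1, t2 in zip(event_times, event_times[1:])]
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)
    return (sigma - mu) / (sigma + mu)

# toy comparison: an evenly spaced behavior stream vs. a clustered one
regular = [i * 1.0 for i in range(20)]
bursty = [0, 0.1, 0.2, 0.3, 5, 5.1, 5.2, 10, 10.05, 15]
print(round(burstiness(regular), 2), round(burstiness(bursty), 2))
```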

  4. Auditory beat stimulation and its effects on cognition and mood states

    Directory of Open Access Journals (Sweden)

    Leila eChaieb

    2015-05-01

    Full Text Available Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood-states. Here we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation. We have summarized relevant studies investigating the neurophysiological changes related to auditory beat stimulation and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural beat stimulation, we then discuss the role of monaural and binaural beat frequencies in cognition and mood-states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of auditory beat stimulation.
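
    As a concrete illustration of the stimuli discussed here, a binaural beat is produced by presenting slightly different pure-tone frequencies to the two ears, so that the difference frequency is perceived as a beat. The sketch below generates such a stereo signal; the carrier and beat frequencies are arbitrary examples, not parameters taken from the reviewed studies.

```python
import numpy as np

def binaural_beat(carrier_hz=400.0, beat_hz=10.0, dur_s=5.0, fs=44100):
    """Stereo signal whose left/right carriers differ by beat_hz;
    the beat percept arises centrally rather than in the acoustic waveform."""
    t = np.arange(int(dur_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1).astype(np.float32)

stimulus = binaural_beat(carrier_hz=400.0, beat_hz=10.0)  # 10 Hz difference frequency
print(stimulus.shape)  # (samples, 2 channels)
```

    A monaural beat, by contrast, would mix the two carriers into the same channel, so the amplitude modulation is physically present in the waveform delivered to each ear.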

  5. The role of auditory abilities in basic mechanisms of cognition in older adults

    Directory of Open Access Journals (Sweden)

    Massimo eGrassi

    2013-10-01

    Full Text Available The aim of this study was to assess age-related differences between young and older adults in auditory abilities and to investigate the relationship between auditory abilities and basic mechanisms of cognition in older adults. Although there is a certain consensus that the participant’s sensitivity to the absolute intensity of sounds (such as that measured via pure tone audiometry) explains his/her cognitive performance, there is not yet much evidence that the participant’s auditory ability (i.e., the whole supra-threshold processing of sounds) explains his/her cognitive performance. Twenty-eight young adults (age < 35), 26 young-old adults (65 ≤ age ≤ 75) and 28 old-old adults (age > 75) were presented with a set of tasks estimating several auditory abilities (i.e., frequency discrimination, intensity discrimination, duration discrimination, timbre discrimination, gap detection, amplitude modulation detection, and the absolute threshold for a 1 kHz pure tone) and the participant’s working memory, cognitive inhibition, and processing speed. Results showed an age-related decline in both auditory and cognitive performance. Moreover, regression analyses showed that a subset of the auditory abilities (i.e., the ability to discriminate frequency, duration, and timbre, and the ability to detect amplitude modulation) explained a significant part of the variance observed in processing speed in older adults. Overall, the present results highlight the relationship between auditory abilities and basic mechanisms of cognition.
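
    The abstract reports regression analyses in which a subset of auditory abilities predicts processing speed. As a rough illustration only (the data, predictor names, and model specification are invented, not taken from the study), an ordinary least squares fit with two predictors can be run as follows.

```python
import numpy as np

# toy data: rows = older participants; columns = two auditory predictors
# (e.g., frequency discrimination and amplitude-modulation detection thresholds)
X = np.array([[1.2, 0.8], [2.5, 1.1], [0.9, 0.6], [3.1, 1.4], [1.8, 0.9], [2.2, 1.2]])
processing_speed = np.array([55.0, 43.0, 60.0, 38.0, 50.0, 45.0])  # e.g., items/min

# add an intercept column and fit ordinary least squares
X1 = np.column_stack([np.ones(len(X)), X])
coef, residuals, *_ = np.linalg.lstsq(X1, processing_speed, rcond=None)

# share of variance in processing speed explained by the auditory predictors
predictions = X1 @ coef
ss_res = np.sum((processing_speed - predictions) ** 2)
ss_tot = np.sum((processing_speed - processing_speed.mean()) ** 2)
print("coefficients:", np.round(coef, 2), " R^2:", round(1 - ss_res / ss_tot, 2))
```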

  6. Auditory agnosia associated with bilateral putaminal hemorrhage: A case report of clinical course of recovery.

    Science.gov (United States)

    Tokida, Haruki; Kanaya, Yuhei; Shimoe, Yutaka; Imagawa, Madoka; Fukunaga, Shinya; Kuriyama, Masaru

    2017-08-31

    A 45-year-old right-handed man with a past history (10 years) of putaminal hemorrhage presented with auditory agnosia associated with a left putaminal hemorrhage. It was suspected that the auditory agnosia was due to bilateral damage to the acoustic radiations. Generalized auditory agnosia, verbal and non-verbal (music and environmental sounds), was diagnosed by neuropsychological examination. It improved 4 months after onset; however, clinical assessment of attention remained poor. Cognition for speech sounds improved slowly, but once it started to improve, progress was rapid. Subsequently, cognition for music sounds also improved, while recovery of cognition for environmental sounds remained delayed; there was a dissociation in recovery between these domains. He was able to return to work a year after onset. We also review the literature on cases of auditory agnosia and discuss their course of recovery.

  7. Young Children's Understanding of Markedness in Non-Verbal Communication

    Science.gov (United States)

    Liebal, Kristin; Carpenter, Malinda; Tomasello, Michael

    2011-01-01

    Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions). We investigated whether two- and three-year-olds recognize when adults mark a non-verbal communicative act--in this case a pointing…

  8. Videotutoring, Non-Verbal Communication and Initial Teacher Training.

    Science.gov (United States)

    Nichol, Jon; Watson, Kate

    2000-01-01

    Describes the use of video tutoring for distance education within the context of a post-graduate teacher training course at the University of Exeter. Analysis of the tapes used a protocol based on non-verbal communication research, and findings suggest that the interaction of participants was significantly different from face-to-face…

  9. Language, Power, Multilingual and Non-Verbal Multicultural Communication

    NARCIS (Netherlands)

    Marácz, L.; Zhuravleva, E.A.

    2014-01-01

    Due to developments in internal migration and mobility there is a proliferation of linguistic diversity, multilingual and non-verbal multicultural communication. At the same time the recognition of the use of one’s first language receives more and more support in international political, legal and

  10. Non-verbal behaviour in nurse-elderly patient communication.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.

    1999-01-01

    This study explores the occurence of non-verbal communication in nurse-elderly patient interaction in two different care settings: home nursing and a home for the elderly. In a sample of 181 nursing encounters involving 47 nurses a study was made of videotaped nurse-patient communication. Six

  11. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    Science.gov (United States)

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Context, culture and (non-verbal) communication affect handover quality.

    Science.gov (United States)

    Frankel, Richard M; Flanagan, Mindy; Ebright, Patricia; Bergman, Alicia; O'Brien, Colleen M; Franks, Zamal; Allen, Andrew; Harris, Angela; Saleem, Jason J

    2012-12-01

    Transfers of care, also known as handovers, remain a substantial patient safety risk. Although research on handovers has been done since the 1980s, the science is incomplete. Surprisingly few interventions have been rigorously evaluated and, of those that have, few have resulted in long-term positive change. Researchers, both in medicine and other high reliability industries, agree that face-to-face handovers are the most reliable. It is not clear, however, what the term face-to-face means in actual practice. We studied the use of non-verbal behaviours, including gesture, posture, bodily orientation, facial expression, eye contact and physical distance, in the delivery of information during face-to-face handovers. To address this question and study the role of non-verbal behaviour on the quality and accuracy of handovers, we videotaped 52 nursing, medicine and surgery handovers covering 238 patients. Videotapes were analysed using immersion/crystallisation methods of qualitative data analysis. A team of six researchers met weekly for 18 months to view videos together using a consensus-building approach. Consensus was achieved on verbal, non-verbal, and physical themes and patterns observed in the data. We observed four patterns of non-verbal behaviour (NVB) during handovers: (1) joint focus of attention; (2) 'the poker hand'; (3) parallel play and (4) kerbside consultation. In terms of safety, joint focus of attention was deemed to have the best potential for high quality and reliability; however, it occurred infrequently, creating opportunities for education and improvement. Attention to patterns of NVB in face-to-face handovers coupled with education and practice can improve quality and reliability.

  13. Prosody Predicts Contest Outcome in Non-Verbal Dialogs.

    Science.gov (United States)

    Dreiss, Amélie N; Chatelain, Philippe G; Roulin, Alexandre; Richner, Heinz

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.

  14. The similar effects of verbal and non-verbal intervening tasks on word recall in an elderly population.

    Science.gov (United States)

    Williams, B R; Sullivan, S K; Morra, L F; Williams, J R; Donovick, P J

    2014-01-01

    Vulnerability to retroactive interference has been shown to increase with cognitive aging. Consistent with the findings of the memory and aging literature, the authors of the California Verbal Learning Test-II (CVLT-II) suggest that a non-verbal task be administered during the test's delay interval to minimize the effects of retroactive interference on delayed recall. The goal of the present study was to determine the extent to which retroactive interference caused by non-verbal and verbal intervening tasks affects recall of verbal information in non-demented older adults. The effects of retroactive interference on recall of words during Long-Delay recall on the CVLT-II were evaluated. Participants included 85 adults aged 60 and older. During a 20-minute delay interval on the CVLT-II, participants received either a verbal (WAIS-III Vocabulary or Peabody Picture Vocabulary Test-IIIB) or non-verbal (Raven's Standard Progressive Matrices or WAIS-III Block Design) intervening task. As in previous research with young adults (Williams & Donovick, 2008), older adults recalled the same number of words across all groups, regardless of the type of intervening task. These findings suggest that the administration of verbal intervening tasks during the CVLT-II does not elicit more retroactive interference than non-verbal intervening tasks, and thus verbal tasks need not be avoided during the delay interval of the CVLT-II.

  15. The use of virtual characters to assess and train non-verbal communication in high-functioning autism.

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of "transformed social interactions." This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA.

  16. Testing the validity of wireless EEG for cognitive research with auditory and visual paradigms

    DEFF Research Database (Denmark)

    Weed, Ethan; Kratschmer, Alexandra Regina; Pedersen, Michael Nygaard

    and smaller cognitive components. To test the feasibility of these headsets for cognitive research, we compared performance of the Emotiv Epoc wireless headset (EM) with Brain Products ActiCAP (BP) active electrodes on two well-studied components: the auditory mismatch negativity (MMN) and the visual face...

  17. Getting the Message Across; Non-Verbal Communication in the Classroom.

    Science.gov (United States)

    Levy, Jack

    This handbook presents selected theories, activities, and resources which can be utilized by educators in the area of non-verbal communication. Particular attention is given to the use of non-verbal communication in a cross-cultural context. Categories of non-verbal communication such as proxemics, haptics, kinesics, smiling, sound, clothing, and…

  18. Cross-cultural Differences of Stereotypes about Non-verbal Communication of Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2011-09-01

    Full Text Available The article deals with the peculiarities of non-verbal communication as a factor in cross-cultural interaction and the adaptation of representatives of different cultures. The possibility of studying ethnic stereotypes concerning non-verbal communication is considered. The results of an empirical study of stereotypes about the non-verbal communication of Russian and Chinese students are presented.

  19. Auditory beat stimulation and its effects on cognition and mood States.

    Science.gov (United States)

    Chaieb, Leila; Wilpert, Elke Caroline; Reber, Thomas P; Fell, Juergen

    2015-01-01

    Auditory beat stimulation may be a promising new tool for the manipulation of cognitive processes and the modulation of mood states. Here, we aim to review the literature examining the most current applications of auditory beat stimulation and its targets. We give a brief overview of research on auditory steady-state responses and its relationship to auditory beat stimulation (ABS). We have summarized relevant studies investigating the neurophysiological changes related to ABS and how they impact upon the design of appropriate stimulation protocols. Focusing on binaural-beat stimulation, we then discuss the role of monaural- and binaural-beat frequencies in cognition and mood states, in addition to their efficacy in targeting disease symptoms. We aim to highlight important points concerning stimulation parameters and try to address why there are often contradictory findings with regard to the outcomes of ABS.

  20. Drama to promote non-verbal communication skills.

    Science.gov (United States)

    Kelly, Martina; Nixon, Lara; Broadfoot, Kirsten; Hofmeister, Marianna; Dornan, Tim

    2018-05-23

    Non-verbal communication skills (NVCS) help physicians to deliver relationship-centred care, and the effective use of NVCS is associated with improved patient satisfaction, better use of health services and high-quality clinical care. In contrast to verbal communication skills, NVCS training is underdeveloped in communication curricula for the health care professions. One of the challenges in teaching NVCS is their tacit nature. In this study, we evaluated drama exercises to raise awareness of NVCS by making familiar activities 'strange'. Workshops based on drama exercises were designed to heighten awareness of sight, hearing, touch and proxemics in non-verbal communication. These were conducted at eight medical education conferences, held between 2014 and 2016, and were open to all conference participants. Workshops were evaluated by recording narrative data generated during the workshops and an open-ended questionnaire following the workshop. Data were analysed qualitatively, using thematic analysis. One hundred and twelve participants attended workshops, 73 (65%) of whom completed an evaluation form: 56 physicians, nine medical students and eight non-physician faculty staff. Two themes were described: an increased awareness of NVCS and the importance of NVCS in relationship building. Drama exercises enabled participants to experience NVCS, such as sight, sound, proxemics and touch, in novel ways. Participants reflected on how NVCS contribute to developing trust and building relationships in clinical practice. Drama-based exercises elucidate the tacit nature of NVCS and require further evaluation in formal educational settings. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  1. Non-verbal Persuasion and Communication in an Affective Agent

    DEFF Research Database (Denmark)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining the critical role of non-verbal behaviour during face-to-face communication. In this chapter we restrict the discussion to body language. We also consider embodied virtual agents. As is the case with humans, there are a number of fundamental factors to be considered when constructing persuasive agents...

  2. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  3. Prediction of cognitive outcome based on the progression of auditory discrimination during coma.

    Science.gov (United States)

    Juan, Elsa; De Lucia, Marzia; Tzovara, Athina; Beaud, Valérie; Oddo, Mauro; Clarke, Stephanie; Rossetti, Andrea O

    2016-09-01

    To date, no clinical test is able to predict cognitive and functional outcome of cardiac arrest survivors. Improvement of auditory discrimination in acute coma indicates survival with high specificity. Whether the degree of this improvement is indicative of recovery remains unknown. Here we investigated if progression of auditory discrimination can predict cognitive and functional outcome. We prospectively recorded electroencephalography responses to auditory stimuli of post-anoxic comatose patients on the first and second day after admission. For each recording, auditory discrimination was quantified and its evolution over the two recordings was used to classify survivors as "predicted" when it increased vs. "other" if not. Cognitive functions were tested on awakening and functional outcome was assessed at 3 months using the Cerebral Performance Categories (CPC) scale. Thirty-two patients were included, 14 "predicted survivors" and 18 "other survivors". "Predicted survivors" were more likely to recover basic cognitive functions shortly after awakening (ability to follow a standardized neuropsychological battery: 86% vs. 44%; p=0.03 (Fisher)) and to show a very good functional outcome at 3 months (CPC 1: 86% vs. 33%; p=0.004 (Fisher)). Moreover, progression of auditory discrimination during coma was strongly correlated with cognitive performance on awakening (phonemic verbal fluency: rs=0.48; p=0.009 (Spearman)). Progression of auditory discrimination during coma provides early indication of future recovery of cognitive functions. The degree of improvement is informative of the degree of functional impairment. If confirmed in a larger cohort, this test would be the first to predict detailed outcome at the single-patient level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Validation of auditory detection response task method for assessing the attentional effects of cognitive load.

    Science.gov (United States)

    Stojmenova, Kristina; Sodnik, Jaka

    2018-07-04

    There are 3 standardized versions of the Detection Response Task (DRT), 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed 8 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions produced results similar to each other and significantly better than the visual version in terms of differences in response times and hit rates. There were no significant differences in cognitive task performance between trials with and without DRT stimuli, or among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver's attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
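
    For illustration, the two DRT outcome measures mentioned here (hit rate and mean response time per condition) can be computed from a simple trial log as sketched below; the log format, timeout value and condition names are invented for the example rather than taken from the study or the DRT standard.

```python
from statistics import mean

def drt_summary(trials, timeout_ms=2500):
    """Per-condition hit rate and mean response time for DRT-style trials.
    Each trial is (condition, rt_ms), with rt_ms = None for a missed response."""
    summary = {}
    for cond in {c for c, _ in trials}:
        rts = [rt for c, rt in trials
               if c == cond and rt is not None and rt <= timeout_ms]
        n = sum(1 for c, _ in trials if c == cond)
        summary[cond] = {"hit_rate": len(rts) / n,
                         "mean_rt_ms": mean(rts) if rts else None}
    return summary

# toy trial log: auditory DRT with and without a concurrent n-back task
trials = [("baseline", 420), ("baseline", 450), ("baseline", None),
          ("n-back", 610), ("n-back", 680), ("n-back", None), ("n-back", 590)]
print(drt_summary(trials))
```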

  5. Cognitive biases and auditory verbal hallucinations in healthy and clinical individuals

    NARCIS (Netherlands)

    Daalman, K.; Sommer, I. E. C.; Derks, E. M.; Peters, E. R.

    2013-01-01

    Background. Several cognitive biases are related to psychotic symptoms, including auditory verbal hallucinations (AVH). It remains unclear whether these biases differ in voice-hearers with and without a 'need-for-care'. Method. A total of 72 healthy controls, 72 healthy voice-hearers and 72 clinical

  6. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....

  7. Patterns of non-verbal social interactions within intensive mathematics intervention contexts

    Science.gov (United States)

    Thomas, Jonathan Norris; Harkness, Shelly Sheats

    2016-06-01

    This study examined the non-verbal patterns of interaction within an intensive mathematics intervention context. Specifically, the authors draw on a social constructivist worldview to examine a teacher's use of gesture in this setting. The teacher conducted a series of longitudinal teaching experiments with a small number of young, school-age children in the context of early arithmetic development. From these experiments, the authors gathered extensive video records of teaching practice and, from an inductive analysis of these records, identified three distinct patterns of teacher gesture: behavior eliciting, behavior suggesting, and behavior replicating. Awareness of their potential to influence students via gesture may prompt teachers to more closely attend to their own interactions with mathematical tools and take these teacher interactions into consideration when forming interpretations of students' cognition.

  8. Neural effects of cognitive control load on auditory selective attention.

    Science.gov (United States)

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali

    2014-08-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potential (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations, primarily in frontal and parietal areas, due to the presence of irrelevant information under the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in the superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. The role of interaction of verbal and non-verbal means of communication in different types of discourse

    OpenAIRE

    Orlova M. А.

    2010-01-01

    Communication relies on verbal and non-verbal interaction. To be most effective, group members need to improve verbal and non-verbal communication. Non-verbal communication fulfills functions within groups that are sometimes difficult to communicate verbally. But interpreting non-verbal messages requires a great deal of skill because multiple meanings abound in these messages.

  10. The impact of the teachers' non-verbal communication on success in teaching

    OpenAIRE

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and ...

  11. Comparative Analysis of Verbal and Non-Verbal Mental Activity Components Regarding the Young People with Different Intellectual Levels

    Directory of Open Access Journals (Sweden)

    Y. M. Revenko

    2013-01-01

    Full Text Available The paper maintains that, for developing educational programs and technologies adequate to the different stages of students’ growth and maturity, there is a need for exploring the natural determinants of intellectual development as well as the students’ individual qualities affecting the cognition process. The authors investigate the differences in intellect manifestations with reference to the gender principle, and analyze the correlations between verbal and non-verbal components in boys’ and girls’ mental activity depending on their general intellect potential. The research, carried out at the Siberian State Automobile Road Academy and focused on first-year students, demonstrates the absence of gender differences in students’ general intellect levels; there are, though, some other regularities: male students of different intellectual levels show the same correlation coefficient between verbal and non-verbal intellect, while female students show this correlation only at the high intellect level. In conclusion, the authors emphasize the need for an integral approach to raising students’ mental abilities, considering the close interrelation between verbal and non-verbal component development. The teaching materials should stimulate different mental qualities by differentiating the educational process to develop students’ individual abilities.

  12. The Use of Virtual Characters to Assess and Train Non-Verbal Communication in High-Functioning Autism

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of “transformed social interactions.” This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA. PMID:25360098

  13. Selective Impairment of Auditory Selective Attention under Concurrent Cognitive Load

    Science.gov (United States)

    Dittrich, Kerstin; Stahl, Christoph

    2012-01-01

    Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that…

  14. Non-verbal communication of compassion: measuring psychophysiologic effects.

    Science.gov (United States)

    Kemper, Kathi J; Shaltout, Hossam A

    2011-12-20

    Calm, compassionate clinicians comfort others. To evaluate the direct psychophysiologic benefits of non-verbal communication of compassion (NVCC), it is important to minimize the effect of subjects' expectation. This preliminary study was designed to a) test the feasibility of two strategies for maintaining subject blinding to non-verbal communication of compassion (NVCC), and b) determine whether blinded subjects would experience psychophysiologic effects from NVCC. Subjects were healthy volunteers who were told the study was evaluating the effect of time and touch on the autonomic nervous system. The practitioner had more than 10 years' experience with loving-kindness meditation (LKM), a form of NVCC. Subjects completed 10-point visual analog scales (VAS) for stress, relaxation, and peacefulness before and after LKM. To assess physiologic effects, practitioners and subjects wore cardiorespiratory monitors to assess respiratory rate (RR), heart rate (HR) and heart rate variability (HRV) throughout the 4 10-minute study periods: Baseline (both practitioner and subjects read neutral material); non-tactile-LKM (subjects read while the practitioner practiced LKM while pretending to read); tactile-LKM (subjects rested while the practitioner practiced LKM while lightly touching the subject on arms, shoulders, hands, feet, and legs); Post-Intervention Rest (subjects rested; the practitioner read). To assess blinding, subjects were asked after the interventions what the practitioner was doing during each period (reading, touch, or something else). Subjects' mean age was 43.6 years; all were women. Blinding was maintained and the practitioner was able to maintain meditation for both tactile and non-tactile LKM interventions as reflected in significantly reduced RR. Despite blinding, subjects' VAS scores improved from baseline to post-intervention for stress (5.5 vs. 2.2), relaxation (3.8 vs. 8.8) and peacefulness (3.8 vs. 9.0, P non-tactile LKM. It is possible to test the

  15. A qualitative study on non-verbal sensitivity in nursing students.

    Science.gov (United States)

    Chan, Zenobia C Y

    2013-07-01

    To explore nursing students' perception of the meanings and roles of non-verbal communication and sensitivity. It also attempts to understand how different factors influence their non-verbal communication style. The importance of non-verbal communication in the health arena lies in the need for good communication for efficient healthcare delivery. Understanding nursing students' non-verbal communication with patients and the influential factors is essential to prepare them for field work in the future. Qualitative approach based on 16 in-depth interviews. Sixteen nursing students from the Master of Nursing and the Year 3 Bachelor of Nursing program were interviewed. Major points in the recorded interviews were marked down for content analysis. Three main themes were developed: (1) understanding students' non-verbal communication, which shows how nursing students value and experience non-verbal communication in the nursing context; (2) factors that influence the expression of non-verbal cues, which reveals the effect of patients' demographic background (gender, age, social status and educational level) and participants' characteristics (character, age, voice and appearance); and (3) metaphors of non-verbal communication, which is further divided into four subthemes: providing assistance, individualisation, dropping hints and promoting interaction. Learning about students' non-verbal communication experiences in the clinical setting allowed us to understand their use of non-verbal communication and sensitivity, as well as to understand areas that may need further improvement. The experiences and perceptions revealed by the nursing students could provoke nurses to reconsider the effects of the different factors suggested in this study. The results might also help students and nurses to learn and ponder their missing gap, leading them to rethink, train and pay more attention to their non-verbal communication style and sensitivity. © 2013 John Wiley & Sons Ltd.

  16. Auditory expectation: the information dynamics of music perception and cognition.

    Science.gov (United States)

    Pearce, Marcus T; Wiggins, Geraint A

    2012-10-01

    Following in a psychological and musicological tradition beginning with Leonard Meyer, and continuing through David Huron, we present a functional, cognitive account of the phenomenon of expectation in music, grounded in computational, probabilistic modeling. We summarize a range of evidence for this approach, from psychology, neuroscience, musicology, linguistics, and creativity studies, and argue that simulating expectation is an important part of understanding a broad range of human faculties, in music and beyond. Copyright © 2012 Cognitive Science Society, Inc.

  17. Auditory-cognitive training improves language performance in prelingually deafened cochlear implant recipients.

    Science.gov (United States)

    Ingvalson, Erin M; Young, Nancy M; Wong, Patrick C M

    2014-10-01

    Phonological and working memory skills have been shown to be important for the development of spoken language. Children who use a cochlear implant (CI) show performance deficits relative to normal hearing (NH) children on all constructs: phonological skills, working memory, and spoken language. Given that phonological skills and working memory have been shown to be important for spoken language development in NH children, we hypothesized that training these foundational skills would result in improved spoken language performance in CI-using children. Nineteen prelingually deafened CI-using children aged 4 to 7 years participated. All children had been using their implants for at least one year and were matched on pre-implant hearing thresholds, hearing thresholds at study enrollment, and non-verbal IQ. Children were assessed on expressive vocabulary, listening language, spoken language, and composite language. Ten children received four weeks of training on phonological skills, including rhyme, sound blending, and sound discrimination, and on auditory working memory. The remaining nine children continued with their normal classroom activities for four weeks. Language assessments were repeated following the training/control period. Children who received combined phonological-working memory training showed significant gains on expressive and composite language scores. Children who did not receive training showed no significant improvements at post-test. On average, trained children had gain scores of 6.35 points on expressive language and 6.15 points on composite language, whereas the untrained children had test-retest gain scores of 2.89 points for expressive language and 2.56 for composite language. Our results suggest that training to improve the phonological and working memory skills of CI-using children may lead to improved language performance. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. The Relationship between Central Auditory Processing, Language, and Cognition in Children Being Evaluated for Central Auditory Processing Disorder.

    Science.gov (United States)

    Brenneman, Lauren; Cash, Elizabeth; Chermak, Gail D; Guenette, Linda; Masters, Gay; Musiek, Frank E; Brown, Mallory; Ceruti, Julianne; Fitzegerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Weihing, Jeffrey

    2017-09-01

    Pediatric central auditory processing disorder (CAPD) is frequently comorbid with other childhood disorders. However, few studies have examined the relationship between commonly used CAPD, language, and cognition tests within the same sample. The present study examined the relationship between diagnostic CAPD tests and "gold standard" measures of language and cognitive ability, the Clinical Evaluation of Language Fundamentals (CELF) and the Wechsler Intelligence Scale for Children (WISC). A retrospective study. Twenty-seven patients referred for CAPD testing who scored average or better on the CELF and low average or better on the WISC were initially included. Seven children who scored below the CELF and/or WISC inclusion criteria were then added to the dataset for a second analysis, yielding a sample size of 34. Participants were administered a CAPD battery that included at least the following three CAPD tests: Frequency Patterns (FP), Dichotic Digits (DD), and Competing Sentences (CS). In addition, they were administered the CELF and WISC. Relationships between scores on CAPD, language (CELF), and cognition (WISC) tests were examined using correlation analysis. DD and FP showed significant correlations with Full Scale Intelligence Quotient, and the DD left ear and the DD interaural difference measures both showed significant correlations with working memory. However, ∼80% or more of the variance in these CAPD tests was unexplained by language and cognition measures. Language and cognition measures were more strongly correlated with each other than were the CAPD tests with any CELF or WISC scale. Additional correlations with the CAPD tests were revealed when patients who scored in the mild-moderate deficit range on the CELF and/or in the borderline low intellectual functioning range on the WISC were included in the analysis. While both the DD and FP tests showed significant correlations with one or more cognition measures, the majority of the variance in these tests remained unexplained by language and cognition measures.
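
    The statement that roughly 80% of the variance in the CAPD tests was unexplained follows from squaring the correlation coefficient: r² is the share of variance one measure linearly explains in another. A minimal sketch, with invented scores, is given below.

```python
import numpy as np

def variance_explained(x, y):
    """Pearson r between two score lists and the share of variance (r^2)
    that one linearly explains in the other."""
    r = np.corrcoef(x, y)[0, 1]
    return r, r ** 2

# toy scores: a dichotic-digits measure vs. a working-memory index
dichotic_digits = [88, 92, 75, 80, 95, 70, 85, 90]
working_memory = [102, 110, 95, 98, 115, 90, 100, 108]
r, r2 = variance_explained(dichotic_digits, working_memory)
print(f"r = {r:.2f}, variance explained = {r2:.0%}, unexplained = {1 - r2:.0%}")
```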

  19. Culture and Social Relationship as Factors of Affecting Communicative Non-Verbal Behaviors

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to build a bridge between social relationship and cultural variation in order to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes

  20. Oncologists’ non-verbal behavior and analog patients’ recall of information

    NARCIS (Netherlands)

    Hillen, M.A.; de Haes, H.C.J.M.; van Tienhoven, G.; van Laarhoven, H.W.M.; van Weert, J.C.M.; Vermeulen, D.M.; Smets, E.M.A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist’s non-verbal communication. We tested the influence of three non-verbal behaviors,

  1. Oncologists' non-verbal behavior and analog patients' recall of information

    NARCIS (Netherlands)

    Hillen, Marij A.; de Haes, Hanneke C. J. M.; van Tienhoven, Geertjan; van Laarhoven, Hanneke W. M.; van Weert, Julia C. M.; Vermeulen, Daniëlle M.; Smets, Ellen M. A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors,

  2. Virtual Chironomia: A Multimodal Study of Verbal and Non-Verbal Communication in a Virtual World

    Science.gov (United States)

    Verhulsdonck, Gustav

    2010-01-01

    This mixed methods study examined the various aspects of multimodal use of non-verbal communication in virtual worlds during dyadic negotiations. Quantitative analysis uncovered a treatment effect whereby people with more rhetorical certainty used more neutral non-verbal communication; whereas people that were rhetorically less certain used more…

  3. Cross-cultural features of gestures in non-verbal communication

    Directory of Open Access Journals (Sweden)

    Chebotariova N. A.

    2017-09-01

    Full Text Available This article is devoted to the analysis of the concept of non-verbal communication and the ways of expressing it. Gesticulation is studied in detail, as it is the main element of non-verbal communication and has different characteristics in various countries of the world.

  4. Cognitive control in auditory working memory is enhanced in musicians.

    Directory of Open Access Journals (Sweden)

    Karen Johanne Pallesen

    Full Text Available Musical competence may confer cognitive advantages that extend beyond processing of familiar musical sounds. Behavioural evidence indicates a general enhancement of both working memory and attention in musicians. It is possible that musicians, due to their training, are better able to maintain focus on task-relevant stimuli, a skill which is crucial to working memory. We measured the blood oxygenation-level dependent (BOLD) activation signal in musicians and non-musicians during working memory of musical sounds to determine the relation among performance, musical competence and generally enhanced cognition. All participants easily distinguished the stimuli. We tested the hypothesis that musicians nonetheless would perform better, and that differential brain activity would mainly be present in cortical areas involved in cognitive control such as the lateral prefrontal cortex. The musicians performed better as reflected in reaction times and error rates. Musicians also had larger BOLD responses than non-musicians in neuronal networks that sustain attention and cognitive control, including regions of the lateral prefrontal cortex, lateral parietal cortex, insula, and putamen in the right hemisphere, and bilaterally in the posterior dorsal prefrontal cortex and anterior cingulate gyrus. The relationship between the task performance and the magnitude of the BOLD response was more positive in musicians than in non-musicians, particularly during the most difficult working memory task. The results confirm previous findings that neural activity increases during enhanced working memory performance. The results also suggest that superior working memory task performance in musicians relies on an enhanced ability to exert sustained cognitive control. This cognitive benefit in musicians may be a consequence of focused musical training.

  5. Computerized training of non-verbal reasoning and working memory in children with intellectual disability

    Directory of Open Access Journals (Sweden)

    Stina eSöderqvist

    2012-10-01

    Full Text Available Children with intellectual disabilities show deficits in both reasoning ability and working memory (WM) that impact everyday functioning and academic achievement. In this study we investigated the feasibility of cognitive training for improving WM and non-verbal reasoning (NVR) ability in children with intellectual disability. Participants were randomized to a 5-week adaptive training program (intervention group) or a non-adaptive version of the program (active control group). Cognitive assessments were conducted prior to and directly after training, and one year later, to examine effects of the training. Improvements during training varied largely, and amount of progress during training predicted transfer to WM and comprehension of instructions, with higher training progress being associated with greater transfer effects. The strongest predictors of training progress were found to be gender, co-morbidity and baseline capacity on verbal WM. In particular, females without an additional diagnosis and with higher baseline performance showed greater progress. No significant effects of training were observed at the one-year follow-up, suggesting that training should be more intense or repeated in order for effects to persist in children with intellectual disabilities. A major finding of this study is that cognitive training is feasible in children with intellectual disabilities and can help improve their cognitive capacities. However, a minimum cognitive capacity or training ability seems necessary for the training to be beneficial, with some individuals showing little improvement in performance. Future studies of cognitive training should take into consideration how inter-individual differences in training progress influence transfer effects and further investigate how baseline capacities predict training outcome.

  6. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    Science.gov (United States)

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was

  7. Cognitive control in auditory working memory is enhanced in musicians

    DEFF Research Database (Denmark)

    Pallesen, Karen Johanne; Brattico, Elvira; Bailey, Christopher J

    2010-01-01

    focus on task-relevant stimuli, a skill which is crucial to working memory. We measured the blood oxygenation-level dependent (BOLD) activation signal in musicians and non-musicians during working memory of musical sounds to determine the relation among performance, musical competence and generally...... hemisphere, and bilaterally in the posterior dorsal prefrontal cortex and anterior cingulate gyrus. The relationship between the task performance and the magnitude of the BOLD response was more positive in musicians than in non-musicians, particularly during the most difficult working memory task....... The results confirm previous findings that neural activity increases during enhanced working memory performance. The results also suggest that superior working memory task performance in musicians relies on an enhanced ability to exert sustained cognitive control. This cognitive benefit in musicians may...

  8. Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis.

    Science.gov (United States)

    Thompson, Ashley E; Voyer, Daniel

    2014-01-01

    The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d=0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
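
    As a rough illustration of how study-level effect sizes feeding into an overall estimate such as d = 0.19 can be pooled, the following Python sketch implements a basic DerSimonian-Laird random-effects combination of Cohen's d values. The effect sizes and sample sizes are hypothetical, and the published analysis used a multilevel model, so this is only a simplified stand-in for the general idea.

```python
# Minimal random-effects pooling of Cohen's d (DerSimonian-Laird).
# All numbers below are hypothetical illustration values.
import numpy as np

d = np.array([0.25, 0.10, 0.30, 0.05, 0.22])   # study-level Cohen's d
n1 = np.array([40, 55, 30, 80, 45])             # group sizes per study
n2 = np.array([42, 50, 33, 78, 47])

# Approximate sampling variance of d for two independent groups
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

w = 1.0 / var_d                                 # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)              # heterogeneity statistic

k = len(d)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)              # between-study variance

w_star = 1.0 / (var_d + tau2)                   # random-effects weights
d_pooled = np.sum(w_star * d) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled d = {d_pooled:.2f} (SE {se_pooled:.2f})")
```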

  9. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    Directory of Open Access Journals (Sweden)

    Cristina F B Murphy

    Full Text Available Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further

  10. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    Science.gov (United States)

    Murphy, Cristina F B; Moore, David R; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research

  11. Non-verbal communication in meetings of psychiatrists and patients with schizophrenia.

    Science.gov (United States)

    Lavelle, M; Dimic, S; Wildgrube, C; McCabe, R; Priebe, S

    2015-03-01

    Recent evidence found that patients with schizophrenia display non-verbal behaviour designed to avoid social engagement during the opening moments of their meetings with psychiatrists. This study aimed to replicate, and build on, this finding, assessing the non-verbal behaviour of patients and psychiatrists during meetings, exploring changes over time and its association with patients' symptoms and the quality of the therapeutic relationship. 40 videotaped routine out-patient consultations, involving patients with schizophrenia, were analysed. Non-verbal behaviour of patients and psychiatrists was assessed during three fixed, 2-min intervals using a modified Ethological Coding System for Interviews. Symptoms, satisfaction with communication and the quality of the therapeutic relationship were also measured. Over time, patients' non-verbal behaviour remained stable, whilst psychiatrists' flight behaviour decreased. Patients formed two groups based on their non-verbal profiles, one group (n = 25) displaying pro-social behaviour, inviting interaction, and a second (n = 15) displaying flight behaviour, avoiding interaction. Psychiatrists interacting with pro-social patients displayed more pro-social behaviours, and satisfaction with communication was higher in these consultations. Patients' non-verbal behaviour during routine psychiatric consultations remains unchanged, and is linked to both their psychiatrist's non-verbal behaviour and the quality of the therapeutic relationship. © 2014 The Authors. Acta Psychiatrica Scandinavica Published by John Wiley & Sons Ltd.

  12. The influence of tactile cognitive maps on auditory space perception in sighted persons.

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2016-11-01

    Full Text Available We have recently shown that vision is important to improve spatial auditory cognition. In this study we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group the bisection task was performed three times: specifically, the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated the space bisection task twice in a row without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once spatial tactile cues had been internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task as effectively as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound

  13. Cognitive components of regularity processing in the auditory domain.

    Directory of Open Access Journals (Sweden)

    Stefan Koelsch

    Full Text Available BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord-sequences such that perceived differences in the cognitive processing between regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investigated with electroencephalography (EEG). Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs). The ERAN had a latency of around 180 ms after the onset of the music-syntactically irregular chords, and had maximum amplitude values over right anterior electrode sites. CONCLUSIONS/SIGNIFICANCE: Because irregular chords were hardly detectable based on acoustical factors (such as pitch repetition and sensory dissonance), this ERAN effect reflects for the most part cognitive (not sensory) components of regularity-based, music-syntactic processing. Our study represents a methodological advance compared to previous ERP-studies investigating the neural processing of music-syntactically irregular chords.
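
    The ERP logic described above, averaging epochs time-locked to regular and irregular chords and inspecting the difference wave for a negative deflection near 180 ms, can be sketched on synthetic data as follows. The sampling rate, noise level and the size of the simulated ERAN-like component are assumptions chosen purely for illustration.

```python
# Minimal ERP difference-wave sketch on simulated single-channel EEG epochs.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.6, 1.0 / fs)        # epoch from -100 ms to 600 ms
rng = np.random.default_rng(0)

def simulate_epochs(n_trials, deflection_uv):
    """Gaussian noise epochs plus an optional deflection peaking near 180 ms."""
    noise = rng.normal(0.0, 2.0, size=(n_trials, t.size))
    component = deflection_uv * np.exp(-((t - 0.18) ** 2) / (2 * 0.03 ** 2))
    return noise + component

regular = simulate_epochs(100, 0.0)        # regular chords: no extra component
irregular = simulate_epochs(100, -3.0)     # irregular chords: negative deflection

erp_regular = regular.mean(axis=0)         # ERP = average over trials
erp_irregular = irregular.mean(axis=0)
difference = erp_irregular - erp_regular   # the component shows up here

peak_idx = np.argmin(difference)           # most negative point of the wave
print(f"difference-wave minimum at {t[peak_idx] * 1000:.0f} ms")
```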

  14. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    Science.gov (United States)

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  15. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing

    Directory of Open Access Journals (Sweden)

    Yu-Xuan Zhang

    2017-06-01

    Full Text Available Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  16. Resource allocation models of auditory working memory.

    Science.gov (United States)

    Joseph, Sabine; Teki, Sundeep; Kumar, Sukhbinder; Husain, Masud; Griffiths, Timothy D

    2016-06-01

    Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.
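
    A minimal numerical sketch of the resource-allocation idea discussed in this record: a fixed precision budget is divided among the items held in memory, so per-item precision (and thus the fidelity of, say, pitch recall) falls as set size grows. The budget value and the Gaussian-error assumption are illustrative, not parameters from the article.

```python
# Equal sharing of a fixed precision budget across memorized items.
import numpy as np

rng = np.random.default_rng(1)
total_precision = 4.0                     # arbitrary resource budget (1/semitone^2)

for set_size in (1, 2, 4, 8):
    precision_per_item = total_precision / set_size
    recall_sd = np.sqrt(1.0 / precision_per_item)      # SD of recall error (semitones)
    errors = rng.normal(0.0, recall_sd, size=10_000)   # simulated recall errors
    print(f"items={set_size}: predicted SD={recall_sd:.2f}, "
          f"simulated SD={errors.std():.2f}")
```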

  17. Cognitive insight and objective quality of life in people with schizophrenia and auditory hallucinations.

    Science.gov (United States)

    Rathee, Ruchika; Luhrmann, Tanya M; Bhatia, Triptish; Deshpande, Smita N

    2018-01-01

    Poor cognitive insight in schizophrenia has been linked to delusions, hallucinations, and negative symptoms as well as to depressive/anxiety symptoms. Its impact on quality of life has been less studied, especially in schizophrenia subjects with ongoing auditory hallucinations. The Beck Cognitive Insight Scale (BCIS) and the Quality of Life Scale (QLS) were administered, after due translation and validation, to subjects who met DSM-IV criteria for schizophrenia. All subjects reported ongoing auditory hallucinations at recruitment. Mean composite cognitive insight scores from participants (N = 60) (2.97 ± 2.649) were in the lower range compared to the published literature. Cognitive insight scores as well as self-reflectiveness subscale scores, but not self-certainty scores, correlated significantly with the QLS scores. Better cognitive insight, especially self-reflectiveness, may be linked to better quality of life. Self-reflectiveness could be a useful construct to address in psychotherapy to improve rehabilitation. Copyright © 2017. Published by Elsevier B.V.

  18. [Non-verbal communication of patients submitted to heart surgery: from awaking after anesthesia to extubation].

    Science.gov (United States)

    Werlang, Sueli da Cruz; Azzolin, Karina; Moraes, Maria Antonieta; de Souza, Emiliane Nogueira

    2008-12-01

    Preoperative orientation is an essential tool for patients' communication after surgery. This study had the objective of evaluating non-verbal communication of patients submitted to cardiac surgery from the time of awaking from anesthesia until extubation, after having received preoperative orientation by nurses. A quantitative cross-sectional study was developed in a reference hospital of the state of Rio Grande do Sul, Brazil, from March to July 2006. Data were collected in the pre- and postoperative periods. A questionnaire to evaluate non-verbal communication on awaking from sedation was applied to a sample of 100 patients. Statistical analysis included Student's t, Wilcoxon, and Mann-Whitney tests. Most of the patients responded satisfactorily to non-verbal communication strategies as instructed in the preoperative orientation. Thus, non-verbal communication based on preoperative orientation was helpful during the awaking period.

  19. Parents and Physiotherapists Recognition of Non-Verbal Communication of Pain in Individuals with Cerebral Palsy.

    Science.gov (United States)

    Riquelme, Inmaculada; Pades Jiménez, Antonia; Montoya, Pedro

    2017-08-29

    Pain assessment is difficult in individuals with cerebral palsy (CP). This is of particular relevance in children with communication difficulties, when non-verbal pain behaviors could be essential for appropriate pain recognition. Parents are considered good proxies in the recognition of pain in their children; however, health professionals also need a good understanding of their patients' pain experience. This study aims at analyzing the agreement between parents' and physiotherapists' assessments of verbal and non-verbal pain behaviors in individuals with CP. A written survey about pain characteristics and non-verbal pain expression of 96 persons with CP (45 classified as communicative, and 51 as non-communicative individuals) was performed. Parents and physiotherapists displayed a high agreement in their estimations of the presence of chronic pain, healthcare seeking, pain intensity and pain interference, as well as in non-verbal pain behaviors. Physiotherapists and parents can recognize pain behaviors in individuals with CP regardless of communication disabilities.

  20. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment.

    Science.gov (United States)

    de Sousa Paiva, Simone; Galvão, Marli Teresinha Gimeniz; Pagliuca, Lorita Marlena Freitag; de Almeida, Paulo César

    2010-01-01

    Non-verbal communication is predominant in the mother-child relationship. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication such as: paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, it could be inferred that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mothers' infection can be a determining factor in the formation of the mothers' strong attachment to their children after birth.

  1. Dissociation of neural correlates of verbal and non-verbal visual working memory with different delays

    Directory of Open Access Journals (Sweden)

    Endestad Tor

    2007-10-01

    Full Text Available Abstract Background Dorsolateral prefrontal cortex (DLPFC), posterior parietal cortex, and regions in the occipital cortex have been identified as neural sites for visual working memory (WM). The exact involvement of the DLPFC in verbal and non-verbal working memory processes, and how these processes depend on the time-span for retention, remains disputed. Methods We used functional MRI to explore the neural correlates of the delayed discrimination of Gabor stimuli differing in orientation. Twelve subjects were instructed to code the relative orientation either verbally or non-verbally with memory delays of short (2 s) or long (8 s) duration. Results Blood-oxygen level dependent (BOLD) 3-Tesla fMRI revealed significantly more activity for the short verbal condition compared to the short non-verbal condition in bilateral superior temporal gyrus, insula and supramarginal gyrus. Activity in the long verbal condition was greater than in the long non-verbal condition in left language-associated areas (STG) and bilateral posterior parietal areas, including precuneus. Interestingly, right DLPFC and bilateral superior frontal gyrus were more active in the non-verbal long delay condition than in the long verbal condition. Conclusion The results point to a dissociation between the cortical sites involved in verbal and non-verbal WM for long and short delays. Right DLPFC seems to be engaged in non-verbal WM tasks especially for long delays. Furthermore, the results indicate that even slightly different memory maintenance intervals engage largely differing networks and that this novel finding may explain differing results in previous verbal/non-verbal WM studies.

  2. The impact of culture and education on non-verbal neuropsychological measurements: a critical review.

    Science.gov (United States)

    Rosselli, Mónica; Ardila, Alfredo

    2003-08-01

    Clinical neuropsychology has frequently considered visuospatial and non-verbal tests to be culturally and educationally fair or at least fairer than verbal tests. This paper reviews the cross-cultural differences in performance on visuoperceptual and visuoconstructional ability tasks and analyzes the impact of education and culture on non-verbal neuropsychological measurements. This paper compares: (1) non-verbal test performance among groups with different educational levels, and the same cultural background (inter-education intra-culture comparison); (2) the test performance among groups with the same educational level and different cultural backgrounds (intra-education inter-culture comparisons). Several studies have demonstrated a strong association between educational level and performance on common non-verbal neuropsychological tests. When neuropsychological test performance in different cultural groups is compared, significant differences are evident. Performance on non-verbal tests such as copying figures, drawing maps or listening to tones can be significantly influenced by the individual's culture. Arguments against the use of some current neuropsychological non-verbal instruments, procedures, and norms in the assessment of diverse educational and cultural groups are discussed and possible solutions to this problem are presented.

  3. The role of non-verbal behaviour in racial disparities in health care: implications and solutions.

    Science.gov (United States)

    Levine, Cynthia S; Ambady, Nalini

    2013-09-01

    People from racial minority backgrounds report less trust in their doctors and have poorer health outcomes. Although these deficiencies have multiple roots, one important set of explanations involves racial bias, which may be non-conscious, on the part of providers, and minority patients' fears that they will be treated in a biased way. Here, we focus on one mechanism by which this bias may be communicated and reinforced: namely, non-verbal behaviour in the doctor-patient interaction. We review 2 lines of research on race and non-verbal behaviour: (i) the ways in which a patient's race can influence a doctor's non-verbal behaviour toward the patient, and (ii) the relative difficulty that doctors can have in accurately understanding the nonverbal communication of non-White patients. Further, we review research on the implications that both lines of work can have for the doctor-patient relationship and the patient's health. The research we review suggests that White doctors interacting with minority group patients are likely to behave and respond in ways that are associated with worse health outcomes. As doctors' disengaged non-verbal behaviour towards minority group patients and lower ability to read minority group patients' non-verbal behaviours may contribute to racial disparities in patients' satisfaction and health outcomes, solutions that target non-verbal behaviour may be effective. A number of strategies for such targeting are discussed. © 2013 John Wiley & Sons Ltd.

  4. Evaluating verbal and non-verbal communication skills, in an ethnogeriatric OSCE.

    Science.gov (United States)

    Collins, Lauren G; Schrimmer, Anne; Diamond, James; Burke, Janice

    2011-05-01

    Communication during medical interviews plays a large role in patient adherence, satisfaction with care, and health outcomes. Both verbal and non-verbal communication (NVC) skills are central to the development of rapport between patients and healthcare professionals. The purpose of this study was to assess the role of non-verbal and verbal communication skills on evaluations by standardized patients during an ethnogeriatric Objective Structured Clinical Examination (OSCE). Interviews from 19 medical students, residents, and fellows in an ethnogeriatric OSCE were analyzed. Each interview was videotaped and evaluated on a 14 item verbal and an 8 item non-verbal communication checklist. The relationship between verbal and non-verbal communication skills on interview evaluations by standardized patients were examined using correlational analyses. Maintaining adequate facial expression (FE), using affirmative gestures (AG), and limiting both unpurposive movements (UM) and hand gestures (HG) had a significant positive effect on perception of interview quality during this OSCE. Non-verbal communication skills played a role in perception of overall interview quality as well as perception of culturally competent communication. Incorporating formative and summative evaluation of both verbal and non-verbal communication skills may be a critical component of curricular innovations in ethnogeriatrics, such as the OSCE. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Patients' perceptions of GP non-verbal communication: a qualitative study.

    Science.gov (United States)

    Marcinowicz, Ludmila; Konstantynowicz, Jerzy; Godlewski, Cezary

    2010-02-01

    During doctor-patient interactions, many messages are transmitted without words, through non-verbal communication. To elucidate the types of non-verbal behaviours perceived by patients interacting with family GPs and to determine which cues are perceived most frequently. In-depth interviews with patients of family GPs. Nine family practices in different regions of Poland. At each practice site, interviews were performed with four patients who were scheduled consecutively to see their family doctor. Twenty-four of 36 studied patients spontaneously perceived non-verbal behaviours of the family GP during patient-doctor encounters. They reported a total of 48 non-verbal cues. The most frequent features were tone of voice, eye contact, and facial expressions. Less frequent were examination room characteristics, touch, interpersonal distance, GP clothing, gestures, and posture. Non-verbal communication is an important factor by which patients spontaneously describe and evaluate their interactions with a GP. Family GPs should be trained to better understand and monitor their own non-verbal behaviours towards patients.

  6. Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L.; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-01-01

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18–30), we asked whether musical experience benefits an older cohort of musicians (ages 45–65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline. PMID:21589653

  7. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Directory of Open Access Journals (Sweden)

    Alexandra Parbery-Clark

    Full Text Available Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  8. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    Science.gov (United States)

    Parbery-Clark, Alexandra; Strait, Dana L; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-05-11

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  9. On the relationship between auditory cognition and speech intelligibility in cochlear implant users: An ERP study.

    Science.gov (United States)

    Finke, Mareike; Büchner, Andreas; Ruigendijk, Esther; Meyer, Martin; Sandmann, Pascale

    2016-07-01

    There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N=13) and matched normal-hearing (NH) listeners (N=13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. The longer N2/N4 latency in CI users suggests that these individuals have difficulty mapping acoustic-phonetic features to lexical representations. These difficulties seem to be increased for speech-in-noise conditions when compared with speech in quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. In sum, our findings suggest that CI users and NH listeners differ with regards to both the sensory and the higher-order processing of speech in quiet as well as in

  10. Auditory ERPs to stimulus deviance in an awake chimpanzee (Pan troglodytes): towards hominid cognitive neurosciences.

    Directory of Open Access Journals (Sweden)

    Ari Ueno

    Full Text Available BACKGROUND: For decades, the chimpanzee, phylogenetically closest to humans, has been analyzed intensively in comparative cognitive studies. Other than the accumulation of behavioral data, the neural basis for cognitive processing in the chimpanzee remains to be clarified. To increase our knowledge on the evolutionary and neural basis of human cognition, comparative neurophysiological studies exploring endogenous neural activities in the awake state are needed. However, to date, such studies have rarely been reported in non-human hominid species, due to the practical difficulties in conducting non-invasive measurements on awake individuals. METHODOLOGY/PRINCIPAL FINDINGS: We measured auditory event-related potentials (ERPs) of a fully awake chimpanzee, with reference to a well-documented component of human studies, namely mismatch negativity (MMN). In response to infrequent, deviant tones that were delivered in a uniform sound stream, a comparable ERP component could be detected as negative deflections in early latencies. CONCLUSIONS/SIGNIFICANCE: The present study reports the MMN-like component in a chimpanzee for the first time. In human studies, various ERP components, including MMN, are well-documented indicators of cognitive and neural processing. The results of the present study validate the use of non-invasive ERP measurements for studies on cognitive and neural processing in chimpanzees, and open the way for future studies comparing endogenous neural activities between humans and chimpanzees. This signifies an essential step in hominid cognitive neurosciences.

  11. How Social Psychological Factors May Modulate Auditory and Cognitive Functioning During Listening.

    Science.gov (United States)

    Pichora-Fuller, M Kathleen

    2016-01-01

    The framework for understanding effortful listening (FUEL) draws on psychological theories of cognition and motivation. In the present article, theories of social-cognitive psychology are related to the FUEL. Listening effort is defined in our consensus as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task that involves listening. Listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in challenging situations. Listeners' cost/benefit evaluations involve appraisals of listening demands, their own capacity, and the importance of listening goals. Social psychological factors can affect a listener's actual and self-perceived auditory and cognitive abilities, especially when those abilities may be insufficient to readily meet listening demands. Whether or not listeners experience stress depends not only on how demanding a situation is relative to their actual abilities but also on how they appraise their capacity to meet those demands. The self-perception or appraisal of one's abilities can be lowered by poor self-efficacy or negative stereotypes. Stress may affect performance in a given situation and chronic stress can have deleterious effects on many aspects of health, including auditory and cognitive functioning. Social support can offset demands and mitigate stress; however, the burden of providing support may stress the significant other. Some listeners cope by avoiding challenging situations and withdrawing from social participation. Extending the FUEL using social-cognitive psychological theories may provide valuable insights into how effortful listening could be reduced by adopting health-promoting approaches to rehabilitation.

  12. Breastfeeding duration and non-verbal IQ in children

    NARCIS (Netherlands)

    A. Sajjad (Ayesha); A. Tharner (Anne); J.C. Kiefte-de Jong (Jessica); V.W.V. Jaddoe (Vincent); A. Hofman (Albert); F.C. Verhulst (Frank); O.H. Franco (Oscar); H.W. Tiemeier (Henning); S.J. Roza (Sabine)

    2015-01-01

    Background: Breastfeeding has been related to better cognitive development in children. However, due to methodological challenges, such as confounding, recall bias or insufficient power, the mechanism and nature of the relation remains subject to debate. Methods: We included 3761

  13. Non-verbal communication in severe aphasia: influence of aphasia, apraxia, or semantic processing?

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2012-09-01

    Patients suffering from severe aphasia have to rely on non-verbal means of communication to convey a message. However, to date it is not clear which patients are able to do so. Clinical experience indicates that some patients use non-verbal communication strategies like gesturing very efficiently whereas others fail to transmit semantic content by non-verbal means. Concerns have been expressed that limb apraxia would affect the production of communicative gestures. Research investigating if and how apraxia influences the production of communicative gestures has led to contradictory outcomes. The purpose of this study was to investigate the impact of limb apraxia on spontaneous gesturing. Further, linguistic and non-verbal semantic processing abilities were explored as potential factors that might influence non-verbal expression in aphasic patients. Twenty-four aphasic patients with highly limited verbal output were asked to retell short video-clips. The narrations were videotaped. Gestural communication was analyzed in two ways. In the first part of the study, we used a form-based approach. Physiological and kinetic aspects of hand movements were transcribed with a notation system for sign languages. We determined the formal diversity of the hand gestures as an indicator of the potential richness of the transmitted information. In the second part of the study, the comprehensibility of the patients' gestural communication was evaluated by naive raters. The raters were familiarized with the model video-clips and shown the recordings of the patients' retelling without sound. They were asked to indicate, for each narration, which story was being told and which aspects of the stories they recognized. The results indicate that non-verbal faculties are the most important prerequisites for the production of hand gestures. Whereas results on standardized aphasia testing did not correlate with any gestural indices, non-verbal semantic processing abilities predicted the formal diversity

  14. The Contribution of Auditory and Cognitive Factors to Intelligibility of Words and Sentences in Noise.

    Science.gov (United States)

    Heinrich, Antje; Knight, Sarah

    2016-01-01

    Understanding the causes for speech-in-noise (SiN) perception difficulties is complex, and is made even more difficult by the fact that listening situations can vary widely in target and background sounds. While there is general agreement that both auditory and cognitive factors are important, their exact relationship to SiN perception across various listening situations remains unclear. This study manipulated the characteristics of the listening situation in two ways: first, target stimuli were either isolated words, or words heard in the context of low- (LP) and high-predictability (HP) sentences; second, the background sound, speech-modulated noise, was presented at two signal-to-noise ratios. Speech intelligibility was measured for 30 older listeners (aged 62-84) with age-normal hearing and related to individual differences in cognition (working memory, inhibition and linguistic skills) and hearing (PTA(0.25-8 kHz) and temporal processing). The results showed that while the effect of hearing thresholds on intelligibility was rather uniform, the influence of cognitive abilities was more specific to a certain listening situation. By revealing a complex picture of relationships between intelligibility and cognition, these results may help us understand some of the inconsistencies in the literature as regards cognitive contributions to speech perception.
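
    The individual-differences approach described above (relating intelligibility to hearing and cognitive measures) is commonly implemented as a multiple regression; the sketch below shows the general form on simulated data. The predictors (a pure-tone average and a working-memory score), the coefficients and the noise level are hypothetical and are not the study's results.

```python
# Multiple regression of speech-in-noise intelligibility on hearing and cognition.
import numpy as np

rng = np.random.default_rng(2)
n = 30                                       # listeners (simulated)
pta = rng.normal(25, 8, n)                   # pure-tone average, dB HL
wm = rng.normal(0, 1, n)                     # working-memory z-score
intelligibility = 85 - 0.6 * pta + 4.0 * wm + rng.normal(0, 5, n)  # % correct

X = np.column_stack([np.ones(n), pta, wm])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, intelligibility, rcond=None)
print(f"intercept={beta[0]:.1f}, PTA slope={beta[1]:.2f}, WM slope={beta[2]:.2f}")
```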

  15. Speech comprehension training and auditory and cognitive processing in older adults.

    Science.gov (United States)

    Pichora-Fuller, M Kathleen; Levitt, Harry

    2012-12-01

    To provide a brief history of speech comprehension training systems and an overview of research on auditory and cognitive aging as background to recommendations for future directions for rehabilitation. Two distinct domains were reviewed: one concerning technological and the other concerning psychological aspects of training. Historical trends and advances in these 2 domains were interrelated to highlight converging trends and directions for future practice. Over the last century, technological advances have influenced both the design of hearing aids and training systems. Initially, training focused on children and those with severe loss for whom amplification was insufficient. Now the focus has shifted to older adults with relatively little loss but difficulties listening in noise. Evidence of brain plasticity from auditory and cognitive neuroscience provides new insights into how to facilitate perceptual (re-)learning by older adults. There is a new imperative to complement training to increase bottom-up processing of the signal with more ecologically valid training to boost top-down information processing based on knowledge of language and the world. Advances in digital technologies enable the development of increasingly sophisticated training systems incorporating complex meaningful materials such as music, audiovisual interactive displays, and conversation.

  16. Is conflict monitoring supramodal? Spatiotemporal dynamics of cognitive control processes in an auditory Stroop task

    Science.gov (United States)

    Donohue, Sarah E.; Liotti, Mario; Perez, Rick; Woldorff, Marty G.

    2011-01-01

    The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal. PMID:21964643
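
    The behavioural side of such a Stroop analysis, namely the congruency effect and the sequential (conflict-adaptation) effect that motivates the conflict-monitoring account, can be sketched on simulated reaction times as follows; all trial counts and millisecond values are illustrative assumptions.

```python
# Congruency effect and conflict-adaptation effect on simulated Stroop RTs.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 2000
congruent = rng.random(n_trials) < 0.5               # random trial sequence
rt = np.where(congruent,
              rng.normal(650, 80, n_trials),          # congruent trials: faster
              rng.normal(720, 80, n_trials))          # incongruent trials: slower

# Simulate adaptation: incongruent trials after an incongruent trial are faster.
prev_incongruent = np.concatenate(([False], ~congruent[:-1]))
rt = rt - 20 * (~congruent & prev_incongruent)

def mean_rt(mask):
    return rt[mask].mean()

congruency_effect = mean_rt(~congruent) - mean_rt(congruent)
cse = ((mean_rt(~congruent & ~prev_incongruent) - mean_rt(congruent & ~prev_incongruent))
       - (mean_rt(~congruent & prev_incongruent) - mean_rt(congruent & prev_incongruent)))
print(f"congruency effect: {congruency_effect:.0f} ms")
print(f"conflict adaptation (sequence) effect: {cse:.0f} ms")
```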

  17. The impact of the teachers’ non-verbal communication on success in teaching

    Directory of Open Access Journals (Sweden)

    FATEMEH BAMBAEEROO

    2017-04-01

    Full Text Available Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and also its impact on success in teaching. Methods: Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed that there was a strong relationship among the quality, amount and the method of using non-verbal communication by teachers while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students’ learning and academic success. The teachers’ attention to the students’ non-verbal reactions and arranging the syllabus considering the students’ mood and readiness have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is between two contradictory verbal and nonverbal messages, logic dictates that we push him toward the non-verbal message

  18. The impact of the teachers' non-verbal communication on success in teaching.

    Science.gov (United States)

    Bambaeeroo, Fatemeh; Shokrpour, Nasrin

    2017-04-01

    Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers' non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers' use of non-verbal communication and also its impact on success in teaching. Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. The results of this review revealed that there was a strong relationship among the quality, amount and the method of using non-verbal communication by teachers while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students' learning and academic success. The teachers' attention to the students' non-verbal reactions and arranging the syllabus considering the students' mood and readiness have been emphasized in the studies reviewed. It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is between two contradictory verbal and nonverbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay more attention to non-verbal than verbal messages because non-verbal

  19. The impact of the teachers’ non-verbal communication on success in teaching

    Science.gov (United States)

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and also its impact on success in teaching. Methods: Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed that there was a strong relationship among the quality, amount and the method of using non-verbal communication by teachers while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students’ learning and academic success. The teachers’ attention to the students’ non-verbal reactions and arranging the syllabus considering the students’ mood and readiness have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is between two contradictory verbal and nonverbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay

  20. Negative Symptoms and Avoidance of Social Interaction: A Study of Non-Verbal Behaviour.

    Science.gov (United States)

    Worswick, Elizabeth; Dimic, Sara; Wildgrube, Christiane; Priebe, Stefan

    2018-01-01

    Non-verbal behaviour is fundamental to social interaction. Patients with schizophrenia display an expressivity deficit of non-verbal behaviour, exhibiting behaviour that differs from both healthy subjects and patients with different psychiatric diagnoses. The present study aimed to explore the association between non-verbal behaviour and symptom domains, overcoming methodological shortcomings of previous studies. Standardised interviews with 63 outpatients diagnosed with schizophrenia were videotaped. Symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS), the Positive and Negative Syndrome Scale (PANSS) and the Calgary Depression Scale. Independent raters later analysed the videos for non-verbal behaviour, using a modified version of the Ethological Coding System for Interviews (ECSI). Patients with a higher level of negative symptoms displayed significantly fewer prosocial (e.g., nodding and smiling), gesture, and displacement behaviours (e.g., fumbling), but significantly more flight behaviours (e.g., looking away, freezing). No gender differences were found, and these associations held true when adjusted for antipsychotic medication dosage. Negative symptoms are associated with both a lower level of actively engaging non-verbal behaviour and an increased active avoidance of social contact. Future research should aim to identify the mechanisms behind flight behaviour, with implications for the development of treatments to improve social functioning. © 2017 S. Karger AG, Basel.

  1. Parts of Speech in Non-typical Function: (A)symmetrical Encoding of Non-verbal Predicates in Erzya

    Directory of Open Access Journals (Sweden)

    Rigina Turunen

    2011-01-01

    Full Text Available Erzya non-verbal conjugation refers to symmetric paradigms in which non-verbal predicates behave morphosyntactically in a similar way to verbal predicates. Notably, though, non-verbal conjugational paradigms are asymmetric, which is seen as an outcome of paradigmatic neutralisation in less frequent/less typical contexts. For non-verbal predicates it is not obligatory to display the same amount of behavioural potential as it is for verbal predicates, and the lexical class of non-verbal predicate operates in such a way that adjectival predicates are more likely to be conjugated than nominals. Further, besides symmetric paradigms and constructions, in Erzya there are non-verbal predicate constructions which display a more overt structural encoding than do verbal ones, namely, copula constructions. Complexity in the domain of non-verbal predication in Erzya decreases the symmetry of the paradigms. Complexity increases in asymmetric constructions, as well as in paradigmatic neutralisation when non-verbal predicates cannot be inflected in all the tenses and moods occurring in verbal predication. The results would be the reverse if we were to measure complexity in terms of the morphological structure. The asymmetric features in non-verbal predication are motivated language-externally, because non-verbal predicates refer to states and occur less frequently as predicates than verbal categories. The symmetry of the paradigms and constructions is motivated language-internally: a grammatical system with fewer rules is economical.

  2. Non-verbal Communication in a Neonatal Intensive Care Unit: A Video Audit Using Non-verbal Immediacy Scale (NIS-O).

    Science.gov (United States)

    Nimbalkar, Somashekhar Marutirao; Raval, Himalaya; Bansal, Satvik Chaitanya; Pandya, Utkarsh; Pathak, Ajay

    2018-05-03

    Effective communication with parents is a very important skill for pediatricians, especially in a neonatal setup. The authors analyzed the non-verbal communication of medical caregivers during counseling sessions. Recorded videos of counseling sessions from March-April 2016 were audited. Counseling episodes were scored using the Non-verbal Immediacy Scale Observer Report (NIS-O). A total of 150 videos of counseling sessions were audited. The mean (SD) total score on the NIS-O was 78.96 (7.07). Sessions in which a female was counseled had a significantly higher proportion of low scores (p … communication skills in a neonatal unit. This study lays down a template on which other neonatal intensive care units (NICUs) can carry out gap-defining audits.

  3. Phenomenology of non-verbal communication as a representation of sports activities

    Directory of Open Access Journals (Sweden)

    Liubov Karpets

    2018-04-01

    Full Text Available In sports, the dominant form of professional language activity is non-verbal communication such as body language. Purpose: to identify the main aspects of non-verbal communication as a representation of sports activities. Material & Methods: the study involved members of sports teams and individual athletes, in particular from basketball, handball, volleyball, football, hockey, and bodybuilding. Results: the research revealed that in sports activities forms of non-verbal communication such as gestures, facial expressions, and physique overlap, and, as a consequence, the position "everything is language" (Lyotard) is embodied. Conclusions: non-verbal communication is one of the most significant forms of communication in sports. Additional means of communication through the "language" of the body help athletes achieve self-realization and self-determination.

  4. The Effects of Verbal and Non-Verbal Features on the Reception of DRTV Commercials

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2016-12-01

    Full Text Available Analyses of consumer response are important for successful advertising as they help advertisers to find new, original and successful ways of persuasion. Successful advertisements have to boost the product’s benefits but they also have to appeal to consumers’ emotions. In TV advertisements, this is done by means of verbal and non-verbal strategies. The paper presents the results of an empirical investigation whose purpose was to examine the viewers’ emotional responses to a DRTV commercial induced by different verbal and non-verbal features, the amount of credibility and persuasiveness of the commercial and its general acceptability. Our findings indicate that (1) an overload of the same verbal and non-verbal information decreases persuasion; and (2) highly marked prosodic delivery is either exaggerated or funny, while the speaker is perceived as annoying.

  5. Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors

    Science.gov (United States)

    Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias

    The goal of this paper is to build a bridge between social relationship and cultural variation in order to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of an agent's non-verbal behaviors in human-agent interaction (HAI). As the first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social factors with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.

  6. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  7. Blocking-out auditory distracters while driving : A cognitive strategy to reduce task-demands on the road

    NARCIS (Netherlands)

    Unal, Ayca Berfu; Platteel, Samantha; Steg, Linda; Epstude, Kai

    The current research examined how drivers handle task-demands induced by listening to the radio while driving. In particular, we explored the traces of a possible cognitive strategy that might be used by drivers to cope with task-demands, namely blocking-out auditory distracters. In Study 1 (N =

  8. Cognitive Control of Auditory Distraction: Impact of Task Difficulty, Foreknowledge, and Working Memory Capacity Supports Duplex-Mechanism Account

    Science.gov (United States)

    Hughes, Robert W.; Hurlstone, Mark J.; Marsh, John E.; Vachon, Francois; Jones, Dylan M.

    2013-01-01

    The influence of top-down cognitive control on 2 putatively distinct forms of distraction was investigated. Attentional capture by a task-irrelevant auditory deviation (e.g., a female-spoken token following a sequence of male-spoken tokens)--as indexed by its disruption of a visually presented recall task--was abolished when focal-task engagement…

  9. Executive functioning and non-verbal intelligence as predictors of bullying in early elementary school

    NARCIS (Netherlands)

    Verlinden, Marina; Veenstra, René; Ghassabian, Akhgar; Jansen, P.W.; Hofman, Albert; Jaddoe, Vincent W. V.; Verhulst, F.C.; Tiemeier, Henning

    Executive function and intelligence are negatively associated with aggression, yet the role of executive function has rarely been examined in the context of school bullying. We studied whether different domains of executive function and non-verbal intelligence are associated with bullying

  10. Toward a digitally mediated, transgenerational negotiation of verbal and non-verbal concepts in daycare

    DEFF Research Database (Denmark)

    Chimirri, Niklas Alexander

    an adult researcher’s research problem and her/his conceptual knowledge of the child-adult-digital media interaction are able to do justice to what the children actually intend to communicate about their experiences and actions, both verbally and non-verbally, by and large remains little explored...

  11. “Communication by impact” and other forms of non-verbal ...

    African Journals Online (AJOL)

    This article aims to review the importance, place and especially the emotional impact of non-verbal communication in psychiatry. The paper argues that while biological psychiatry is in the ascendency with increasing discoveries being made about the functioning of the brain and psycho-pharmacology, it is important to try ...

  12. Development of non-verbal intellectual capacity in school-age children with cerebral palsy

    NARCIS (Netherlands)

    Smits, D. W.; Ketelaar, M.; Gorter, J. W.; van Schie, P. E.; Becher, J. G.; Lindeman, E.; Jongmans, M. J.

    Background Children with cerebral palsy (CP) are at greater risk for a limited intellectual development than typically developing children. Little information is available which children with CP are most at risk. This study aimed to describe the development of non-verbal intellectual capacity of

  13. Presentation Trainer: a toolkit for learning non-verbal public speaking skills

    NARCIS (Netherlands)

    Schneider, Jan; Börner, Dirk; Van Rosmalen, Peter; Specht, Marcus

    2014-01-01

    The paper presents and outlines the demonstration of Presentation Trainer, a prototype that works as a public speaking instructor. It tracks and analyses the body posture, movements and voice of the user in order to give instructional feedback on non-verbal communication skills. Besides exploring

  14. Non-verbal communication between primary care physicians and older patients: how does race matter?

    Science.gov (United States)

    Stepanikova, Irena; Zhang, Qian; Wieland, Darryl; Eleazer, G Paul; Stewart, Thomas

    2012-05-01

    Non-verbal communication is an important aspect of the diagnostic and therapeutic process, especially with older patients. It is unknown how non-verbal communication varies with physician and patient race. To examine the joint influence of physician race and patient race on non-verbal communication displayed by primary care physicians during medical interviews with patients 65 years or older. Video-recordings of visits of 209 patients 65 years old or older to 30 primary care physicians at three clinics located in the Midwest and Southwest. Duration of physicians' open body position, eye contact, smile, and non-task touch, coded using an adaption of the Nonverbal Communication in Doctor-Elderly Patient Transactions form. African American physicians with African American patients used more open body position, smile, and touch, compared to the average across other dyads (adjusted mean difference for open body position = 16.55, p non-verbal communication with older patients. Its influence is best understood when physician race and patient race are considered jointly.

  15. Interactive use of communication by verbal and non-verbal autistic children.

    Science.gov (United States)

    Amato, Cibelle Albuquerque de la Higuera; Fernandes, Fernanda Dreux Miranda

    2010-01-01

    Communication of autistic children. To assess the communication functionality of verbal and non-verbal children of the autistic spectrum and to identify possible associations amongst the groups. Subjects were 20 children of the autistic spectrum divided into two groups: V with 10 verbal children and NV with 10 non-verbal children with ages varying between 2y10m and 10y6m. All subjects were video recorded during 30 minutes of spontaneous interaction with their mothers. The samples were analyzed according to the functional communicative profile and comparisons within and between groups were conducted. Data referring to the occupation of communicative space suggest that there is an even balance between each child and his mother. The number of communicative acts per minute shows a clear difference between verbal and non-verbal children. Both verbal and non-verbal children use mostly gestural communicative means in their interactions. Data about the use of interpersonal communicative functions point to the autistic children's great interactive impairment. The characterization of the functional communicative profile proposed in this study confirmed the autistic children's difficulties with interpersonal communication and that these difficulties do not depend on the preferred communicative means.

  16. An executable model of the interaction between verbal and non-verbal communication.

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations. The generic

  17. Non-Verbal Communication Training: An Avenue for University Professionalizing Programs?

    Science.gov (United States)

    Gazaille, Mariane

    2011-01-01

    In accordance with today's workplace expectations, many university programs identify the ability to communicate as a crucial asset for future professionals. Yet, if the teaching of verbal communication is clearly identifiable in most university programs, the same cannot be said of non-verbal communication (NVC). Knowing the importance of the…

  18. An Executable Model of the Interaction between Verbal and Non-Verbal Communication

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.; Dignum, F.; Greaves, M.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations. The generic

  19. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measure non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviews the subjects following one of two scripted scenarios: in one scenario the actor showed minimal engagement with the subject; the second scenario included active listening by the doctor and attentiveness to the subject. We analyze the cross correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion termed jitter that has been recently suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
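
    The record describes the analysis only at a high level. Below is a minimal sketch of that kind of computation, assuming motion is available as tracked 2-D keypoints per video frame; the function names, frame rate, and lag window are illustrative assumptions, not the authors' actual pipeline.

        import numpy as np
        from scipy.signal import correlate, welch

        def kinetic_energy(positions, fps):
            """Per-frame proxy for total kinetic energy: sum of squared
            frame-to-frame displacements over all tracked points (unit mass)."""
            velocities = np.diff(positions, axis=0) * fps       # (frames-1, points, 2)
            return 0.5 * np.sum(velocities ** 2, axis=(1, 2))   # (frames-1,)

        def dyad_motion_measures(pos_doctor, pos_patient, fps=25, max_lag_s=3.0):
            """Normalised cross-correlation of the two kinetic-energy series
            within +/- max_lag_s, plus each person's motion power spectrum."""
            ke_d = kinetic_energy(pos_doctor, fps)
            ke_p = kinetic_energy(pos_patient, fps)
            ke_d = (ke_d - ke_d.mean()) / ke_d.std()
            ke_p = (ke_p - ke_p.mean()) / ke_p.std()
            xcorr = correlate(ke_d, ke_p, mode="full") / len(ke_d)
            lags = np.arange(-len(ke_d) + 1, len(ke_d)) / fps
            keep = np.abs(lags) <= max_lag_s
            f_d, psd_d = welch(ke_d, fs=fps)                    # frequency spectrum of motion
            f_p, psd_p = welch(ke_p, fs=fps)
            return lags[keep], xcorr[keep], (f_d, psd_d), (f_p, psd_p)

    An asymmetric peak in the cross-correlation (one person's motion consistently lagging the other's) is the kind of feature that can be read as followership, while power at higher frequencies corresponds to the "jitter" mentioned in the record.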

  20. Quality Matters! Differences between Expressive and Receptive Non-Verbal Communication Skills in Adolescents with ASD

    Science.gov (United States)

    Grossman, Ruth B.; Tager-Flusberg, Helen

    2012-01-01

    We analyzed several studies of non-verbal communication (prosody and facial expressions) completed in our lab and conducted a secondary analysis to compare performance on receptive vs. expressive tasks by adolescents with ASD and their typically developing peers. Results show a significant between-group difference for the aggregate score of…

  1. Interpersonal Interactions in Instrumental Lessons: Teacher/Student Verbal and Non-Verbal Behaviours

    Science.gov (United States)

    Zhukov, Katie

    2013-01-01

    This study examined verbal and non-verbal teacher/student interpersonal interactions in higher education instrumental music lessons. Twenty-four lessons were videotaped and teacher/student behaviours were analysed using a researcher-designed instrument. The findings indicate predominance of student and teacher joke among the verbal behaviours with…

  2. The Introduction of Non-Verbal Communication in Greek Education: A Literature Review

    Science.gov (United States)

    Stamatis, Panagiotis J.

    2012-01-01

    Introduction: The introductory part of this paper underlines the research interest of the educational community in the issue of non-verbal communication in education. The question of introducing this scientific field into Greek education is addressed within the context of this research, which includes many aspects. Method: The paper essentially…

  3. Effect of interaction with clowns on vital signs and non-verbal communication of hospitalized children.

    Science.gov (United States)

    Alcântara, Pauline Lima; Wogel, Ariane Zonho; Rossi, Maria Isabela Lobo; Neves, Isabela Rodrigues; Sabates, Ana Llonch; Puggina, Ana Cláudia

    2016-12-01

    To compare the non-verbal communication of children before and during interaction with clowns and compare their vital signs before and after this interaction. Uncontrolled, intervention, cross-sectional, quantitative study with children admitted to a public university hospital. The intervention was performed by medical students dressed as clowns and included magic tricks, juggling, singing with the children, making soap bubbles and comedic performances. The intervention time was 20 minutes. Vital signs were assessed in two measurements with an interval of one minute immediately before and after the interaction. Non-verbal communication was observed before and during the interaction using the Non-Verbal Communication Template Chart, a tool in which nonverbal behaviors are assessed as effective or ineffective in the interactions. The sample consisted of 41 children with a mean age of 7.6±2.7 years; most were aged 7 to 11 years (n=23; 56%) and were males (n=26; 63.4%). There was a statistically significant difference in systolic and diastolic blood pressure, pain and non-verbal behavior of children with the intervention. Systolic and diastolic blood pressure increased and pain scales showed decreased scores. The playful interaction with clowns can be a therapeutic resource to minimize the effects of the stressful environment during the intervention, improve the children's emotional state and reduce the perception of pain. Copyright © 2016 Sociedade de Pediatria de São Paulo. Published by Elsevier Editora Ltda. All rights reserved.

  4. Verbal and Non-Verbal Communication and Coordination in Mission Control

    Science.gov (United States)

    Vinkhuyzen, Erik; Norvig, Peter (Technical Monitor)

    1998-01-01

    In this talk I will present some video-materials gathered in Mission Control during simulations. The focus of the presentation will be on verbal and non-verbal communication between the officers in the front and backroom, especially the practices that have evolved around a peculiar communications technology called voice loops.

  5. Trauma team leaders' non-verbal communication: video registration during trauma team training.

    Science.gov (United States)

    Härgestam, Maria; Hultin, Magnus; Brulin, Christine; Jacobsson, Maritha

    2016-03-25

    There is widespread consensus on the importance of safe and secure communication in healthcare, especially in trauma care where time is a limiting factor. Although non-verbal communication has an impact on communication between individuals, there is only limited knowledge of how trauma team leaders communicate. The purpose of this study was to investigate how trauma team members are positioned in the emergency room, and how leaders communicate in terms of gaze direction, vocal nuances, and gestures during trauma team training. Eighteen trauma teams were audio and video recorded during trauma team training in the emergency department of a hospital in northern Sweden. Quantitative content analysis was used to categorize the team members' positions and the leaders' non-verbal communication: gaze direction, vocal nuances, and gestures. The quantitative data were interpreted in relation to the specific context. Time sequences of the leaders' gaze direction, speech time, and gestures were identified separately and registered as time (seconds) and proportions (%) of the total training time. The team leaders who gained control over the most important area in the emergency room, the "inner circle", positioned themselves as heads over the team, using gaze direction, gestures, vocal nuances, and verbal commands that solidified their verbal message. Changes in position required both attention and collaboration. Leaders who spoke in a hesitant voice, or were silent, expressed ambiguity in their non-verbal communication, and other team members took over the leader's tasks. In teams where the leader had control over the inner circle, the members seemed to have an awareness of each other's roles and tasks, knowing when in time and where in space these tasks needed to be executed. Deviations in the leaders' communication increased the ambiguity in the communication, which had consequences for the teamwork. Communication cannot be taken for granted; it needs to be practiced

  6. Partial maintenance of auditory-based cognitive training benefits in older adults

    Science.gov (United States)

    Anderson, Samira; White-Schwoch, Travis; Choi, Hee Jae; Kraus, Nina

    2014-01-01

    The potential for short-term training to improve cognitive and sensory function in older adults has captured the public’s interest. Initial results have been promising. For example, eight weeks of auditory-based cognitive training decreases peak latencies and peak variability in neural responses to speech presented in a background of noise and instills gains in speed of processing, speech-in-noise recognition, and short-term memory in older adults. But while previous studies have demonstrated short-term plasticity in older adults, we must consider the long-term maintenance of training gains. To evaluate training maintenance, we invited participants from an earlier training study to return for follow-up testing six months after the completion of training. We found that improvements in response peak timing to speech in noise and speed of processing were maintained, but the participants did not maintain speech-in-noise recognition or memory gains. Future studies should consider factors that are important for training maintenance, including the nature of the training, compliance with the training schedule, and the need for booster sessions after the completion of primary training. PMID:25111032

  7. Auditory Short-Term Memory Capacity Correlates with Gray Matter Density in the Left Posterior STS in Cognitively Normal and Dyslexic Adults

    Science.gov (United States)

    Richardson, Fiona M.; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M.; Leff, Alex P.; Price, Cathy J.

    2011-01-01

    A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using…

  8. Exploring Children’s Peer Relationships through Verbal and Non-verbal Communication: A Qualitative Action Research Focused on Waldorf Pedagogy

    Directory of Open Access Journals (Sweden)

    Aida Milena Montenegro Mantilla

    2007-12-01

    Full Text Available This study analyzes the relationships that children around seven and eight years old establish in a classroom. It shows that peer relationships have a positive dimension with features such as the development of children’s creativity to communicate and modify norms. These features were found through an analysis of children’s verbal and non-verbal communication and an interdisciplinary view of children’s learning process from Rudolf Steiner, founder of Waldorf Pedagogy, and Jean Piaget and Lev Vygotsky, specialists in children’s cognitive and social dimensions. This research is an invitation to recognize children’s capacity to construct their own rules in peer relationships.

  9. Improviser non verbalement pour l’apprentissage de la langue parlée

    Directory of Open Access Journals (Sweden)

    Francine Chaîné

    2015-04-01

    Full Text Available A reflective text on the practice of improvisation in a school context for learning spoken language. One might think that verbal improvisation is the means par excellence for learning the language, but experience has shown us the richness of non-verbal improvisation followed by spoken reflection on the practice as a privileged means. The article is illustrated with a non-verbal improvisation workshop aimed at children or adolescents.

  10. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction, and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-lookin...

  11. Oncologists' non-verbal behavior and analog patients' recall of information.

    Science.gov (United States)

    Hillen, Marij A; de Haes, Hanneke C J M; van Tienhoven, Geertjan; van Laarhoven, Hanneke W M; van Weert, Julia C M; Vermeulen, Daniëlle M; Smets, Ellen M A

    2016-06-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture and smiling, on patients' recall of information and perceived friendliness of the oncologist. Moreover, the influence of patient characteristics on recall was examined, both directly or as a moderator of non-verbal communication. Material and methods Non-verbal communication of an oncologist was experimentally varied using video vignettes. In total 194 breast cancer patients/survivors and healthy women participated as 'analog patients', viewing a randomly selected video version while imagining themselves in the role of the patient. Directly after viewing, they evaluated the oncologist. From 24 to 48 hours later, participants' passive recall, i.e. recognition, and free recall of information provided by the oncologist were assessed. Results Participants' recognition was higher if the oncologist maintained more consistent eye contact (β = 0.17). More eye contact and smiling led to a perception of the oncologist as more friendly. Body posture and smiling did not significantly influence recall. Older age predicted significantly worse recognition (β = -0.28) and free recall (β = -0.34) of information. Conclusion Oncologists may be able to facilitate their patients' recall functioning through consistent eye contact. This seems particularly relevant for older patients, whose recall is significantly worse. These findings can be used in training, focused on how to maintain eye contact while managing computer tasks.

  12. Shall we use non-verbal fluency in schizophrenia? A pilot study.

    Science.gov (United States)

    Rinaldi, Romina; Trappeniers, Julie; Lefebvre, Laurent

    2014-05-30

    Over the last few years, numerous studies have attempted to explain fluency impairments in people with schizophrenia, leading to heterogeneous results. This could notably be due to the fact that fluency is often used in its verbal form where semantic dimensions are implied. In order to gain an in-depth understanding of fluency deficits, a non-verbal fluency task - the Five-Point Test (5PT) - was proposed to 24 patients with schizophrenia and to 24 healthy subjects categorized in terms of age, gender and schooling. The 5PT involves producing as many abstract figures as possible within 1 min by connecting points with straight lines. All subjects also completed the Frontal Assessment Battery (FAB) while those with schizophrenia were further assessed using the Positive and Negative Syndrome Scale (PANSS). Results show that the 5PT evaluation differentiates patients from healthy subjects with regard to the number of figures produced. Patients' results also suggest that the number of figures produced is linked to the "overall executive functioning" and to some inhibition components. Although this study is a first step in the non-verbal fluency research field, we believe that experimental psychopathology could benefit from the investigations on non-verbal fluency. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Auditory and cognitive deficits associated with acquired amusia after stroke: a magnetoencephalography and neuropsychological follow-up study.

    Directory of Open Access Journals (Sweden)

    Teppo Särkämö

    2010-12-01

    Full Text Available Acquired amusia is a common disorder after damage to the middle cerebral artery (MCA) territory. However, its neurocognitive mechanisms, especially the relative contribution of perceptual and cognitive factors, are still unclear. We studied cognitive and auditory processing in the amusic brain by performing neuropsychological testing as well as magnetoencephalography (MEG) measurements of frequency and duration discrimination using magnetic mismatch negativity (MMNm) recordings. Fifty-three patients with a left (n = 24) or right (n = 29) hemisphere MCA stroke (MRI verified) were investigated 1 week, 3 months, and 6 months after the stroke. Amusia was evaluated using the Montreal Battery of Evaluation of Amusia (MBEA). We found that amusia caused by right hemisphere damage (RHD), especially to temporal and frontal areas, was more severe than amusia caused by left hemisphere damage (LHD). Furthermore, the severity of amusia was found to correlate with weaker frequency MMNm responses only in amusic RHD patients. Additionally, within the RHD subgroup, the amusic patients who had damage to the auditory cortex (AC) showed worse recovery on the MBEA as well as weaker MMNm responses throughout the 6-month follow-up than the non-amusic patients or the amusic patients without AC damage. Furthermore, the amusic patients both with and without AC damage performed worse than the non-amusic patients on tests of working memory, attention, and cognitive flexibility. These findings suggest domain-general cognitive deficits to be the primary mechanism underlying amusia without AC damage whereas amusia with AC damage is associated with both auditory and cognitive deficits.

  14. Auditory and cognitive deficits associated with acquired amusia after stroke: a magnetoencephalography and neuropsychological follow-up study.

    Science.gov (United States)

    Särkämö, Teppo; Tervaniemi, Mari; Soinila, Seppo; Autti, Taina; Silvennoinen, Heli M; Laine, Matti; Hietanen, Marja; Pihko, Elina

    2010-12-02

    Acquired amusia is a common disorder after damage to the middle cerebral artery (MCA) territory. However, its neurocognitive mechanisms, especially the relative contribution of perceptual and cognitive factors, are still unclear. We studied cognitive and auditory processing in the amusic brain by performing neuropsychological testing as well as magnetoencephalography (MEG) measurements of frequency and duration discrimination using magnetic mismatch negativity (MMNm) recordings. Fifty-three patients with a left (n = 24) or right (n = 29) hemisphere MCA stroke (MRI verified) were investigated 1 week, 3 months, and 6 months after the stroke. Amusia was evaluated using the Montreal Battery of Evaluation of Amusia (MBEA). We found that amusia caused by right hemisphere damage (RHD), especially to temporal and frontal areas, was more severe than amusia caused by left hemisphere damage (LHD). Furthermore, the severity of amusia was found to correlate with weaker frequency MMNm responses only in amusic RHD patients. Additionally, within the RHD subgroup, the amusic patients who had damage to the auditory cortex (AC) showed worse recovery on the MBEA as well as weaker MMNm responses throughout the 6-month follow-up than the non-amusic patients or the amusic patients without AC damage. Furthermore, the amusic patients both with and without AC damage performed worse than the non-amusic patients on tests of working memory, attention, and cognitive flexibility. These findings suggest domain-general cognitive deficits to be the primary mechanism underlying amusia without AC damage whereas amusia with AC damage is associated with both auditory and cognitive deficits.

  15. A comparison of processing load during non-verbal decision-making in two individuals with aphasia

    Directory of Open Access Journals (Sweden)

    Salima Suleman

    2015-05-01

    Full Text Available INTRODUCTION A growing body of evidence suggests people with aphasia (PWA) can have impairments to cognitive functions such as attention, working memory and executive functions (1-5). Such cognitive impairments have been shown to negatively affect the decision-making (DM) abilities of adults with neurological damage (6,7). However, little is known about the DM abilities of PWA (8). Pupillometry is "the measurement of changes in pupil diameter" (9, p.1). Researchers have reported a positive relationship between processing load and phasic pupil size (i.e., as processing load increases, pupil size increases) (10). Thus pupillometry has the potential to be a useful tool for investigating processing load during DM in PWA. AIMS The primary aim of this study was to establish the feasibility of using pupillometry during a non-verbal DM task with PWA. The secondary aim was to explore non-verbal DM performance in PWA and determine the relationship between DM performance and processing load using pupillometry. METHOD DESIGN. A single-subject case-study design with two participants was used in this study. PARTICIPANTS. Two adult males with anomic aphasia participated in this study. Participants were matched for age and education. Both participants were independent, able to drive, and had legal autonomy. MEASURES. PERFORMANCE ON A DM TASK. We used a computerized risk-taking card game, the Iowa Gambling Task (IGT), as our non-verbal DM task (11). In the IGT, participants made 100 selections (via eye gaze) from four decks of cards presented on the computer screen with the goal of maximizing their overall hypothetical monetary gain. PROCESSING LOAD. The EyeLink 1000+ eye tracking system was used to collect pupil size measures while participants deliberated before each deck selection during the IGT. For this analysis, we calculated change in pupil size as a measure of processing load. RESULTS P1. P1 made increasingly advantageous decisions as the task progressed (Fig. 1). When
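
    The record states only that change in pupil size was used as the processing-load measure. A minimal sketch of one conventional baseline-correction approach is shown below, assuming a continuous pupil-diameter trace and known deliberation onsets; the sampling rate, baseline length, and window length are illustrative assumptions, not the authors' parameters.

        import numpy as np

        def pupil_dilation_per_trial(pupil_trace, trial_onsets, fs,
                                     baseline_s=0.5, window_s=3.0):
            """For each deliberation period, return the change in pupil size:
            mean diameter in the deliberation window minus the mean of a short
            pre-trial baseline (onsets are sample indices with room for the
            baseline; larger values are read as higher processing load)."""
            changes = []
            for onset in trial_onsets:
                base = pupil_trace[onset - int(baseline_s * fs):onset]
                win = pupil_trace[onset:onset + int(window_s * fs)]
                changes.append(np.nanmean(win) - np.nanmean(base))
            return np.array(changes)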

  16. Auditory and Visual Continuous Performance Tests: Relationships with Age, Gender, Cognitive Functioning, and Classroom Behavior

    Science.gov (United States)

    Lehman, Elyse Brauch; Olson, Vanessa A.; Aquilino, Sally A.; Hall, Laura C.

    2006-01-01

    Elementary school children in three grade groups (Grades K/1, 3, and 5/6) completed either the auditory or the visual 1/9 vigilance task from the Gordon Diagnostic System (GDS) as well as subtests from the Wechsler Intelligence Scale for Children--Third Edition and auditory or visual processing subtests from the Woodcock-Johnson Tests of Cognitive…

  17. Auditory and Visual Memory Span: Cognitive Processing by TMR Individuals with Down Syndrome or Other Etiologies.

    Science.gov (United States)

    Varnhagen, Connie K.; And Others

    1987-01-01

    Auditory and visual memory span were examined with 13 Down Syndrome and 15 other trainable mentally retarded young adults. Although all subjects demonstrated relatively poor auditory memory span, Down Syndrome subjects were especially poor at long-term memory access for visual stimulus identification and short-term storage and processing of…

  18. From ear to body: the auditory-motor loop in spatial cognition

    Directory of Open Access Journals (Sweden)

    Isabelle eViaud-Delmon

    2014-09-01

    Full Text Available Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject’s head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e. a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorise the localisation of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of searching paths allowed observing how auditory information was coded to memorise the position of the target. They suggested that space can be efficiently coded without visual information in normal sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.

  19. From ear to body: the auditory-motor loop in spatial cognition.

    Science.gov (United States)

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimitated area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of searching paths allowed observing how auditory information was coded to memorize the position of the target. They suggested that space can be efficiently coded without visual information in normal sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
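
    As a rough illustration of the control logic such a task requires (not the authors' HRTF-based rendering system), the sketch below turns tracked head coordinates into head-relative landmark directions for a binaural renderer and decides when the hidden auditory target should be triggered; the trigger radius, coordinate layout, and function names are assumptions.

        import numpy as np

        def on_target(head_xy, target_xy, radius=0.4):
            """True when the listener's head is within `radius` metres of the
            hidden auditory target (the radius is an assumed value)."""
            return np.linalg.norm(np.asarray(head_xy) - np.asarray(target_xy)) < radius

        def update_scene(head_xy, head_yaw_deg, landmarks, target_xy):
            """One tracking frame: head-relative azimuth and distance for each
            auditory landmark, plus whether the target sound should play."""
            rel = {}
            for name, pos in landmarks.items():
                dx, dy = np.asarray(pos) - np.asarray(head_xy)
                azimuth = (np.degrees(np.arctan2(dy, dx)) - head_yaw_deg) % 360.0
                rel[name] = (azimuth, np.hypot(dx, dy))
            return rel, on_target(head_xy, target_xy)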

  20. Deaf children’s non-verbal working memory is impacted by their language experience

    Directory of Open Access Journals (Sweden)

    Chloe eMarshall

    2015-05-01

    Full Text Available Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children’s reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and if language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality – spoken or signed – rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM

  1. Network structure underlying resolution of conflicting non-verbal and verbal social information.

    Science.gov (United States)

    Watanabe, Takamitsu; Yahata, Noriaki; Kawakubo, Yuki; Inoue, Hideyuki; Takano, Yosuke; Iwashiro, Norichika; Natsubori, Tatsunobu; Takao, Hidemasa; Sasaki, Hiroki; Gonoi, Wataru; Murakami, Mizuho; Katsura, Masaki; Kunimatsu, Akira; Abe, Osamu; Kasai, Kiyoto; Yamasue, Hidenori

    2014-06-01

    Social judgments often require resolution of incongruity in communication contents. Although previous studies revealed that such conflict resolution recruits brain regions including the medial prefrontal cortex (mPFC) and posterior inferior frontal gyrus (pIFG), functional relationships and networks among these regions remain unclear. In this functional magnetic resonance imaging study, we investigated the functional dissociation and networks by measuring human brain activity during resolving incongruity between verbal and non-verbal emotional contents. First, we found that the conflict resolutions biased by the non-verbal contents activated the posterior dorsal mPFC (post-dmPFC), bilateral anterior insula (AI) and right dorsal pIFG, whereas the resolutions biased by the verbal contents activated the bilateral ventral pIFG. In contrast, the anterior dmPFC (ant-dmPFC), bilateral superior temporal sulcus and fusiform gyrus were commonly involved in both of the resolutions. Second, we found that the post-dmPFC and right ventral pIFG were hub regions in networks underlying the non-verbal- and verbal-content-biased resolutions, respectively. Finally, we revealed that these resolution-type-specific networks were bridged by the ant-dmPFC, which was recruited for the conflict resolutions earlier than the two hub regions. These findings suggest that, in social conflict resolutions, the ant-dmPFC selectively recruits one of the resolution-type-specific networks through its interaction with resolution-type-specific hub regions. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  2. School effects on non-verbal intelligence and nutritional status in rural Zambia

    OpenAIRE

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2015-01-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and ...
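
    The between-school variance shares reported here (23.5% for NI, 7.7% for BMI) are the kind of quantity an intercept-only multilevel model yields as an intraclass correlation. A minimal sketch using statsmodels is given below; the column names (school_id, non_verbal_iq) are hypothetical, not the authors' variables.

        import pandas as pd
        import statsmodels.formula.api as smf

        def school_level_variance_share(df: pd.DataFrame, outcome: str) -> float:
            """Fit an intercept-only two-level model (pupils nested in schools)
            and return the share of variance lying between schools (the ICC)."""
            fit = smf.mixedlm(f"{outcome} ~ 1", data=df, groups=df["school_id"]).fit(reml=True)
            between = fit.cov_re.iloc[0, 0]   # school-level (random intercept) variance
            within = fit.scale                # pupil-level residual variance
            return between / (between + within)

        # Hypothetical usage, assuming df holds one row per pupil:
        # icc_ni = school_level_variance_share(df, "non_verbal_iq")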

  3. Linguistic analysis of verbal and non-verbal communication in the operating room.

    Science.gov (United States)

    Moore, Alison; Butt, David; Ellis-Clarke, Jodie; Cartmill, John

    2010-12-01

    Surgery can be a triumph of co-operation, the procedure evolving as a result of joint action between multiple participants. The communication that mediates the joint action of surgery is conveyed by verbal but particularly by non-verbal signals. Competing priorities superimposed by surgical learning must also be negotiated within this context and this paper draws on techniques of systemic functional linguistics to observe and analyse the flow of information during such a phase of surgery. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.

  4. Psychoacoustic and cognitive aspects of auditory roughness: definitions, models, and applications

    Science.gov (United States)

    Vassilakis, Pantelis N.; Kendall, Roger A.

    2010-02-01

    The term "auditory roughness" was first introduced in the 19th century to describe the buzzing, rattling auditory sensation accompanying narrow harmonic intervals (i.e. two tones with frequency difference in the range of ~15-150Hz, presented simultaneously). A broader definition and an overview of the psychoacoustic correlates of the auditory roughness sensation, also referred to as sensory dissonance, is followed by an examination of efforts to quantify it over the past one hundred and fifty years and leads to the introduction of a new roughness calculation model and an application that automates spectral and roughness analysis of sound signals. Implementation of spectral and roughness analysis is briefly discussed in the context of two pilot perceptual experiments, designed to assess the relationship among cultural background, music performance practice, and aesthetic attitudes towards the auditory roughness sensation.
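
    The record does not give the new model's equations, so the sketch below only illustrates the general pairwise-partial approach shared by this family of roughness estimators; the curve constants follow Sethares' widely reproduced parameterisation of the Plomp-Levelt dissonance data, not the model proposed by the authors.

        import numpy as np

        def pair_roughness(f1, a1, f2, a2):
            """Roughness contribution of one pair of partials (Sethares-style
            parameterisation; constants are approximate, for illustration)."""
            f_lo, f_hi = min(f1, f2), max(f1, f2)
            s = 0.24 / (0.0207 * f_lo + 18.96)   # scales the curve with register
            d = f_hi - f_lo
            return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

        def spectrum_roughness(freqs, amps):
            """Total roughness of a spectrum: sum over all pairs of partials."""
            total = 0.0
            for i in range(len(freqs)):
                for j in range(i + 1, len(freqs)):
                    total += pair_roughness(freqs[i], amps[i], freqs[j], amps[j])
            return total

        # Two pure tones ~25 Hz apart in mid register fall near the roughness maximum:
        print(spectrum_roughness([440.0, 465.0], [1.0, 1.0]))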

  5. Self-supervised, mobile-application based cognitive training of auditory attention: A behavioral and fMRI evaluation

    Directory of Open Access Journals (Sweden)

    Josef J. Bless

    2014-07-01

    Full Text Available Emerging evidence of the validity of collecting data in natural settings using smartphone applications has opened new possibilities for psychological assessment, treatment, and research. In this study we explored the feasibility and effectiveness of using a mobile application for self-supervised training of auditory attention. In addition, we investigated the neural underpinnings of the training procedure with functional magnetic resonance imaging (fMRI), as well as possible transfer effects to untrained cognitive interference tasks. Subjects in the training group performed the training task on an iPod touch two times a day (morning/evening) for three weeks; subjects in the control group received no training, but were tested at the same time interval as the training group. Behavioral responses were measured before and after the training period in both groups, together with measures of task-related neural activations by fMRI. The results showed an expected performance increase after training that corresponded to activation decreases in brain regions associated with selective auditory processing (left posterior temporal gyrus) and executive functions (right middle frontal gyrus), indicating more efficient processing in task-related neural networks after training. Our study suggests that cognitive training delivered via mobile applications is feasible and improves the ability to focus attention with corresponding effects on neural plasticity. Future research should focus on the clinical benefits of mobile cognitive training. Limitations of the study are discussed including reduced experimental control and lack of transfer effects.

  6. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    Directory of Open Access Journals (Sweden)

    Mirna Lie Hosogi Senaha

    Full Text Available Selective disturbances of semantic memory have attracted the interest of many investigators and the question of the existence of single or multiple semantic systems remains a very controversial theme in the literature. Objectives: To discuss the question of multiple semantic systems based on a longitudinal study of a patient who presented semantic dementia from fluent primary progressive aphasia. Methods: A 66-year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three paired control individuals. Results: In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, preservation of the syntactic and phonological aspects of the language, besides surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe and the semantic disturbance, limited initially to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions: Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system.

  7. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Full Text Available Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  8. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Science.gov (United States)

    Fargier, Raphaël; Laganaro, Marina

    2016-01-01

    Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  9. Motor system contributions to verbal and non-verbal working memory

    Directory of Open Access Journals (Sweden)

    Diana A Liao

    2014-09-01

    Full Text Available Working memory (WM) involves the ability to maintain and manipulate information held in mind. Neuroimaging studies have shown that secondary motor areas activate during WM for verbal content (e.g., words or letters), in the absence of primary motor area activation. This activation pattern may reflect an inner speech mechanism supporting online phonological rehearsal. Here, we examined the causal relationship between motor system activity and WM processing by using transcranial magnetic stimulation (TMS) to manipulate motor system activity during WM rehearsal. We tested WM performance for verbalizable (words and pseudowords) and non-verbalizable (Chinese characters) visual information. We predicted that disruption of motor circuits would specifically affect WM processing of verbalizable information. We found that TMS targeting motor cortex slowed response times on verbal WM trials with high (pseudoword) vs. low (real word) phonological load. However, non-verbal WM trials were also significantly slowed with motor TMS. WM performance was unaffected by sham stimulation or TMS over visual cortex. Self-reported use of motor strategy predicted the degree of motor stimulation disruption on WM performance. These results provide evidence of the motor system’s contributions to verbal and non-verbal WM processing. We speculate that the motor system supports WM by creating motor traces consistent with the type of information being rehearsed during maintenance.

  10. [Non-verbal communication and executive function impairment after traumatic brain injury: a case report].

    Science.gov (United States)

    Sainson, C

    2007-05-01

    Following post-traumatic impairment in executive function, failure to adjust to communication situations often creates major obstacles to social and professional reintegration. The analysis of pathological verbal communication has been based on clinical scales since the 1980s, but that of nonverbal elements has been neglected, although their importance should be acknowledged. The aim of this research was to study non-verbal aspects of communication in a case of executive-function impairment after traumatic brain injury. During the patient's conversation with an interlocutor, all nonverbal parameters - coverbal gestures, gaze, posture, proxemics and facial expressions - were studied in as much an ecological way as possible, to closely approximate natural conversation conditions. Such an approach highlights the difficulties such patients experience in communicating, difficulties of a pragmatic kind, that have so far been overlooked by traditional investigations, which mainly take into account the formal linguistic aspects of language. The analysis of the patient's conversation revealed non-verbal dysfunctions, not only on a pragmatic and interactional level but also in terms of enunciation. Moreover, interactional adjustment phenomena were noted in the interlocutor's behaviour. The two inseparable aspects of communication - verbal and nonverbal - should be equally assessed in patients with communication difficulties; highlighting distortions in each area might bring about an improvement in the rehabilitation of such people.

  11. Speaking Two Languages Enhances an Auditory but Not a Visual Neural Marker of Cognitive Inhibition

    Directory of Open Access Journals (Sweden)

    Mercedes Fernandez

    2014-09-01

    Full Text Available The purpose of the present study was to replicate and extend our original findings of enhanced neural inhibitory control in bilinguals. We compared English monolinguals to Spanish/English bilinguals on a non-linguistic, auditory Go/NoGo task while recording event-related brain potentials. New to this study was the visual Go/NoGo task, which we included to investigate whether enhanced neural inhibition in bilinguals extends from the auditory to the visual modality. Results confirmed our original findings and revealed greater inhibition in bilinguals compared to monolinguals. As predicted, compared to monolinguals, bilinguals showed increased N2 amplitude during the auditory NoGo trials, which required inhibitory control, but no differences during the Go trials, which required a behavioral response and no inhibition. Interestingly, during the visual Go/NoGo task, event related brain potentials did not distinguish the two groups, and behavioral responses were similar between the groups regardless of task modality. Thus, only auditory trials that required inhibitory control revealed between-group differences indicative of greater neural inhibition in bilinguals. These results show that experience-dependent neural changes associated with bilingualism are specific to the auditory modality and that the N2 event-related brain potential is a sensitive marker of this plasticity.

  12. The Effect of Cognitive Control on Different Types of Auditory Distraction.

    Science.gov (United States)

    Bell, Raoul; Röer, Jan P; Marsh, John E; Storch, Dunja; Buchner, Axel

    2017-09-01

    Deviant as well as changing auditory distractors interfere with short-term memory. According to the duplex model of auditory distraction, the deviation effect is caused by a shift of attention while the changing-state effect is due to obligatory order processing. This theory predicts that foreknowledge should reduce the deviation effect, but should have no effect on the changing-state effect. We compared the effect of foreknowledge on the two phenomena directly within the same experiment. In a pilot study, specific foreknowledge was impotent in reducing either the changing-state effect or the deviation effect, but it reduced disruption by sentential speech, suggesting that the effects of foreknowledge on auditory distraction may increase with the complexity of the stimulus material. Given the unexpected nature of this finding, we tested whether the same finding would be obtained in (a) a direct preregistered replication in Germany and (b) an additional replication with translated stimulus materials in Sweden.

  13. Non-verbal communication of the residents living in homes for the older people in Slovenia.

    Science.gov (United States)

    Zaletel, Marija; Kovacev, Asja Nina; Sustersic, Olga; Kragelj, Lijana Zaletel

    2010-09-01

    Aging of the population is a growing problem in all developed societies. Older people need more health and social services, and their quality of life in care is becoming increasingly important. The study aimed at determining the characteristics of non-verbal communication of older people living in old people's homes (OPH). The sample consisted of 267 residents of the OPH, aged 65-96 years, and 267 caregivers from twenty-seven randomly selected OPH. Three types of non-verbal communication were observed and analysed using univariate and multivariate statistical methods. In facial expressions and head movements, about 75% of the older people looked into the eyes of their caregivers and about 60% looked around, while laughing or pressing the lips together was rarely noticed. The differences between genders were not statistically significant, whereas statistically significant differences among age groups were observed in dropping the eyes (p = 0.004) and smiling (p = 0.008). In hand gestures and trunk movements, the majority of older people most often moved forwards and clenched their fingers, and most rarely stroked or caressed their caregivers. The differences between genders were statistically significant in leaning on the table (p = 0.001) and changing position on the chair (p = 0.013); statistically significant differences among age groups were registered in leaning forwards (p = 0.006) and pointing to others (p = 0.036). In modes of speaking and paralinguistic signs, almost 75% of the older people spoke normally and about 70% kept silent, while they rarely quarrelled. The differences between genders were not statistically significant, whereas statistically significant differences among age groups were observed in persuasive speaking (p = 0.007). The present study showed that older people in OPH in Slovenia communicated significantly less frequently with hand gestures and trunk movements than with facial expressions and head movements or different modes of speaking

  14. Profiling Fragile X Syndrome in Males: Strengths and Weaknesses in Cognitive Abilities

    Science.gov (United States)

    Van der Molen, M. J. W.; Huizinga, M.; Huizenga, H. M.; Ridderinkhof, K. R.; Van der Molen, M. W.; Hamel, B. J. C.; Curfs, L. M. G.; Ramakers, G. J. A.

    2010-01-01

    The present study examined the cognitive profile in Fragile X Syndrome (FXS) males, and investigated whether cognitive profiles are similar for FXS males at different levels of intellectual functioning. Cognitive abilities in non-verbal, verbal, memory and executive functioning domains were contrasted to both a non-verbal and verbal mental age…

  15. Non-verbal emotion communication training induces specific changes in brain function and structure.

    Science.gov (United States)

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure.

  16. Intensive Auditory Cognitive Training Improves Verbal Memory in Adolescents and Young Adults at Clinical High Risk for Psychosis.

    Science.gov (United States)

    Loewy, Rachel; Fisher, Melissa; Schlosser, Danielle A; Biagianti, Bruno; Stuart, Barbara; Mathalon, Daniel H; Vinogradov, Sophia

    2016-07-01

    Individuals at clinical high risk (CHR) for psychosis demonstrate cognitive impairments that predict later psychotic transition and real-world functioning. Cognitive training has shown benefits in schizophrenia, but has not yet been adequately tested in the CHR population. In this double-blind randomized controlled trial, CHR individuals (N = 83) were given laptop computers and trained at home on 40 hours of auditory processing-based exercises designed to target verbal learning and memory operations, or on computer games (CG). Participants were assessed with neurocognitive tests based on the Measurement and Treatment Research to Improve Cognition in Schizophrenia initiative (MATRICS) battery and rated on symptoms and functioning. Groups were compared before and after training using a mixed-effects model with restricted maximum likelihood estimation, given the high study attrition rate (42%). Participants in the targeted cognitive training group showed a significant improvement in Verbal Memory compared to CG participants (effect size = 0.61). Positive and Total symptoms improved in both groups over time. CHR individuals showed patterns of training-induced cognitive improvement in verbal memory consistent with prior observations in schizophrenia. This is a particularly vulnerable domain in individuals at-risk for psychosis that predicts later functioning and psychotic transition. Ongoing follow-up of this cohort will assess the durability of training effects in CHR individuals, as well as the potential impact on symptoms and functioning over time. Clinical Trials Number: NCT00655239. URL: https://clinicaltrials.gov/ct2/show/NCT00655239?term=vinogradov&rank=5. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center 2016.
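
    The mixed-effects comparison described above (pre/post scores modelled with restricted maximum likelihood because of the 42% attrition) can be illustrated with a minimal sketch. The data, column names, and effect sizes below are invented for illustration and do not reproduce the study's dataset or analysis code.

        # Sketch of a pre/post mixed-effects comparison fitted with REML, the usual
        # choice when attrition leaves repeated measures unbalanced. Simulated data only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 83
        subject = np.repeat(np.arange(n), 2)
        visit = np.tile([0, 1], n)                        # 0 = baseline, 1 = post-training
        group = np.repeat(rng.integers(0, 2, n), 2)       # 0 = computer games, 1 = training
        score = (0.2 * visit + 0.5 * visit * group        # hypothetical training benefit
                 + np.repeat(rng.normal(0, 0.5, n), 2)    # subject-level random intercept
                 + rng.normal(0, 1, 2 * n))
        df = pd.DataFrame({"subject": subject, "visit": visit, "group": group, "score": score})

        # Drop a share of post-training rows to mimic attrition; REML tolerates the imbalance.
        df = df.drop(df[df.visit == 1].sample(frac=0.4, random_state=1).index)

        model = smf.mixedlm("score ~ visit * group", df, groups=df["subject"])
        print(model.fit(reml=True).summary())             # the visit:group term indexes the training effect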

  17. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  18. Contextual analysis of human non-verbal guide behaviors to inform the development of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    This paper reports the first step in a series of studies to design the interaction behaviors of an outdoor robotic guide. We describe and report the use case development carried out to identify effective human tour guide behaviors. In this paper we focus on non-verbal communication cues in gaze,

  19. Treating depressive symptoms in psychosis : A Network Meta-Analysis on the Effects of Non-Verbal Therapies

    NARCIS (Netherlands)

    Steenhuis, L. A.; Nauta, M. H.; Bockting, C. L. H.; Pijnenborg, G. H. M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  20. Treating depressive symptoms in psychosis : A network meta-analysis on the effects of non-verbal therapies

    NARCIS (Netherlands)

    Steenhuis, Laura A.; Nauta, Maaike H.; Bocking, Claudi L.H.; Pijnenborg, Gerdina H.M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  1. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    Science.gov (United States)

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  2. The Efficiency of Peer Teaching of Developing Non Verbal Communication to Children with Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    Alshurman, Wael; Alsreaa, Ihsani

    2015-01-01

    This study aimed at identifying the efficiency of peer teaching in developing non-verbal communication to children with autism spectrum disorder (ASD). The study was carried out on a sample of 10 children with ASD, diagnosed according to the basics and criteria adopted at the Al-taif qualification center in 2013 in The…

  3. Measuring Verbal and Non-Verbal Communication in Aphasia: Reliability, Validity, and Sensitivity to Change of the Scenario Test

    Science.gov (United States)

    van der Meulen, Ineke; van de Sandt-Koenderman, W. Mieke E.; Duivenvoorden, Hugo J.; Ribbers, Gerard M.

    2010-01-01

    Background: This study explores the psychometric qualities of the Scenario Test, a new test to assess daily-life communication in severe aphasia. The test is innovative in that it: (1) examines the effectiveness of verbal and non-verbal communication; and (2) assesses patients' communication in an interactive setting, with a supportive…

  4. Seizure-related factors and non-verbal intelligence in children with epilepsy. A population-based study from Western Norway.

    Science.gov (United States)

    Høie, B; Mykletun, A; Sommerfelt, K; Bjørnaes, H; Skeidsvoll, H; Waaler, P E

    2005-06-01

    To study the relationship between seizure-related factors, non-verbal intelligence, and socio-economic status (SES) in a population-based sample of children with epilepsy. The latest ILAE International classifications of epileptic seizures and syndromes were used to classify seizure types and epileptic syndromes in all 6-12 year old children (N=198) with epilepsy in Hordaland County, Norway. The children had neuropediatric and EEG examinations. Of the 198 patients, demographic characteristics were collected on 183 who participated in psychological studies including Raven matrices. 126 healthy controls underwent the same testing. Severe non-verbal problems (SNVP) were defined as a Raven score at or below a low percentile cut-off; children with epilepsy were over-represented in the lowest Raven percentile group, whereas controls were highly over-represented in the higher percentile groups. SNVP were present in 43% of children with epilepsy and 3% of controls. These problems were especially common in children with remote symptomatic epilepsy aetiology, undetermined epilepsy syndromes, myoclonic seizures, early seizure debut, high seizure frequency and in children with polytherapy. Seizure-related characteristics that were not usually associated with SNVP were idiopathic epilepsies, localization related (LR) cryptogenic epilepsies, absence and simple partial seizures, and a late debut of epilepsy. Adjusting for socio-economic status factors did not significantly change results. In childhood epilepsy various seizure-related factors, but not SES factors, were associated with the presence or absence of SNVP. Such deficits may be especially common in children with remote symptomatic epilepsy aetiology and in complex and therapy resistant epilepsies. Low frequencies of SNVP may be found in children with idiopathic and LR cryptogenic epilepsy syndromes, simple partial or absence seizures and a late epilepsy debut. Our study contributes to an overall picture of cognitive function and its relation to central seizure characteristics in a childhood epilepsy population

  5. Verbal and non-verbal behaviour and patient perception of communication in primary care: an observational study.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Gashi, Shkelzen; Bikker, Annemieke; Mercer, Stewart

    2015-06-01

    Few studies have assessed the importance of a broad range of verbal and non-verbal consultation behaviours. To explore the relationship of observer ratings of behaviours of videotaped consultations with patients' perceptions. Observational study in general practices close to Southampton, Southern England. Verbal and non-verbal behaviour was rated by independent observers blind to outcome. Patients completed the Medical Interview Satisfaction Scale (MISS; primary outcome) and questionnaires addressing other communication domains. In total, 275/360 consultations from 25 GPs had usable videotapes. Higher MISS scores were associated with slight forward lean (a 0.02 increase for each degree of lean, 95% confidence interval [CI] = 0.002 to 0.03), the number of gestures (0.08, 95% CI = 0.01 to 0.15), 'back-channelling' (for example, saying 'mmm') (0.11, 95% CI = 0.02 to 0.2), and social talk (0.29, 95% CI = 0.04 to 0.54). Starting the consultation with professional coolness ('aloof') was helpful, whereas starting with optimism was unhelpful. Finishing with non-verbal 'cut-offs' (for example, looking away), being professionally cool ('aloof'), or patronising ('infantilising') resulted in poorer ratings. Physical contact was also important, but not traditional verbal communication. These exploratory results require confirmation, but suggest that patients may be responding to several non-verbal behaviours and non-specific verbal behaviours, such as social talk and back-channelling, more than traditional verbal behaviours. A changing consultation dynamic may also help, from professional 'coolness' at the beginning of the consultation to becoming warmer and avoiding non-verbal cut-offs at the end. © British Journal of General Practice 2015.

  6. School effects on non-verbal intelligence and nutritional status in rural Zambia.

    Science.gov (United States)

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E; Grigorenko, Elena L

    2016-02-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and 75.9% of the between-school differences in NI and BMI, respectively. Grade-specific HLM yielded higher between-school variation of NI (41%) and BMI (14.6%) for students in grade 3 compared to grades 4 to 7. School factors showed a differential pattern of associations with NI and BMI across grades. The distance to a health post and teacher's teaching experience were the strongest predictors of NI (particularly in grades 4, 6 and 7); the presence of a preschool was linked to lower BMI in grades 4 to 6. Implications for improving access and quality of education in rural Zambia are discussed.
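
    As a rough illustration of how a between-school share of variance such as the 23.5% (NI) and 7.7% (BMI) reported above is typically obtained, the sketch below fits an intercept-only multilevel model and computes the intraclass correlation. The data and column names are invented stand-ins for the study's variables.

        # Minimal sketch: between-school variance share (ICC) from an intercept-only
        # hierarchical linear model, on simulated pupil scores.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n_schools, pupils = 30, 40
        school = np.repeat(np.arange(n_schools), pupils)
        school_effect = np.repeat(rng.normal(0, 1.0, n_schools), pupils)
        ni = 50 + school_effect + rng.normal(0, 2.0, n_schools * pupils)
        df = pd.DataFrame({"school": school, "ni": ni})

        fit = smf.mixedlm("ni ~ 1", df, groups=df["school"]).fit(reml=True)
        between = float(fit.cov_re.iloc[0, 0])   # school-level variance
        within = float(fit.scale)                # pupil-level residual variance
        print(f"between-school share of variance (ICC): {between / (between + within):.1%}")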

  7. Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes.

    Science.gov (United States)

    Stoycheva, Polina; Tiippana, Kaisa

    2018-03-14

    The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, owing to contralateral innervation, this functional lateralization is reflected in a hand advantage during certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand-right hemisphere.
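
    The discriminability index d' used above is a standard signal-detection measure. A minimal sketch of the usual computation from hit and false-alarm rates follows; the counts and the correction used are illustrative assumptions, not taken from the study.

        # Minimal sketch: d' = z(hit rate) - z(false-alarm rate), with a simple
        # correction so proportions of 0 or 1 do not produce infinite z-scores.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)   # log-linear style correction
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts for one participant at one retention interval.
        print(round(d_prime(hits=38, misses=10, false_alarms=7, correct_rejections=41), 2))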

  8. Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.

    Science.gov (United States)

    Gillis, Randall L; Nilsen, Elizabeth S

    2017-06-01

    Listeners are exposed to inconsistencies in communication; for example, when speakers' words (i.e. verbal) are discrepant with their demonstrated emotions (i.e. non-verbal). Such inconsistencies introduce ambiguity, which may render a speaker to be a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

  9. Memory and comprehension deficits in spatial descriptions of children with non-verbal and reading disabilities.

    Science.gov (United States)

    Mammarella, Irene C; Meneghetti, Chiara; Pazzaglia, Francesca; Cornoldi, Cesare

    2014-01-01

    The present study investigated the difficulties encountered by children with non-verbal learning disability (NLD) and reading disability (RD) when processing spatial information derived from descriptions, based on the assumption that both groups should find it more difficult than matched controls, but for different reasons, i.e., due to a memory encoding difficulty in cases of RD and to spatial information comprehension problems in cases of NLD. Spatial descriptions from both survey and route perspectives were presented to 9-12-year-old children divided into three groups: NLD (N = 12); RD (N = 12), and typically developing controls (TD; N = 15); then participants completed a sentence verification task and a memory for locations task. The sentence verification task was presented in two conditions: in one the children could refer to the text while answering the questions (i.e., text present condition), and in the other the text was withdrawn (i.e., text absent condition). Results showed that the RD group benefited from the text present condition, but was impaired to the same extent as the NLD group in the text absent condition, suggesting that the NLD children's difficulty is due mainly to their poor comprehension of spatial descriptions, while the RD children's difficulty is due more to a memory encoding problem. These results are discussed in terms of their implications in the neuropsychological profiles of children with NLD or RD, and the processes involved in spatial descriptions.

  10. Relationship of Non-Verbal Intelligence Materials as Catalyst for Academic Achievement and Peaceful Co-Existence among Secondary School Students in Nigeria

    Science.gov (United States)

    Sambo, Aminu

    2015-01-01

    This paper examines students' performance in non-verbal intelligence tests relative to the academic achievement of some selected secondary school students. Two hypotheses were formulated with a view to generating data for the ease of analyses. Two non-verbal intelligence tests, viz. Raven's Standard Progressive Matrices (SPM) and AH4 Part II…

  11. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    OpenAIRE

    Anna S. Kobysheva; Viktoria A. Nakaeva

    2017-01-01

    The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  12. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    Directory of Open Access Journals (Sweden)

    Anna S. Kobysheva

    2017-12-01

    Full Text Available The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  13. Establishing the norm of Cognitive Adaptive Test/Clinical Linguistic and Auditory Milestone Scale (CAT/CLAMS) in Chinese infants.

    Science.gov (United States)

    Chang, Y C; Huang, C C; Hu, S C

    1998-01-01

    The Cognitive Adaptive Test/Clinical Linguistic and Auditory Milestone Scale (CAT/CLAMS) has been recommended as a useful diagnostic tool for cognitive delay. To provide wider application of this instrument as a general screening tool for pediatricians, a normative percentile graph from a large convenience sample of Chinese infants has been established. The effects of environmental factors on early language and adaptive development were also analyzed. A total of 402 normal infants aged 4 to 36 months attending well-child clinics were recruited. These infants were from all socioeconomic strata, and half were bilingual with Mandarin and Taiwanese. Grandmothers were the chief caretakers or co-caretakers in 28% of this population. The milestones were attained in a sequential and orderly fashion. In contrast to adaptive ability, there was a wide variation in language acquisition between age 12 and 24 months. Multiple stepwise regression of demographic and environmental factors revealed that age was the main source of variance in CAT score (P = 0.0001). For the CLAMS score, however, age and caretaker were the significant predictors (P = 0.0001). Infants cared for by both mothers and grandmothers scored higher on the CLAMS, by about two months, than those cared for by mothers only (P = 0.001). Those cared for by grandmothers only had lower language scores than those cared for by mothers only, though without statistical significance (P = 0.05). Bilingualism, birth order, numbers of siblings, familial structures, and parental socioeconomic status had no effect on early development.

  14. Cognitive Behavioral Therapy Compared with Non-specialized Therapy for Alleviating the Effect of Auditory Hallucinations in People with Reoccurring Schizophrenia: A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Kennedy, Laura; Xyrichis, Andreas

    2017-02-01

    Cognitive behavioral therapy (CBT) is recommended as a psychological intervention for those diagnosed with schizophrenia. The prevalence of auditory hallucinations is high among this group, many of whom are cared for by community mental health teams that may not have easy access to qualified CBT practitioners. This systematic review examined the evidence for the superiority of CBT compared to non-specialized therapy in alleviating auditory hallucinations in community patients with schizophrenia. Two RCTs met the inclusion criteria totaling 105 participants. The Positive and Negative Syndrome Scale (PANSS)-Positive Scale was the outcome measure examined. A meta-analysis revealed a pooled mean difference of -0.86 [95 % CI -2.38, 0.65] in favor of CBT, although this did not reach statistical significance. This systematic review concluded there is no clinically significant difference in the reduction of positive symptoms of schizophrenia when treated by CBT compared to a non-specialized therapy for adults experiencing auditory hallucinations.
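
    A pooled mean difference such as the -0.86 [95% CI -2.38, 0.65] reported above is conventionally obtained by inverse-variance weighting of the trial-level mean differences. The sketch below shows that computation on two invented trials; the numbers do not reproduce the review's data.

        # Minimal sketch: fixed-effect inverse-variance pooling of mean differences.
        import math

        trials = [(-1.4, 1.1), (-0.5, 0.9)]            # invented (mean difference, standard error) pairs
        weights = [1.0 / se ** 2 for _, se in trials]  # inverse-variance weights
        pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        print(f"pooled MD = {pooled:.2f}, 95% CI [{pooled - 1.96 * se_pooled:.2f}, {pooled + 1.96 * se_pooled:.2f}]")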

  15. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    Science.gov (United States)

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.

  16. Non-Verbal Psychotherapy of Deaf Children with Disorders in Personality Development.

    Science.gov (United States)

    Zalewska, Marina

    1989-01-01

    Discussed are principles of nonverbal therapy for deaf children with disorders in the development of self, and the possible existence of a relationship between lack of auditory experiences in deaf children and disorders in mother-child bonding. A case study presents a three-year-old deaf boy successfully treated through a nonverbal…

  17. Age-related changes in auditory and cognitive abilities in elderly persons with hearing aids fitted at the initial stages of hearing loss

    Directory of Open Access Journals (Sweden)

    C. Obuchi

    2011-03-01

    Full Text Available In this study, we investigated the relation between the use of hearing aids at the initial stages of hearing loss and age-related changes in the auditory and cognitive abilities of elderly persons. Twelve healthy elderly persons participated in an annual auditory and cognitive longitudinal examination for three years. According to their hearing level, they were divided into 3 subgroups - the normal hearing group, the hearing loss without hearing aids group, and the hearing loss with hearing aids group. All the subjects underwent 4 tests: pure-tone audiometry, syllable intelligibility test, dichotic listening test (DLT), and Wechsler Adult Intelligence Scale-Revised (WAIS-R) Short Forms. Comparison between the 3 groups revealed that the hearing loss without hearing aids group showed the lowest scores for the performance tasks, in contrast to the hearing level and intelligibility results. The other groups showed no significant difference in the WAIS-R subtests. This result indicates that prescription of a hearing aid during the early stages of hearing loss is related to the retention of cognitive abilities in such elderly people. However, there were no statistically significant correlations between the auditory and cognitive tasks.

  18. Incongruence between Verbal and Non-Verbal Information Enhances the Late Positive Potential.

    Science.gov (United States)

    Morioka, Shu; Osumi, Michihiro; Shiotani, Mayu; Nobusako, Satoshi; Maeoka, Hiroshi; Okada, Yohei; Hiyamizu, Makoto; Matsuo, Atsushi

    2016-01-01

    Smooth social communication consists of both verbal and non-verbal information. However, when verbal and nonverbal information are incongruent, it remains unclear how individuals judge the trustworthiness of those who present the verbal-nonverbal incongruence and which brain activities accompany that judgment. In the present study, we attempted to identify the impact of incongruencies between verbal information and facial expression on the value of trustworthiness and brain activity using event-related potentials (ERP). Combinations of verbal information [positive/negative] and facial expressions [smile/angry] were presented randomly on a computer screen to 17 healthy volunteers. The value of trustworthiness of the presented facial expression was evaluated by the amount of donation offered by the observer to the person depicted on the computer screen. In addition, the time required to judge the value of trustworthiness was recorded for each trial. Using electroencephalography, ERP were obtained by averaging the wave patterns recorded while the participants judged the value of trustworthiness. The amount of donation offered was significantly lower when the verbal information and facial expression were incongruent, particularly for [negative × smile]. The amplitude of the early posterior negativity (EPN) at the temporal lobe did not differ significantly across conditions. However, the amplitude of the late positive potential (LPP) at the parietal electrodes for the incongruent condition [negative × smile] was higher than that for the congruent condition [positive × smile]. These results suggest that the LPP amplitude observed from the parietal cortex is involved in the processing of incongruence between verbal information and facial expression.

  19. Non-verbal communication between nurses and people with an intellectual disability: a review of the literature.

    Science.gov (United States)

    Martin, Anne-Marie; O'Connor-Fenelon, Maureen; Lyons, Rosemary

    2010-12-01

    This article critically synthesizes current literature regarding communication between nurses and people with an intellectual disability who communicate non-verbally. The unique context of communication between the intellectual disability nurse and people with intellectual disability and the review aims and strategies are outlined. Communication as a concept is explored in depth. Communication between the intellectual disability nurse and the person with an intellectual disability is then comprehensively examined in light of existing literature. Issues including knowledge of the person with intellectual disability, mismatch of communication ability, and knowledge of communication arose as predominant themes. A critical review of the importance of communication in nursing practice follows. The paucity of literature relating to intellectual disability nursing and non-verbal communication clearly indicates a need for research.

  20. Investigating the Impact of Hearing Aid Use and Auditory Training on Cognition, Depressive Symptoms, and Social Interaction in Adults With Hearing Loss: Protocol for a Crossover Trial.

    Science.gov (United States)

    Nkyekyer, Joanna; Meyer, Denny; Blamey, Peter J; Pipingas, Andrew; Bhar, Sunil

    2018-03-23

    Sensorineural hearing loss is the most common sensory deficit among older adults. Some of the psychosocial consequences of this condition include difficulty in understanding speech, depression, and social isolation. Studies have shown that older adults with hearing loss show some age-related cognitive decline. Hearing aids have been proven as successful interventions to alleviate sensorineural hearing loss. In addition to hearing aid use, the positive effects of auditory training-formal listening activities designed to optimize speech perception-are now being documented among adults with hearing loss who use hearing aids, especially new hearing aid users. Auditory training has also been shown to produce prolonged cognitive performance improvements. However, there is still little evidence to support the benefits of simultaneous hearing aid use and individualized face-to-face auditory training on cognitive performance in adults with hearing loss. This study will investigate whether using hearing aids for the first time will improve the impact of individualized face-to-face auditory training on cognition, depression, and social interaction for adults with sensorineural hearing loss. The rationale for this study is based on the hypothesis that, in adults with sensorineural hearing loss, using hearing aids for the first time in combination with individualized face-to-face auditory training will be more effective for improving cognition, depressive symptoms, and social interaction rather than auditory training on its own. This is a crossover trial targeting 40 men and women between 50 and 90 years of age with either mild or moderate symmetric sensorineural hearing loss. Consented, willing participants will be recruited from either an independent living accommodation or via a community database to undergo a 6-month intensive face-to-face auditory training program (active control). Participants will be assigned in random order to receive hearing aid (intervention) for

  1. Investigating the Impact of Hearing Aid Use and Auditory Training on Cognition, Depressive Symptoms, and Social Interaction in Adults With Hearing Loss: Protocol for a Crossover Trial

    Science.gov (United States)

    Meyer, Denny; Blamey, Peter J; Pipingas, Andrew; Bhar, Sunil

    2018-01-01

    Background Sensorineural hearing loss is the most common sensory deficit among older adults. Some of the psychosocial consequences of this condition include difficulty in understanding speech, depression, and social isolation. Studies have shown that older adults with hearing loss show some age-related cognitive decline. Hearing aids have been proven as successful interventions to alleviate sensorineural hearing loss. In addition to hearing aid use, the positive effects of auditory training—formal listening activities designed to optimize speech perception—are now being documented among adults with hearing loss who use hearing aids, especially new hearing aid users. Auditory training has also been shown to produce prolonged cognitive performance improvements. However, there is still little evidence to support the benefits of simultaneous hearing aid use and individualized face-to-face auditory training on cognitive performance in adults with hearing loss. Objective This study will investigate whether using hearing aids for the first time will improve the impact of individualized face-to-face auditory training on cognition, depression, and social interaction for adults with sensorineural hearing loss. The rationale for this study is based on the hypothesis that, in adults with sensorineural hearing loss, using hearing aids for the first time in combination with individualized face-to-face auditory training will be more effective for improving cognition, depressive symptoms, and social interaction rather than auditory training on its own. Methods This is a crossover trial targeting 40 men and women between 50 and 90 years of age with either mild or moderate symmetric sensorineural hearing loss. Consented, willing participants will be recruited from either an independent living accommodation or via a community database to undergo a 6-month intensive face-to-face auditory training program (active control). Participants will be assigned in random order to receive

  2. Randomised controlled trial of a brief intervention targeting predominantly non-verbal communication in general practice consultations.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Mercer, Stewart

    2015-06-01

    The impact of changing non-verbal consultation behaviours is unknown. To assess brief physician training on improving predominantly non-verbal communication. Cluster randomised parallel group trial among adults aged ≥16 years attending general practices close to the study coordinating centres in Southampton. Sixteen GPs were randomised to no training, or training consisting of a brief presentation of behaviours identified from a prior study (acronym KEPe Warm: demonstrating Knowledge of the patient; Encouraging [back-channelling by saying 'hmm', for example]; Physically engaging [touch, gestures, slight lean]; Warm-up: cool/professional initially, warming up, avoiding distancing or non-verbal cut-offs at the end of the consultation); and encouragement to reflect on videos of their consultation. Outcomes were the Medical Interview Satisfaction Scale (MISS) mean item score (1-7) and patients' perceptions of other domains of communication. Intervention participants scored higher MISS overall (0.23, 95% confidence interval [CI] = 0.06 to 0.41), with the largest changes in the distress-relief and perceived relationship subscales. Significant improvement occurred in perceived communication/partnership (0.29, 95% CI = 0.09 to 0.49) and health promotion (0.26, 95% CI = 0.05 to 0.46). Non-significant improvements occurred in perceptions of a personal relationship, a positive approach, and understanding the effects of the illness on life. Brief training of GPs in predominantly non-verbal communication in the consultation and reflection on consultation videotapes improves patients' perceptions of satisfaction, distress, a partnership approach, and health promotion. © British Journal of General Practice 2015.

  3. Cognitive deficits in unipolar depression during remission-Auditory Verbal Learning test findings

    Czech Academy of Sciences Publication Activity Database

    Preiss, M.; Kučerová, H.; Štěpánková, H.; Sos, P.; Lukavský, Jiří; Kawaciuková, R.

    2007-01-01

    Vol. 11, Suppl. 3 (2007), pp. 79-83 ISSN 1211-7579 Institutional research plan: CEZ:AV0Z70250504 Keywords: major depressive episode * remission * cognitive function Subject RIV: AN - Psychology http://www.tigis.cz/PSYCHIAT/Psych_suppl_3_07/21Preiss_suppl_3_07.pdf

  4. Maternal postpartum depressive symptoms predict delay in non-verbal communication in 14-month-old infants.

    Science.gov (United States)

    Kawai, Emiko; Takagai, Shu; Takei, Nori; Itoh, Hiroaki; Kanayama, Naohiro; Tsuchiya, Kenji J

    2017-02-01

    We investigated the potential relationship between maternal depressive symptoms during the postpartum period and non-verbal communication skills of infants at 14 months of age in a birth cohort study of 951 infants and assessed what factors may influence this association. Maternal depressive symptoms were measured using the Edinburgh Postnatal Depression Scale, and non-verbal communication skills were measured using the MacArthur-Bates Communicative Development Inventories, which include Early Gestures and Later Gestures domains. Infants whose mothers had a high level of depressive symptoms (13+ points) during both the first month postpartum and at 10 weeks were approximately 0.5 standard deviations below normal in Early Gestures scores and 0.5-0.7 standard deviations below normal in Later Gestures scores. These associations were independent of potential explanations, such as maternal depression/anxiety prior to birth, breastfeeding practices, and recent depressive symptoms among mothers. These findings indicate that infants whose mothers have postpartum depressive symptoms may be at increased risk of experiencing delay in non-verbal development. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
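
    The "criterion-free, adaptive measures of threshold" mentioned above usually refer to transformed up-down staircases. The sketch below implements a generic 2-down/1-up staircase (which converges near the 70.7%-correct point) against a simulated listener; the stimulus dimension, step size and listener model are all invented for illustration and are not the study's procedure.

        # Minimal sketch: 2-down/1-up adaptive staircase for a criterion-free threshold
        # estimate. The simulated listener and all parameter values are illustrative only.
        import numpy as np

        rng = np.random.default_rng(5)
        midpoint = 4.0              # level at which the simulated listener is 50% correct
        level, step = 16.0, 2.0     # starting stimulus level and fixed step size
        run_correct, reversals, reversal_levels, going_down = 0, 0, [], True

        while reversals < 8:
            correct = rng.random() < 1 / (1 + np.exp(-(level - midpoint)))  # simulated response
            if correct:
                run_correct += 1
                if run_correct == 2:        # two correct in a row -> make the task harder
                    run_correct = 0
                    if not going_down:
                        reversals += 1
                        reversal_levels.append(level)
                        going_down = True
                    level -= step
            else:                           # one error -> make the task easier
                run_correct = 0
                if going_down:
                    reversals += 1
                    reversal_levels.append(level)
                    going_down = False
                level += step

        print("threshold estimate:", round(float(np.mean(reversal_levels[-4:])), 2))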

  6. Effects of cognitive-motor dual-task training combined with auditory motor synchronization training on cognitive functioning in individuals with chronic stroke: A pilot randomized controlled trial.

    Science.gov (United States)

    Park, Myoung-Ok; Lee, Sang-Heon

    2018-06-01

    Preservation and enhancement of cognitive function are essential for the restoration of functional abilities and independence following stroke. While cognitive-motor dual-task training (CMDT) has been utilized in rehabilitation settings, many patients with stroke experience impairments in cognitive function that can interfere with dual-task performance. In the present study, we investigated the effects of CMDT combined with auditory motor synchronization training (AMST) utilizing rhythmic cues on cognitive function in patients with stroke. The present randomized controlled trial was conducted at a single rehabilitation hospital. Thirty patients with chronic stroke were randomly divided into an experimental group (n = 15) and a control group (n = 15). The experimental group received 3 CMDT + AMST sessions per week for 6 weeks, whereas the control group received CMDT only 3 times per week for 6 weeks. Changes in cognitive function were evaluated using the trail making test (TMT), digit span test (DST), and stroop test (ST). Significant differences in TMT-A and B (P = .001, P = .001), DST-forward (P = .001, P = .001), DST-backward (P = .000, P = .001), ST-word (P = .001, P = .001), and ST-color (P = .002, P = .001) scores were observed in both the control and experimental groups, respectively. Significant differences in TMT-A (P = .001), DST-forward (P = .027), DST-backward (P = .002), and ST-word (P = .025) scores were observed between the 2 groups. Performance speed on the TMT-A was faster in the CMDT + AMST group than in the CMDT group. Moreover, DST-forward and DST-backward scores were higher in the CMDT + AMST group than in the CMDT group. Although ST-color results were similar in the 2 groups, ST-word scores were higher in the CMDT + AMST group than in the CMDT group. This finding indicates that the combined therapy of CMDT and AMST can be used to increase attention, memory, and executive

  7. GF-GC Theory of Human Cognition: Differentiation of Short-Term Auditory and Visual Memory Factors.

    Science.gov (United States)

    McGhee, Ron; Lieberman, Lewis

    1994-01-01

    Study sought to determine whether separate short-term auditory and visual memory factors would emerge given a sufficient number of markers in a factor matrix. A principal component factor analysis with varimax rotation was performed. Short-term visual and short-term auditory memory factors emerged as expected. (RJM)
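
    The principal-component factor analysis with varimax rotation described above follows a standard recipe. The sketch below uses scikit-learn's FactorAnalysis with a varimax rotation (available from scikit-learn 0.24) as a stand-in for the study's principal-component extraction, applied to simulated marker scores with invented names.

        # Minimal sketch: two-factor extraction with varimax rotation on simulated
        # short-term memory markers (three auditory, three visual). Simulated data only.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(7)
        n = 200
        auditory = rng.normal(size=(n, 1))    # latent short-term auditory memory
        visual = rng.normal(size=(n, 1))      # latent short-term visual memory
        markers = np.hstack([
            auditory + 0.3 * rng.normal(size=(n, 3)),   # three auditory markers
            visual + 0.3 * rng.normal(size=(n, 3)),     # three visual markers
        ])

        fa = FactorAnalysis(n_components=2, rotation="varimax").fit(markers)
        print(np.round(fa.components_, 2))    # rows = factors, columns = the six markers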

  8. Auditory and Cognitive Factors Underlying Individual Differences in Aided Speech-Understanding among Older Adults

    Directory of Open Access Journals (Sweden)

    Larry E. Humes

    2013-10-01

    Full Text Available This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech-understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance.
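
    The final step described above, predicting a global aided speech-understanding factor from the reduced set of predictors and reading off the share of variance explained, is ordinary multiple regression. A minimal sketch follows; the simulated factor scores and coefficients are invented stand-ins for the study's variables.

        # Minimal sketch: multiple regression of a speech-understanding factor score on a
        # few predictor factors, reporting R^2 (share of variance accounted for). Simulated data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 98
        predictors = rng.normal(size=(n, 4))          # e.g. cognition, hearing loss, age, TRT (illustrative)
        true_beta = np.array([0.5, -0.4, -0.2, 0.3])
        speech_factor = predictors @ true_beta + rng.normal(scale=0.7, size=n)

        fit = sm.OLS(speech_factor, sm.add_constant(predictors)).fit()
        print(f"variance accounted for: R^2 = {fit.rsquared:.2f}")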

  9. Peculiarities of Stereotypes about Non-Verbal Communication and their Role in Cross-Cultural Interaction between Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2012-12-01

    Full Text Available The article is devoted to the analysis of the peculiarities of the stereotypes about non-verbal communication, formed in Russian and Chinese cultures. The results of the experimental research of the role of ethnic auto- and heterostereotypes about non-verbal communication in cross-cultural interaction between Russian and Chinese students of the Peoples’ Friendship University of Russia are presented.

  10. Prevalence of inter-hemispheric asymetry in children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder.

    Science.gov (United States)

    Wajnsztejn, Alessandra Bernardes Caturani; Bianco, Bianca; Barbosa, Caio Parente

    2016-01-01

    To describe clinical and epidemiological features of children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder and to investigate the prevalence of inter-hemispheric asymmetry in this population group. Cross-sectional study including children and adolescents referred for interdisciplinary assessment with learning difficulty complaints, who were given an interdisciplinary diagnosis of non-verbal learning disorder. The following variables were included in the analysis: sex-related prevalence, educational system, initial presumptive diagnoses and respective prevalence, overall non-verbal learning disorder prevalence, prevalence according to school year, age range at the time of assessment, major family complaints, presence of inter-hemispheric asymmetry, arithmetic deficits, visuoconstruction impairments and major signs and symptoms of non-verbal learning disorder. Out of 810 medical records analyzed, 14 were from individuals who met the diagnostic criteria for non-verbal learning disorder, including the presence of inter-hemispheric asymmetry. Of these 14 patients, 8 were male. The high prevalence of inter-hemispheric asymmetry suggests this parameter can be used to predict or support the diagnosis of non-verbal learning disorder.

  11. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To assess auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and Staggered Spondaic Words Test - SSW) were applied in 13 children (7 boys), aged 7 to 16 years, with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: The attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification at the ear opposite the lesion in the free recall stage was diminished and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and a difficulty in focusing in the attended stages were found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  12. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    Science.gov (United States)

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified the underlying cortical activities. Event-related potentials triggered by the conditioned neutral faces and originating from the posterior temporal lobe changed in a statistically significant way during late face processing (600-700 ms after stimulus), rather than in early face processing activities such as the P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions.
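
    The abstract mentions a support-vector-machine classifier applied to EEG responses. As a rough, hypothetical illustration of that kind of pipeline (synthetic data, an assumed feature extraction over the 600-700 ms window reported above, and the scikit-learn API), one might write:

# Hypothetical sketch of SVM-based single-trial ERP classification, in the spirit
# of the approach described above. Data are synthetic; the window choice follows
# the abstract but the feature extraction is an assumption.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 120, 32, 700   # 1 ms sampling, 0-700 ms
X_epochs = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)            # 0 = neutral, 1 = conditioned face

# Add a small late (600-700 ms) offset to "conditioned" trials so there is signal.
X_epochs[y == 1, :, 600:700] += 0.3

# Feature: mean amplitude per channel in the late window.
X = X_epochs[:, :, 600:700].mean(axis=2)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean().round(2))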

  13. Non-verbal Full Body Emotional and Social Interaction: A Case Study on Multimedia Systems for Active Music Listening

    Science.gov (United States)

    Camurri, Antonio

    Research on HCI and multimedia systems for art and entertainment based on non-verbal, full-body, emotional and social interaction is the main topic of this paper. A short review of previous research projects in this area at our centre is presented, to introduce the main issues discussed in the paper. In particular, a case study based on novel paradigms of social active music listening is presented. The active music listening experience enables users to dynamically mould expressive performance of music and of audiovisual content. This research is partially supported by the 7FP EU-ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu).

  14. Non-Verbal Communication Models in Sports and Ballet

    Directory of Open Access Journals (Sweden)

    Gloria Vallejo

    2010-12-01

    Full Text Available This study analyzes the communication model generated among professional soccer and artistic gymnastics trainers and folkloric ballet instructors, taking as reference the dynamic body language typical of the specialized communication of athletes and dancers, in which non-verbal language is evident. Non-verbal language was studied in both psychomotor and sociomotor practices in order to identify and characterize relations between different concepts and their corresponding gestural representation. This made it possible to generate a communication model that takes into account the non-verbal aspects of specialized communicative contexts. The results indicate that the non-verbal language of trainers and instructors occasionally takes the place of verbal language when the latter proves insufficient or inappropriate for describing a highly precise motor action, owing to distance or acoustic interference. Among the ballet instructors, a widespread way of directing rehearsals through rhythmic counting with hand claps or foot taps was found. The paralinguistic components of the various speech acts also stand out, especially with regard to intonation, duration and intensity.

  15. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication.

    Science.gov (United States)

    Symons, Ashley E; El-Deredy, Wael; Schwartze, Michael; Kotz, Sonja A

    2016-01-01

    Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiples

  16. Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and canadian listeners.

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracy was observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions.

  17. Cross-Cultural Differences in the Processing of Non-Verbal Affective Vocalizations by Japanese and Canadian Listeners

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracy was observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions. PMID:23516137

  18. Condom use: exploring verbal and non-verbal communication strategies among Latino and African American men and women.

    Science.gov (United States)

    Zukoski, Ann P; Harvey, S Marie; Branch, Meredith

    2009-08-01

    A growing body of literature provides evidence of a link between communication with sexual partners and safer sexual practices, including condom use. More research is needed that explores the dynamics of condom communication including gender differences in initiation, and types of communication strategies. The overall objective of this study was to explore condom use and the dynamics surrounding condom communication in two distinct community-based samples of African American and Latino heterosexual couples at increased risk for HIV. Based on 122 in-depth interviews, 80% of women and 74% of men reported ever using a condom with their primary partner. Of those who reported ever using a condom with their current partner, the majority indicated that condom use was initiated jointly by men and women. In addition, about one-third of the participants reported that the female partner took the lead and let her male partner know she wanted to use a condom. A sixth of the sample reported that men initiated use. Although over half of the respondents used bilateral verbal strategies (reminding, asking and persuading) to initiate condom use, one-fourth used unilateral verbal strategies (commanding and threatening to withhold sex). A smaller number reported using non-verbal strategies involving condoms themselves (e.g. putting a condom on or getting condoms). The results suggest that interventions designed to improve condom use may need to include both members of a sexual dyad and focus on improving verbal and non-verbal communication skills of individuals and couples.

  19. Effect of the cognitive-motor dual-task using auditory cue on balance of survivors with chronic stroke: a pilot study.

    Science.gov (United States)

    Choi, Wonjae; Lee, GyuChang; Lee, Seungwon

    2015-08-01

    To investigate the effect of a cognitive-motor dual-task using auditory cues on the balance of patients with chronic stroke. Randomized controlled trial. Inpatient rehabilitation center. Thirty-seven individuals with chronic stroke. The participants were randomly allocated to the dual-task group (n=19) and the single-task group (n=18). The dual-task group performed a cognitive-motor dual-task in which they carried a circular ring from side to side according to a random auditory cue during treadmill walking. The single-task group walked on a treadmill only. All subjects completed 15 min per session, three times per week, for four weeks, with conventional rehabilitation five times per week over the four weeks. Before and after the intervention, both static and dynamic balance were measured with a force platform and the Timed Up and Go (TUG) test. The dual-task group showed significant improvement in all variables compared to the single-task group, except for anteroposterior (AP) sway velocity with eyes open and TUG at follow-up: mediolateral (ML) sway velocity with eyes open (dual-task group vs. single-task group: 2.11 mm/s vs. 0.38 mm/s), ML sway velocity with eyes closed (2.91 mm/s vs. 1.35 mm/s), AP sway velocity with eyes closed (4.84 mm/s vs. 3.12 mm/s). After the intervention, all variables showed significant improvement in the dual-task group compared to baseline. The study results suggest that the performance of a cognitive-motor dual-task using auditory cues may influence balance improvements in chronic stroke patients. © The Author(s) 2014.
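
    For readers unfamiliar with the outcome measures, mediolateral and anteroposterior sway velocity are typically derived from force-platform centre-of-pressure (COP) recordings as path length per unit time along each axis. The following sketch uses synthetic data and an assumed 100 Hz sampling rate; it is not the study's analysis code.

# Illustrative sketch (assumptions, not the study's code): sway velocity for one
# axis estimated from centre-of-pressure (COP) samples as path length / duration.
import numpy as np

def sway_velocity(cop_mm, fs_hz):
    """cop_mm: array of COP positions in mm for one axis; fs_hz: sampling rate."""
    path_length = np.abs(np.diff(cop_mm)).sum()     # mm travelled along this axis
    duration = (len(cop_mm) - 1) / fs_hz            # seconds
    return path_length / duration                   # mm/s

fs = 100                                            # assumed 100 Hz force platform
t = np.arange(0, 30, 1 / fs)                        # 30-s quiet-standing trial
ml = 2.0 * np.sin(2 * np.pi * 0.3 * t)              # synthetic ML COP trace (mm)
print(round(sway_velocity(ml, fs), 2), "mm/s")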

  20. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    Science.gov (United States)

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex of both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competence had reappeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion sites, as determined with brain magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  1. Non-verbal sensorimotor timing deficits in children and adolescents who stutter

    Directory of Open Access Journals (Sweden)

    Simone eFalk

    2015-07-01

    Full Text Available There is growing evidence that motor and speech disorders co-occur during development. In the present study, we investigated whether stuttering, a developmental speech disorder, is associated with a motor timing deficit in childhood and adolescence. By testing sensorimotor synchronization abilities, we aimed to assess whether predictive timing is dysfunctional in young participants who stutter (8-16 years). Twenty German children and adolescents who stutter and 43 non-stuttering participants matched for age and musical training were tested on their ability to synchronize their finger taps with periodic tone sequences and with a musical beat. Forty percent of the children and 90 percent of the adolescents who stutter displayed poor synchronization with both metronome and musical stimuli, falling below 2.5% of the estimated population based on the performance of the group without the disorder. Synchronization deficits were characterized by lower synchronization accuracy, lower consistency, or both. Lower accuracy resulted in an over-anticipation of the pacing event in participants who stutter. Moreover, individual profiles revealed that lower consistency was typical of participants who stuttered severely. These findings support the idea that malfunctioning predictive timing during auditory-motor coupling plays a role in stuttering in children and adolescents.
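
    Synchronization accuracy and consistency of the kind reported above are commonly computed from tap-to-tone asynchronies: accuracy as the mean asynchrony (negative values indicating anticipation of the pacing event) and consistency as the length of the mean resultant vector of the relative phases. The sketch below uses invented tap times and an assumed 600 ms inter-onset interval; it illustrates the standard measures, not the study's own analysis.

# Hypothetical sketch of how tapping accuracy and consistency are often quantified.
import numpy as np

ioi = 0.6                                            # inter-onset interval, s (assumed)
tones = np.arange(20) * ioi
rng = np.random.default_rng(1)
taps = tones - 0.04 + rng.normal(0, 0.02, size=20)   # synthetic taps, slight anticipation

asynchronies = taps - tones
accuracy = asynchronies.mean()                       # negative = tapping ahead of the beat

phases = 2 * np.pi * (asynchronies / ioi)            # relative phase of each tap
consistency = np.abs(np.mean(np.exp(1j * phases)))   # 1 = perfectly consistent

print(f"mean asynchrony {accuracy*1000:.0f} ms, consistency {consistency:.2f}")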

  2. Comparison of effects of valsartan and amlodipine on cognitive functions and auditory p300 event-related potentials in elderly hypertensive patients.

    Science.gov (United States)

    Katada, Eiichi; Uematsu, Norihiko; Takuma, Yuko; Matsukawa, Noriyuki

    2014-01-01

    We compared the antihypertensive effect of valsartan (VAL) and amlodipine (AML) treatments in elderly hypertensive patients by examining long-term changes in cognitive function and auditory P300 event-related potentials. We enrolled 20 outpatients, 12 men and 8 women aged 56 to 81 years, who had mild to moderate essential hypertension. The subjects were randomly allocated to receive either 80 mg VAL once a day (10 patients) or 5 mg AML once a day (10 patients). Neuropsychological assessment and auditory P300 event-related potentials were obtained before initiation of VAL or AML treatment and after 6 months of treatment. Neuropsychological performance was assessed with the Mini-Mental State Examination and the verbal fluency, word-list memory, word-list recall, word-list recognition, and Trails B tests. Both groups showed significantly reduced blood pressure after 6 months of treatment, and the intergroup difference was not significant. The mean baseline Mini-Mental State Examination scores of the VAL and AML groups were not significantly different. Amlodipine treatment did not significantly affect any test score, whereas VAL treatment significantly increased the word-list memory and word-list recall scores. Valsartan, and not AML, also significantly reduced the mean P300 latency after 6 months. These results suggest that VAL exerts a positive effect on cognitive functions, independent of its antihypertensive effect.

  3. Do children with autism have a theory of mind? A non-verbal test of autism vs. specific language impairment.

    Science.gov (United States)

    Colle, Livia; Baron-Cohen, Simon; Hill, Jacqueline

    2007-04-01

    Children with autism have delays in the development of theory of mind. However, the sub-group of children with autism who have little or no language has gone untested, since false belief (FB) tests typically involve language. FB understanding has been reported to be intact in children with specific language impairment (SLI). This raises the possibility that a non-verbal FB test would distinguish children with autism from children with SLI. The present study tested two predictions: (1) FB understanding is to some extent independent of language ability; and (2) children with autism with low language levels show a specific impairment in theory of mind. Results confirmed both predictions. Results are discussed in terms of the role of language in the development of mindreading.

  4. Deficits in visual short-term memory binding in children at risk of non-verbal learning disabilities.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Pancera, Arianna; Galera, Cesar; Cornoldi, Cesare

    2015-01-01

    It has been hypothesized that children with learning disabilities encounter short-term memory (STM) problems especially when they must bind different types of information; however, this hypothesis has not been systematically tested. This study assessed visual STM for shapes and colors, and for the binding of shapes and colors, comparing a group of children (aged between 8 and 10 years) at risk of non-verbal learning disabilities (NLD) with a control group of children matched for general verbal abilities, age, gender, and socioeconomic level. Results revealed that the groups did not differ in retention of either shapes or colors, but children at risk of NLD were poorer than controls in memory for shape-color bindings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability.

    Science.gov (United States)

    Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P

    2008-08-01

    Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form. The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  6. Towards a Cognitive Model of Distraction by Auditory Novelty: The Role of Involuntary Attention Capture and Semantic Processing

    Science.gov (United States)

    Parmentier, Fabrice B. R.

    2008-01-01

    Unexpected auditory stimuli are potent distractors, able to break through selective attention and disrupt performance in an unrelated visual task. This study examined the processing fate of novel sounds by examining the extent to which their semantic content is analyzed and whether the outcome of this processing can impact on subsequent behavior.…

  7. Comparing Auditory Noise Treatment with Stimulant Medication on Cognitive Task Performance in Children with Attention Deficit Hyperactivity Disorder: Results from a Pilot Study.

    Science.gov (United States)

    Söderlund, Göran B W; Björk, Christer; Gustafsson, Peik

    2016-01-01

    Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance, in which a certain amount of noise can improve performance in a brain that is not working at its optimum. We compare here the effect of noise exposure with the effect of stimulant medication on cognitive task performance in ADHD. The aim of the present study was to compare the effects of auditory noise exposure with stimulant medication for ADHD children on a cognitive test battery. A group of typically developed children (TDC) took the same tests as a comparison. Twenty children with ADHD of combined or inattentive subtypes and twenty TDC matched for age and gender performed three different tests (word recall, spanboard and n-back task) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. This pilot study shows that exposure to white noise resulted in a task improvement that was larger than the one with stimulant medication, thus opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.

  8. Comparing Auditory Noise Treatment with Stimulant Medication on Cognitive Task Performance in Children with Attention Deficit Hyperactivity Disorder: Results from a Pilot Study

    Directory of Open Access Journals (Sweden)

    Göran B W Söderlund

    2016-09-01

    Full Text Available Background: Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance, in which a certain amount of noise can improve performance in a brain that is not working at its optimum. We compare here the effect of noise exposure with the effect of stimulant medication on cognitive task performance in ADHD. The aim of the present study was to compare the effects of auditory noise exposure with stimulant medication for ADHD children on a cognitive test battery. A group of typically developed children (TDC) took the same tests as a comparison. Methods: Twenty children with ADHD of combined or inattentive subtypes and twenty typically developed children matched for age and gender performed three different tests (word recall, spanboard and n-back task) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. Results: In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. Conclusion: This pilot study shows that exposure to white noise resulted in a task improvement that was larger than the one with stimulant medication, thus opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.
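
    The stochastic-resonance account invoked in this record can be illustrated with a toy threshold-detector simulation: a weak, subthreshold signal is detected most reliably at a moderate noise level, while very little or very much noise degrades detection. The model below is a generic illustration, not the authors' model, and all parameter values are arbitrary.

# Toy illustration of stochastic resonance (not the study's model): a subthreshold
# signal crosses a detection threshold most usefully at an intermediate noise level.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1000)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)             # peak 0.8, below threshold
threshold = 1.0

for noise_sd in (0.05, 0.4, 5.0):
    noisy = signal + rng.normal(0, noise_sd, size=t.size)
    crossed = noisy > threshold
    hit_rate = crossed[signal > 0.7].mean()          # crossings near signal peaks
    fa_rate = crossed[signal < -0.7].mean()          # crossings near signal troughs
    # Classic stochastic-resonance pattern: hits minus false alarms tends to peak
    # at an intermediate noise level rather than at zero or very high noise.
    print(f"noise sd {noise_sd:>4}: hits - false alarms = {hit_rate - fa_rate:.2f}")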

  9. Mild cognitive impairment: applicability of research criteria in a memory clinic and characterization of cognitive profile.

    Science.gov (United States)

    Alladi, Suvarna; Arnold, Robert; Mitchell, Joanna; Nestor, Peter J; Hodges, John R

    2006-04-01

    We explored the applicability of recently proposed research criteria for mild cognitive impairment (MCI) in a memory clinic and changes in case definition related to which memory tests are used and the status of general cognitive function in MCI. A total of 166 consecutive GP referrals to the Cambridge Memory Clinic underwent comprehensive neuropsychological and psychiatric evaluation. Of 166 cases, 42 were excluded (significant depression 8, established dementia 29 and other disorders 5). Of 124 non-demented, non-depressed patients, 72 fulfilled Petersen's criteria for amnestic MCI based upon verbal memory performance [the Rey Auditory Verbal Learning Test (RAVLT)] and 90 met criteria if performance on verbal and/or non-verbal memory tests [the Rey figure recall or the Paired Associates Learning test (PAL)] was considered. Of the 90 broadly defined MCI cases, only 25 had pure amnesia: other subtle semantic and/or attention deficits were typically present. A further 12 were classed as non-amnestic MCI and 22 as 'worried well'. Definition of MCI varies considerably depending upon the tests used for case definition. The majority have other cognitive deficits despite normal performance on the Mini-Mental State Examination (MMSE) and intact activities of daily living (ADL), and fit within multi-domain MCI. Pure amnesic MCI is rare.

  10. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that
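
    The kind of relationship examined above, between intelligibility, hearing sensitivity and cognition, is often modelled with a multiple regression of speech scores on a pure-tone average and a cognitive composite. The sketch below uses synthetic data and hypothetical variable names; it only illustrates the analysis style, not the study's actual model.

# Hypothetical sketch: ordinary least squares relating speech-in-noise scores to
# hearing sensitivity (pure-tone average, PTA) and a cognitive composite.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 44
pta = rng.normal(35, 8, n)                 # dB HL, mild hearing loss (assumed)
cognition = rng.normal(0, 1, n)            # standardized working-memory/attention composite
intelligibility = 90 - 0.6 * pta + 4 * cognition + rng.normal(0, 5, n)  # % correct

X = sm.add_constant(np.column_stack([pta, cognition]))
model = sm.OLS(intelligibility, X).fit()
print(model.params.round(2))               # intercept, PTA slope, cognition slope
print("R^2 =", round(model.rsquared, 2))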

  11. Psychometric evaluation of the Orofacial Pain Scale for Non-Verbal Individuals as a screening tool for orofacial pain in people with dementia.

    Science.gov (United States)

    Delwel, Suzanne; Perez, Roberto S G M; Maier, Andrea B; Hertogh, Cees M P M; de Vet, Henrica C W; Lobbezoo, Frank; Scherder, Erik J A

    2018-04-29

    The aim of this study was to describe the psychometric evaluation of the Orofacial Pain Scale for Non-Verbal Individuals (OPS-NVI) as a screening tool for orofacial pain in people with dementia. The OPS-NVI has recently been developed and needs psychometric evaluation for clinical use in people with dementia. The pain self-report is imperative as a reference standard and can be provided by people with mild-to-moderate cognitive impairment. The presence of orofacial pain during rest, drinking, chewing and oral hygiene care was observed in people with mild cognitive impairment (MCI) and dementia using the OPS-NVI. Participants who were considered to present a reliable self-report were asked about pain presence, and in all participants, the oral health was examined by a dentist for the presence of potential painful conditions. After item-reduction, inter-rater reliability and criterion validity were determined. The presence of orofacial pain in this population was low (0%-10%), resulting in an average Positive Agreement of 0%-100%, an average Negative Agreement of 77%-100%, a sensitivity of 0%-100% and a specificity of 66%-100% for the individual items of the OPS-NVI. At the same time, the presence of oral problems, such as ulcers, tooth root remnants and caries was high (64.5%). The orofacial pain presence in this MCI and dementia population was low, resulting in low scores for average Positive Agreement and sensitivity and high scores for average Negative Agreement and specificity. Therefore, the OPS-NVI in its current form cannot be recommended as a screening tool for orofacial pain in people with MCI and dementia. However, the inter-rater reliability and criterion validity of the individual items in this study provide more insight for the further adjustment of the OPS-NVI for diagnostic use. Notably, oral health problems were frequently present, although no pain was reported or observed, indicating that oral health problems cannot be used as a new reference
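
    The indices reported above (sensitivity, specificity, and average positive and negative agreement) can all be derived from a 2x2 table crossing the observational item with the self-report reference. The counts in the sketch below are hypothetical, chosen only to show the arithmetic.

# Illustrative computation (hypothetical counts) of screening indices against a
# self-report reference standard: sensitivity, specificity, and average
# positive/negative agreement.
def screening_indices(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    pos_agreement = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else float("nan")
    neg_agreement = 2 * tn / (2 * tn + fp + fn) if (2 * tn + fp + fn) else float("nan")
    return sensitivity, specificity, pos_agreement, neg_agreement

# Example: 3 true positives, 2 false positives, 1 false negative, 54 true negatives.
sens, spec, pa, na = screening_indices(3, 2, 1, 54)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, "
      f"positive agreement {pa:.2f}, negative agreement {na:.2f}")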

  12. Referential Interactions of Turkish-Learning Children with Their Caregivers about Non-Absent Objects: Integration of Non-Verbal Devices and Prior Discourse

    Science.gov (United States)

    Ates, Beyza S.; Küntay, Aylin C.

    2018-01-01

    This paper examines the way children younger than two use non-verbal devices (i.e., deictic gestures and communicative functional acts) and pay attention to discourse status (i.e., prior mention vs. newness) of referents in interactions with caregivers. Data based on semi-naturalistic interactions with caregivers of four children, at ages 1;00,…

  13. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur both naturally and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  14. Individual Differences in Verbal and Non-Verbal Affective Responses to Smells: Influence of Odor Label Across Cultures.

    Science.gov (United States)

    Ferdenzi, Camille; Joussain, Pauline; Digard, Bérengère; Luneau, Lucie; Djordjevic, Jelena; Bensafi, Moustafa

    2017-01-01

    Olfactory perception is highly variable from one person to another, as a function of individual and contextual factors. Here, we investigated the influence of 2 important factors of variation: culture and semantic information. More specifically, we tested whether cultural-specific knowledge and presence versus absence of odor names modulate odor perception, by measuring these effects in 2 populations differing in cultural background but not in language. Participants from France and Quebec, Canada, smelled 4 culture-specific and 2 non-specific odorants in 2 conditions: first without label, then with label. Their ratings of pleasantness, familiarity, edibility, and intensity were collected as well as their psychophysiological and olfactomotor responses. The results revealed significant effects of culture and semantic information, both at the verbal and non-verbal level. They also provided evidence that availability of semantic information reduced cultural differences. Semantic information had a unifying action on olfactory perception that overrode the influence of cultural background. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Emotion Recognition as a Real Strength in Williams Syndrome: Evidence From a Dynamic Non-verbal Task

    Directory of Open Access Journals (Sweden)

    Laure Ibernon

    2018-04-01

    Full Text Available The hypersocial profile characterizing individuals with Williams syndrome (WS), and particularly their attraction to human faces and their desire to form relationships with other people, could favor the development of their emotion recognition capacities. This study seeks to better understand the development of emotion recognition capacities in WS. The ability to recognize six emotions was assessed in 15 participants with WS. Their performance was compared to that of 15 participants with Down syndrome (DS) and 15 typically developing (TD) children of the same non-verbal developmental age, as assessed with Raven’s Colored Progressive Matrices (RCPM; Raven et al., 1998). The analysis of the three groups’ results revealed that the participants with WS performed better than the participants with DS and also than the TD children. Individuals with WS performed at a similar level to TD participants in terms of recognizing different types of emotions. The study of development trajectories confirmed that the participants with WS presented the same development profile as the TD participants. These results seem to indicate that the recognition of emotional facial expressions constitutes a real strength in people with WS.

  16. Contrasting visual working memory for verbal and non-verbal material with multivariate analysis of fMRI

    Science.gov (United States)

    Habeck, Christian; Rakitin, Brian; Steffener, Jason; Stern, Yaakov

    2012-01-01

    We performed a delayed-item-recognition task to investigate the neural substrates of non-verbal visual working memory with event-related fMRI (‘Shape task’). 25 young subjects (mean age: 24.0 years; STD=3.8 years) were instructed to study a list of either 1, 2 or 3 unnamable nonsense line drawings for 3 seconds (‘stimulus phase’ or STIM). Subsequently, the screen went blank for 7 seconds (‘retention phase’ or RET), and then displayed a probe stimulus for 3 seconds in which subjects indicated with a differential button press whether the probe was contained in the studied shape-array or not (‘probe phase’ or PROBE). Ordinal Trend Canonical Variates Analysis (Habeck et al., 2005a) was performed to identify spatial covariance patterns that showed a monotonic increase in expression with memory load during all task phases. Reliable load-related patterns were identified in the stimulus and retention phases, comprising regions whose activity increased across memory loads as well as mediofrontal and temporal regions that were decreasing. Mean subject expression of both patterns across memory load during retention also correlated positively with recognition accuracy (dL) in the Shape task, pointing to the involvement of rehearsal processes. Encoding processes, on the other hand, are critically dependent on the to-be-remembered material, and seem to necessitate material-specific neural substrates. PMID:22652306

  17. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    Science.gov (United States)

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Selection of words for implementation of the Picture Exchange Communication System - PECS in non-verbal autistic children.

    Science.gov (United States)

    Ferreira, Carine; Bevilacqua, Monica; Ishihara, Mariana; Fiori, Aline; Armonia, Aline; Perissinoto, Jacy; Tamanaha, Ana Carina

    2017-03-09

    It is known that some autistic individuals are considered non-verbal, since they are unable to use verbal language and barely use gestures to compensate for the absence of speech. Therefore, these individuals' ability to communicate may benefit from the use of the Picture Exchange Communication System - PECS. The objective of this study was to verify the most frequently used words in the implementation of PECS in autistic children, and on a complementary basis, to analyze the correlation between the frequency of these words and the rate of maladaptive behaviors. This is a cross-sectional study. The sample was composed of 31 autistic children, twenty-five boys and six girls, aged between 5 and 10 years old. To identify the most frequently used words in the initial period of implementation of PECS, the Vocabulary Selection Worksheet was used. And to measure the rate of maladaptive behaviors, we applied the Autism Behavior Checklist (ABC). There was a significant prevalence of items in the category "food", followed by "activities" and "beverages". There was no correlation between the total amount of items identified by the families and the rate of maladaptive behaviors. The categories of words most mentioned by the families could be identified, and it was confirmed that the level of maladaptive behaviors did not interfere directly in the preparation of the vocabulary selection worksheet for the children studied.

  19. The influence of non-verbal educational and therapeutic practices in autism spectrum disorder: the possibilities for physical education professionals

    Directory of Open Access Journals (Sweden)

    Adryelle Fabiane Campelo de Lima

    2017-09-01

    Full Text Available Individuals with autism spectrum disorder (ASD) have symptoms that begin in childhood and affect their ability to function in everyday life. Several types of practices exist to reduce and control the symptoms of ASD. This study therefore aims to analyze the contributions of the main pedagogical and therapeutic practices of non-verbal communication to the motivation, emotional stability, communication and socialization of individuals with autism spectrum disorder, which may support the intervention of the physical education professional. The study was conducted as a systematic review of electronic databases. Initially, 390 documents were identified. After reading and analyzing the titles, 109 documents were selected; after reading the abstracts, 53 were considered eligible and, finally, 18 that fully satisfied the inclusion criteria were included. The results showed that the intervention programs are diverse and that the majority involve music therapy. This systematic review showed that there is direct intervention by physical education professionals only in psychomotricity.

  20. How physician electronic health record screen sharing affects patient and doctor non-verbal communication in primary care.

    Science.gov (United States)

    Asan, Onur; Young, Henry N; Chewning, Betty; Montague, Enid

    2015-03-01

    Use of electronic health records (EHRs) in primary-care exam rooms changes the dynamics of patient-physician interaction. This study examines and compares doctor-patient non-verbal communication (eye-gaze patterns) during primary care encounters for three different screen/information sharing groups: (1) active information sharing, (2) passive information sharing, and (3) technology withdrawal. Researchers video recorded 100 primary-care visits and coded the direction and duration of doctor and patient gaze. Descriptive statistics compared the length of gaze patterns as a percentage of visit length. Lag sequential analysis determined whether physician eye-gaze influenced patient eye gaze, and vice versa, and examined variations across groups. Significant differences were found in duration of gaze across groups. Lag sequential analysis found significant associations between several gaze patterns. Some, such as DGP-PGD ("doctor gaze patient" followed by "patient gaze doctor"), were significant for all groups. Others, such as DGT-PGU ("doctor gaze technology" followed by "patient gaze unknown"), were unique to one group. Some technology use styles (active information sharing) seem to create more patient engagement, while others (passive information sharing) lead to patient disengagement. Doctors can engage patients in communication by using EHRs in the visits. EHR training and design should facilitate this. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
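
    Lag sequential analysis, as used above, asks whether one coded event (e.g., "doctor gazes patient") is followed by another (e.g., "patient gazes doctor") more often than expected by chance. A minimal sketch, with an invented event sequence and the standard adjusted-residual statistic, might look like this:

# Hypothetical sketch of a lag-1 sequential analysis of coded gaze events
# (e.g., DGP = doctor gazes patient, PGD = patient gazes doctor); the event
# sequence is invented and the adjusted-residual formula is the standard one.
import numpy as np

events = ["DGP", "PGD", "DGT", "PGU", "DGP", "PGD", "DGP", "PGD", "DGT", "PGD"]
codes = sorted(set(events))
idx = {c: i for i, c in enumerate(codes)}

# Lag-1 transition count matrix: rows = given event, columns = next event.
trans = np.zeros((len(codes), len(codes)))
for a, b in zip(events, events[1:]):
    trans[idx[a], idx[b]] += 1

n = trans.sum()
row = trans.sum(axis=1, keepdims=True)
col = trans.sum(axis=0, keepdims=True)
expected = row @ col / n
with np.errstate(divide="ignore", invalid="ignore"):
    adj_resid = (trans - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))

print(codes)
print(np.round(adj_resid, 2))   # values beyond ~±1.96 suggest a reliable sequential dependency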

  1. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re…

  2. Are there pre-existing neural, cognitive, or motoric markers for musical ability?

    Science.gov (United States)

    Norton, Andrea; Winner, Ellen; Cronin, Karl; Overy, Katie; Lee, Dennis J; Schlaug, Gottfried

    2005-11-01

    Adult musicians' brains show structural enlargements, but it is not known whether these are inborn or a consequence of long-term training. In addition, music training in childhood has been shown to have positive effects on visual-spatial and verbal outcomes. However, it is not known whether pre-existing advantages in these skills are found in children who choose to study a musical instrument, nor is it known whether there are pre-existing associations between music and any of these outcome measures that could help explain the training effects. To answer these questions, we compared 5- to 7-year-olds beginning piano or string lessons (n=39) with 5- to 7-year-olds not beginning instrumental training (n=31). All children received a series of tests (visual-spatial, non-verbal reasoning, verbal, motor, and musical) and underwent magnetic resonance imaging. We found no pre-existing neural, cognitive, motor, or musical differences between groups and no correlations (after correction for multiple analyses) between music perceptual skills and any brain or visual-spatial measures. However, correlations were found between music perceptual skills and both non-verbal reasoning and phonemic awareness. Such pre-existing correlations suggest similarities in auditory and visual pattern recognition as well as a sharing of the neural substrates for language and music processing, most likely due to innate abilities or implicit learning during early development. This baseline study lays the groundwork for an ongoing longitudinal study addressing the effects of intensive musical training on brain and cognitive development, and making it possible to look retroactively at the brain and cognitive development of those children who emerge showing exceptional musical talent.

  3. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    Science.gov (United States)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community with tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measure systematic errors and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performances in the real environment.
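
    In triangulation tasks of this kind, a trial is typically scored as the signed angular difference between the bearing the participant indicates and the true bearing from their position to the hidden target. The coordinates and pointing response in the sketch below are hypothetical and serve only to show how such a systematic error could be computed.

# Illustrative sketch (hypothetical coordinates) of scoring a triangulation trial:
# the signed angular error between an indicated bearing and the true bearing.
import math

def bearing(from_xy, to_xy):
    """Compass-style bearing in degrees from one point to another."""
    dx, dy = to_xy[0] - from_xy[0], to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def signed_angular_error(estimated_deg, correct_deg):
    """Smallest signed difference, in degrees, between two bearings."""
    return (estimated_deg - correct_deg + 180) % 360 - 180

position = (0.0, 0.0)
target = (30.0, 40.0)                       # hidden landmark
correct = bearing(position, target)         # ~36.9 degrees
print(signed_angular_error(45.0, correct))  # participant pointed at 45 deg -> ~ +8.1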

  4. What a Smile Means: Contextual Beliefs and Facial Emotion Expressions in a Non-verbal Zero-Sum Game.

    Science.gov (United States)

    Pádua Júnior, Fábio P; Prado, Paulo H M; Roeder, Scott S; Andrade, Eduardo B

    2016-01-01

    Research into the authenticity of facial emotion expressions often focuses on the physical properties of the face while paying little attention to the role of beliefs in emotion perception. Further, the literature most often investigates how people express a pre-determined emotion rather than what facial emotion expressions people strategically choose to express. To fill these gaps, this paper proposes a non-verbal zero-sum game - the Face X Game - to assess the role of contextual beliefs and strategic displays of facial emotion expression in interpersonal interactions. This new research paradigm was used in a series of three studies, where two participants are asked to play the role of the sender (individual expressing emotional information on his/her face) or the observer (individual interpreting the meaning of that expression). Study 1 examines the outcome of the game with reference to the sex of the pair, where senders won more frequently when the pair was comprised of at least one female. Study 2 examines the strategic display of facial emotion expressions. The outcome of the game was again contingent upon the sex of the pair. Among female pairs, senders won the game more frequently, replicating the pattern of results from study 1. We also demonstrate that senders who strategically express an emotion incongruent with the valence of the event (e.g., smile after seeing a negative event) are able to mislead observers, who tend to hold a congruent belief about the meaning of the emotion expression. If sending an incongruent signal helps to explain why female senders win more frequently, it logically follows that female observers were more prone to hold a congruent, and therefore inaccurate, belief. This prospect implies that while female senders are willing and/or capable of displaying fake smiles, paired-female observers are not taking this into account. Study 3 investigates the role of contextual factors by manipulating female observers' beliefs. When prompted

  5. Short-term delayed recall of auditory verbal learning test is equivalent to long-term delayed recall for identifying amnestic mild cognitive impairment.

    Directory of Open Access Journals (Sweden)

    Qianhua Zhao

    Full Text Available Delayed recall of words in a verbal learning test is a sensitive measure for the diagnosis of amnestic mild cognitive impairment (aMCI) and early Alzheimer's disease (AD). The relative validity of different retention intervals of delayed recall has not been well characterized. Using the Auditory Verbal Learning Test-Huashan version, we compared the differentiating value of short-term delayed recall (AVL-SR, that is, a 3- to 5-minute delay time) and long-term delayed recall (AVL-LR, that is, a 20-minute delay time) in distinguishing patients with aMCI (n = 897) and mild AD (n = 530) from the healthy elderly (n = 1215). In patients with aMCI, the correlation between AVL-SR and AVL-LR was very high (r = 0.94), and the difference between the two indicators was less than 0.5 points. There was no difference between AVL-SR and AVL-LR in the frequency of zero scores. In the receiver operating characteristic curve analysis, although the area under the curve (AUC) of AVL-SR and AVL-LR for diagnosing aMCI was significantly different, the cut-off scores of the two indicators were identical. In the subgroup of ages 80 to 89, the AUC of the two indicators showed no significant difference. Therefore, we concluded that AVL-SR could substitute for AVL-LR in identifying aMCI, especially for the oldest patients.
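
    The ROC analysis summarized above can be reproduced in outline as follows: compute the area under the curve for a delayed-recall score separating aMCI patients from controls and choose the cut-off that maximizes Youden's index. The sketch uses synthetic scores and an assumed 12-word maximum, not the study's data.

# Hypothetical sketch of an ROC analysis for a delayed-recall score (synthetic data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
controls = rng.normal(7.5, 2.0, 200).clip(0, 12)   # delayed-recall words, healthy elderly
amci = rng.normal(3.5, 2.0, 150).clip(0, 12)       # delayed-recall words, aMCI

scores = np.concatenate([controls, amci])
labels = np.concatenate([np.zeros(200), np.ones(150)])   # 1 = aMCI (lower scores expected)

# Lower scores indicate impairment, so use the negated score as the "risk" value.
auc = roc_auc_score(labels, -scores)
fpr, tpr, thresholds = roc_curve(labels, -scores)
best = np.argmax(tpr - fpr)                               # Youden's J
print(f"AUC = {auc:.2f}, optimal cut-off <= {-thresholds[best]:.1f} words")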

  6. Auditory Neuropathy

    Science.gov (United States)


  7. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  8. Clinical study on the value of combining neuropsychological tests with auditory event-related potential P300 for cognitive assessment in elderly patients with cerebral small vessel disease

    Directory of Open Access Journals (Sweden)

    Xiao-ling ZHAO

    2016-11-01

    Full Text Available Objective To investigate the value of combining neuropsychological tests with the auditory event-related potential (ERP) P300 for cognitive assessment in elderly patients with cerebral small vessel disease (cSVD). Methods A total of 183 elderly patients with cSVD were enrolled in this study. They were divided into 3 groups according to brain MRI: a lacunar infarct (LACI) group (N = 62), a white matter hyperintensity (WMH) group (N = 60) and a LACI + WMH group (N = 61). A total of 50 persons with normal brain MRI were selected as the control group. The Montreal Cognitive Assessment (MoCA, Chinese version) was used to evaluate cognitive function, and the amplitude and latency of P300 were measured in each group. Results Compared with the control group, the MoCA total scores in the LACI, WMH and LACI + WMH groups were significantly lower (P = 0.042, 0.015, 0.000), and the score in the LACI + WMH group was significantly lower than that in the LACI and WMH groups (P = 0.001, 0.042). Among the eight cognitive domains of the MoCA scale, visuospatial and executive function (P = 0.006, 0.041, 0.035), delayed memory (P = 0.006, 0.012, 0.048), language (P = 0.001, 0.032, 0.047) and calculation (P = 0.009, 0.001, 0.003) in the LACI + WMH group were significantly lower than in the control, LACI and WMH groups. Delayed memory in the LACI group was significantly lower than in the control group (P = 0.037). Delayed memory (P = 0.005) and language (P = 0.047) in the WMH group were significantly lower than in the control group. Compared with the control group, the amplitudes of P300 (P = 0.025, 0.033, 0.000) in the LACI, WMH and LACI + WMH groups were significantly decreased, and the latencies (P = 0.018, 0.000, 0.000) were significantly prolonged. The amplitude of P300 in the LACI + WMH group was significantly lower than that in the LACI and WMH groups (P = 0.041, 0.018), and the latency was significantly prolonged (P = 0.000, 0.022). Conclusions Elderly patients with cSVD suffer from varying degrees of cognitive impairment
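
    The abstract reports an omnibus comparison of MoCA scores across four groups followed by pairwise contrasts. As a rough illustration only (not the authors' analysis script, and with simulated placeholder scores rather than the study's data), such comparisons could be run with SciPy as follows.

        # Illustration only: omnibus and pairwise comparisons of MoCA total
        # scores across four groups, with simulated placeholder data.
        from itertools import combinations
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        groups = {
            "control":  rng.normal(27, 2, 50),
            "LACI":     rng.normal(25, 3, 62),
            "WMH":      rng.normal(25, 3, 60),
            "LACI+WMH": rng.normal(23, 3, 61),
        }

        f, p = stats.f_oneway(*groups.values())
        print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
        for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
            t, p = stats.ttest_ind(a, b)
            print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")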

  9. From Sensory Perception to Lexical-Semantic Processing: An ERP Study in Non-Verbal Children with Autism.

    Science.gov (United States)

    Cantiani, Chiara; Choudhury, Naseem A; Yu, Yan H; Shafer, Valerie L; Schwartz, Richard G; Benasich, April A

    2016-01-01

    This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically-developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD.

  11. Cognition and Physicality in Musical Cyberinstruments

    NARCIS (Netherlands)

    Ungvary, T.; Vertegaal, R.P.H.; Wanderley, M.M.; Battier, M.

    1999-01-01

    In this paper, we present the SensOrg, a musical CyberInstrument designed as a modular assembly of input/output devices and musical software, mapped and arranged according to functional characteristics of the Man-Instrument system. We discuss how the cognitive ergonomics of non-verbal and symbolic

  12. Bi-directional effects of depressed mood in the postnatal period on mother-infant non-verbal engagement with picture books.

    Science.gov (United States)

    Reissland, Nadja; Burt, Mike

    2010-12-01

    The purpose of the present study was to examine the bi-directional effects of maternal depressed mood in the postnatal period on maternal and infant non-verbal behaviors while looking at a picture book. Although it is acknowledged that non-verbal engagement with picture books in infancy plays an important role, the effect of maternal depressed mood on stimulating infants' interest in books is not known. Sixty-one mothers and their infants, 38 boys and 23 girls, were observed twice approximately 3 months apart (first observation: mean age 6.8 months, range 3-11 months, 32 mothers with depressed mood; second observation: mean age 10.2 months, range 6-16 months, 17 mothers with depressed mood). There was a significant effect of depressed mood on negative behaviors: infants of mothers with depressed mood tended to push away and close books more often. Negative behaviors (the infant pushing the book away or closing it, and the mother withholding the book or restraining the infant) that were expressed during the first visit were more likely to be expressed during the second visit. Levels of negative behaviors by mother and infant were strongly related during each visit. Additionally, the pattern between visits suggests that maternal negative behavior may be the cause of her infant's negative behavior. These results are discussed in terms of the effects of maternal depressed mood on the bi-directional non-verbal engagement of mother and child. Crown Copyright © 2010. Published by Elsevier Inc. All rights reserved.

  13. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    Science.gov (United States)

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gredebäck, Gustaf

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  14. Musical practice and cognitive aging: two cross-sectional studies point to phonemic fluency as a potential candidate for a use-dependent adaptation.

    Science.gov (United States)

    Fauvel, Baptiste; Groussard, Mathilde; Mutlu, Justine; Arenaza-Urquijo, Eider M; Eustache, Francis; Desgranges, Béatrice; Platel, Hervé

    2014-01-01

    Because of permanent use-dependent brain plasticity, the experiences an individual accumulates over the lifespan are all believed to influence the quality of cognitive aging. In older individuals, both former and current musical practice have been associated with better verbal skills, visual memory, processing speed, and planning function. This work looked for an interaction between musical practice and cognitive aging by comparing musician and non-musician individuals at two lifetime periods (middle and late adulthood). Long-term memory, auditory-verbal short-term memory, processing speed, non-verbal reasoning, and verbal fluencies were assessed. In Study 1, musicians performed significantly better than controls on measures of processing speed and auditory-verbal short-term memory, but both groups displayed the same age-related differences. For verbal fluencies, musicians scored higher than controls and displayed different age effects. In Study 2, we found that the lifetime period at training onset (childhood vs. adulthood) was associated with phonemic, but not semantic, fluency performance (musicians who had started to practice in adulthood did not perform better on phonemic fluency than non-musicians). Current frequency of training did not account for musicians' scores on either of these two measures. These patterns of results are discussed by setting the hypothesis of a transformative effect of musical practice against a non-causal explanation.

  15. Achieving visibility? Use of non-verbal communication in interactions between patients and pharmacists who do not share a common language.

    Science.gov (United States)

    Stevenson, Fiona

    2014-06-01

    Despite the seemingly insatiable interest in healthcare professional-patient communication, less attention has been paid to the use of non-verbal communication in medical consultations. This article considers pharmacists' and patients' use of non-verbal communication to interact directly in consultations in which they do not share a common language. In total, 12 video-recorded, interpreted pharmacy consultations concerned with a newly prescribed medication or a change in medication were analysed in detail. The analysis focused on instances of direct communication initiated by either the patient or the pharmacist, despite the presence of a multilingual pharmacy assistant acting as an interpreter. Direct communication was shown to occur through (i) the demonstration of a medical device, (ii) the indication of relevant body parts and (iii) the use of limited English. These connections worked to make patients and pharmacists visible to each other and thus to maintain a sense of mutual involvement in consultations within which patients and pharmacists could enact professionally and socially appropriate roles. In a multicultural society this work is important in understanding the dynamics involved in consultations in situations in which language is not shared and thus in considering the development of future research and policy. © 2014 The Author. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for SHIL (SHIL).

  16. Language representation of the emotional state of the personage in non-verbal speech behavior (on the material of Russian and German languages

    Directory of Open Access Journals (Sweden)

    Scherbakova Irina Vladimirovna

    2016-06-01

    Full Text Available The article examines how emotions are actualized in the non-verbal speech behavior of a character in a literary text. Emotions are considered the basic and most actively used way in which a literary character reacts to an object, an action, or a communicative situation. Non-verbal ways of expressing emotions give the reader a fuller picture of the character's emotional state. In identifying non-verbal means of communication in literature, the main focus is on the description of kinetic, proxemic and prosodic components. The material of the study consists of microdialogue fragments extracted by continuous sampling from literary texts of Russian-language and German-language classical and modern literature of the 19th and 20th centuries. Fragments of dialogue were analyzed in which the character's non-verbal behavior of varying emotional content (surprise, joy, fear, anger, rage, excitement, etc.) was recorded. It was found that the means used to verbalize and describe emotion in a character's non-verbal behavior are primarily indirect nominations, expressed by verbal vocabulary, adjectives and adverbs. The lexical level is the most significant in presenting the emotional state of the character.

  17. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  18. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    Science.gov (United States)

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  19. Auditory Hallucinations in Polyglots*

    African Journals Online (AJOL)

    1971-12-18

    Dec 18, 1971 ... that they were false. Schizophrenics on ... memory. Verbal as well as non-verbal thinking is employed by everyone, and probably is essential in the forma- ... qualities or emotions such as anger or joy or threats from the voice ...

  20. Auditory memory function in expert chess players.

    Science.gov (United States)

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, as a kind of memory, can be influenced by the strengthening processes that follow long-term chess playing, like other behavioral skills, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was administered to 30 expert chess players aged 20-35 years and 30 non-chess players who were matched on various characteristics; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. The mean scores of the dichotic auditory-verbal memory test for the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ear scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to the strengthening of cognitive performance due to playing chess over a long period.
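
    For readers who want to reproduce this kind of group comparison outside SPSS, a minimal Python equivalent of the independent-samples t-test is sketched below; the score vectors are hypothetical placeholders, not the study's data.

        # Hypothetical dichotic auditory-verbal memory scores for illustration only.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        chess_players = rng.normal(17, 2.5, 30)   # simulated test scores
        controls      = rng.normal(14, 2.5, 30)

        t, p = stats.ttest_ind(chess_players, controls)
        df = len(chess_players) + len(controls) - 2
        print(f"t({df}) = {t:.2f}, p = {p:.4f}")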

  1. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.

    Science.gov (United States)

    Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre

    2014-07-01

    We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  3. Habilidades de praxia verbal e não-verbal em indivíduos gagos Verbal and non-verbal praxic abilities in stutterers

    Directory of Open Access Journals (Sweden)

    Natália Casagrande Brabo

    2009-12-01

    Full Text Available PURPOSE: to characterize the verbal and non-verbal praxic abilities of adults who stutter. METHODS: 40 individuals aged 18 years or older, male and female, took part in the study: 20 stuttering adults and 20 adults without communication complaints. To evaluate verbal and non-verbal praxis, participants were administered the Protocol for the Assessment of Verbal and Non-verbal Apraxia (Martins and Ortiz, 2004). RESULTS: for verbal praxic abilities, there was a statistically significant difference between the groups in the number of typical and atypical disfluencies. Regarding the type of disfluency, among typical disfluencies a statistically significant difference between groups was found only for phrase repetition, whereas among atypical disfluencies significant differences were found for blocks, syllable repetitions and prolongations. For non-verbal praxic abilities, no statistically significant differences were observed between groups in the execution of lip, tongue and jaw movements, either in isolation or in sequence. CONCLUSION: with respect to verbal praxic abilities, stutterers presented a higher frequency of speech disruptions, both typical and atypical disfluencies, than the control group. In the execution of isolated and sequenced praxic movements, that is, in non-verbal praxic abilities, stutterers did not differ from fluent speakers, which does not confirm the hypothesis that an early onset of stuttering could compromise non-verbal praxic abilities.

  4. Visuospatial working memory for locations, colours, and binding in typically developing children and in children with dyslexia and non-verbal learning disability.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Tripodi, Doriana; Cornoldi, Cesare

    2014-03-01

    This study examined forward and backward recall of locations and colours and the binding of locations and colours, comparing typically developing children - aged between 8 and 10 years - with two different groups of children of the same age with learning disabilities (dyslexia in one group, non-verbal learning disability [NLD] in the other). Results showed that groups with learning disabilities had different visuospatial working memory problems and that children with NLD had particular difficulties in the backward recall of locations. The differences between the groups disappeared, however, when locations and colours were bound together. It was concluded that specific processes may be involved in children in the binding and backward recall of different types of information, as they are not simply the resultant of combining the single processes needed to recall single features. © 2013 The British Psychological Society.

  5. Non-verbal communication between Registered Nurses Intellectual Disability and people with an intellectual disability: an exploratory study of the nurse's experiences. Part 1.

    Science.gov (United States)

    Martin, Anne-Marie; Connor-Fenelon, Maureen O'; Lyons, Rosemary

    2012-03-01

    This is the first of two articles presenting the findings of a qualitative study which explored the experiences of Registered Nurses Intellectual Disability (RNIDs) of communicating with people with an intellectual disability who communicate non-verbally. The article reports and critically discusses the findings in the context of the policy and service delivery discourses of person-centredness, inclusion, choice and independence. Arguably, RNIDs are the profession who most frequently encounter people with an intellectual disability and communication impairment. The results suggest that the communication studied is both complicated and multifaceted. An overarching category of 'familiarity/knowing the person' encompasses discrete but related themes and subthemes that explain the process: the RNID knowing the service-user; the RNID/service-user relationship; and the value of experience. People with an intellectual disability, their families and disability services are facing a time of great change, and RNIDs will have a crucial role in supporting this transition.

  6. WHAT’S THE “SECRET” OF THE GESTURE LANGUAGE? A FEW CRITICAL REFLECTIONS ON THE PSEUDO-SCIENCES DEALING WITH THE “NON-VERBAL DECODING”

    Directory of Open Access Journals (Sweden)

    PASCAL LARDELLIER

    2015-05-01

    Full Text Available In this article we deal with a situation commonly encountered in contemporary society: the representatives of pseudo-sciences invite their readers to learn "to decode non-verbal language". They claim that in this way our body becomes "readable" and that it would be enough to know these "theories" in order to read our interlocutors and find out their thoughts and emotions. It is obvious that we are faced with a discourse that imitates the rhetorical codes of science but has nothing to do with science. Moreover, these pseudo-sciences have never been presented or discussed within the academic sphere.

  7. Working memory capacity and visual-verbal cognitive load modulate auditory-sensory gating in the brainstem: toward a unified view of attention.

    Science.gov (United States)

    Sörqvist, Patrik; Stenfelt, Stefan; Rönnberg, Jerker

    2012-11-01

    Two fundamental research questions have driven attention research in the past: One concerns whether selection of relevant information among competing, irrelevant, information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.

  8. Neural correlates of strategy use during auditory working memory in musicians and non-musicians.

    Science.gov (United States)

    Schulze, K; Mueller, K; Koelsch, S

    2011-01-01

    Working memory (WM) performance in humans can be improved by structuring and organizing the material to be remembered. For visual and verbal information, this process of structuring has been associated with the involvement of a prefrontal-parietal network, but for non-verbal auditory material, the brain areas that facilitate WM for structured information have remained elusive. Using functional magnetic resonance imaging, this study compared neural correlates underlying encoding and rehearsal of auditory WM for structured and unstructured material. Musicians and non-musicians performed a WM task on five-tone sequences that were either tonally structured (with all tones belonging to one tonal key) or tonally unstructured (atonal). Functional differences were observed for musicians (who are experts in the music domain), but not for non-musicians: the right pars orbitalis was activated more strongly in musicians during the encoding of unstructured (atonal) vs. structured (tonal) sequences. In addition, data for musicians showed that a lateral (pre)frontal-parietal network (including the right premotor cortex, right inferior precentral sulcus and left intraparietal sulcus) was activated during WM rehearsal of structured, as compared with unstructured, sequences. Our findings indicate that this network plays a role in strategy-based WM for non-verbal auditory information, corroborating previous results showing a similar network for strategy-based WM for visual and verbal information. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  9. Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study.

    Science.gov (United States)

    Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Munesue, Toshio; Ono, Yasuki; Tsubokawa, Tsunehisa; Haruta, Yasuhiro; Oi, Manabu; Niida, Yo; Remijn, Gerard B; Takahashi, Tsutomu; Suzuki, Michio; Higashida, Haruhiro; Minabe, Yoshio

    2013-10-08

    Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, the simultaneous quantification of the bilateral auditory-evoked response during binaural hearing is difficult using conventional adult-sized MEG systems. Recently, a child-customised MEG device has facilitated the acquisition of bi-hemispheric recordings, even in young children. Using the child-customised MEG device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the auditory evoked magnetic field (AEF) in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere is correlated with language performance in autism spectrum disorder (ASD) and TD children. We used magnetoencephalography (MEG) to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. We investigated the P50m that is evoked by voice stimuli (/ne/) bilaterally in 33 young children (3 to 7 years old) with ASD and in 30 young children who were typically developing (TD). The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning subjects. The results showed that the children with ASD exhibited significantly less leftward lateralisation in their P50m intensity compared with the TD children. Furthermore, the results of a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age. The children with ASD did not show any correlation between P50m latency and language-related performance; instead, increasing chronological age was a

  10. Exploration of verbal and non-verbal semantic knowledge and autobiographical memories starting from popular songs in Alzheimer's disease.

    Science.gov (United States)

    Basaglia-Pappas, S; Laterza, M; Borg, C; Richard-Mornas, A; Favre, E; Thomas-Antérion, C

    2013-05-01

    In mild Alzheimer's disease (AD), a deficit in episodic memory, particularly autobiographical memory, is clearly established. Several recent studies have also shown impaired semantic memory from the onset of the disease. Musical memory capacities may be especially preserved and listening to music might encourage autobiographical recall. The aim of this study was to explore recall of popular songs in AD. We tested 12 patients with mild AD and 12 control subjects. We created a tool made up of old French popular songs: POP 10. This tool is a questionnaire composed of several subtests: melodic free recall, chorus free recall, melodic recognition, chorus recognition, semantic knowledge, autobiographical recall about the song, and autobiographical recall about the interpreter. We used non-parametric tests, the Mann-Whitney test (M-W), the Friedman test, and the a posteriori Wilcoxon test. Results of AD patients were rather similar to those of control participants for melodic memory. Concerning chorus memory (except recognition), semantic knowledge, and autobiographical recall about the interpreter, results of AD patients were significantly weaker than those of control participants. The most important result concerned autobiographical recall about the song: we found no impairment-related differences between the two groups. Our findings demonstrate that popular songs can be excellent stimuli for reminiscence, such as the ability to produce an autobiographical memory related to a song. Thus, we confirm that musical semantic knowledge associated with a song may be relatively preserved in the early stages of AD. This leads to new possibilities for cognitive stimulation.

  11. Performance on Auditory and Visual Tasks of Inhibition in English Monolingual and Spanish-English Bilingual Adults: Do Bilinguals Have a Cognitive Advantage?

    Science.gov (United States)

    Desjardins, Jamie L.; Fernandez, Francisco

    2018-01-01

    Purpose: Bilingual individuals have been shown to be more proficient on visual tasks of inhibition compared with their monolingual counterparts. However, the bilingual advantage has not been evidenced in all studies, and very little is known regarding how bilingualism influences inhibitory control in the perception of auditory information. The…

  12. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there has been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was intercorrelated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  14. The Effects of Social Cue Principles on Cognitive Load, Situational Interest, Motivation, and Achievement in Pedagogical Agent Multimedia Learning

    Science.gov (United States)

    Park, Sanghoon

    2015-01-01

    Animated pedagogical agents have become popular in multimedia learning with combined delivery of verbal and non-verbal forms of information. In order to reduce unnecessary cognitive load caused by such multiple forms of information and also to foster generative cognitive processing, multimedia design principles with social cues are suggested…

  15. "You can also save a life!": children's drawings as a non-verbal assessment of the impact of cardiopulmonary resuscitation training.

    Science.gov (United States)

    Petriş, Antoniu Octavian; Tatu-Chiţoiu, Gabriel; Cimpoeşu, Diana; Ionescu, Daniela Florentina; Pop, Călin; Oprea, Nadia; Ţînţ, Diana

    2017-04-01

    Drawings made by children during cardiopulmonary resuscitation (CPR) training in the special education week called "School otherwise" can be used as a non-verbal means of expression and communication to assess the impact of such training. We analyzed the questionnaires and drawings completed by 327 schoolchildren at different stages of education. After a brief overview of the basic life support (BLS) steps and after watching a video presenting the dynamic performance of the BLS sequence, subjects were asked to complete a questionnaire and make a drawing expressing the main CPR messages. Questionnaires were completed in full in 97.6% of cases and drawings were produced in 90.2% of cases. Half of the subjects had already witnessed some kind of medical emergency and 96.94% knew the correct "112" emergency phone number. The drawings were mostly single images (83.81%) and less often cartoon strips (16.18%). The main themes of the slogans were "Save a life!", "Help!", "Call 112!" and "Do not be indifferent/insensible/apathetic!". Through the interpretation of drawings, CPR trainers can use art as a way to build a better relationship with schoolchildren, to connect to their thoughts and feelings, and to achieve the highest quality of education.

  16. Exploring the Domain Specificity of Creativity in Children: The Relationship between a Non-Verbal Creative Production Test and Creative Problem-Solving Activities

    Directory of Open Access Journals (Sweden)

    Ahmed Mohamed

    2012-12-01

    Full Text Available In this study, we explored whether creativity was domain specific or domain general. The relationships between students' scores on three creative problem-solving activities (math, spatial artistic, and oral linguistic) in the DISCOVER assessment (Discovering Intellectual Strengths and Capabilities While Observing Varied Ethnic Responses) and the TCT-DP (Test of Creative Thinking-Drawing Production), a non-verbal general measure of creativity, were examined. The participants were 135 first and second graders from two schools in the Southwestern United States from linguistically and culturally diverse backgrounds. Pearson correlations, canonical correlations, and multiple regression analyses were calculated to describe the relationship between the TCT-DP and the three DISCOVER creative problem-solving activities. We found that creativity has both domain-specific and domain-general aspects, but that the domain-specific component seemed more prominent. One implication of these results is that educators should consider assessing creativity in specific domains to place students in special programs for gifted students rather than relying only on domain-general measures of divergent thinking or creativity.
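
    As an illustration of the correlation and regression analyses named above, the following sketch uses simulated scores with hypothetical variable names (math_act, spatial_act, oral_act, tct_dp); it is not the study's dataset or analysis code, and the canonical correlation step is omitted.

        # Pearson correlations between the TCT-DP and each DISCOVER activity,
        # plus a multiple regression predicting TCT-DP from all three.
        import numpy as np
        from scipy import stats
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 135
        math_act    = rng.normal(0, 1, n)
        spatial_act = rng.normal(0, 1, n)
        oral_act    = rng.normal(0, 1, n)
        tct_dp      = 0.3 * spatial_act + rng.normal(0, 1, n)   # simulated drawing scores

        for name, x in [("math", math_act), ("spatial", spatial_act), ("oral", oral_act)]:
            r, p = stats.pearsonr(x, tct_dp)
            print(f"TCT-DP vs {name}: r = {r:.2f}, p = {p:.3f}")

        X = sm.add_constant(np.column_stack([math_act, spatial_act, oral_act]))
        print(sm.OLS(tct_dp, X).fit().summary())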

  17. Heart rate variability during acute psychosocial stress: A randomized cross-over trial of verbal and non-verbal laboratory stressors.

    Science.gov (United States)

    Brugnera, Agostino; Zarbo, Cristina; Tarvainen, Mika P; Marchettini, Paolo; Adorni, Roberta; Compare, Angelo

    2018-05-01

    Acute psychosocial stress is typically investigated in laboratory settings using protocols with distinctive characteristics. For example, some tasks involve the action of speaking, which seems to alter Heart Rate Variability (HRV) through acute changes in respiration patterns. However, it is still unknown which task induces the strongest subjective and autonomic stress response. The present cross-over randomized trial sought to investigate the differences in perceived stress and in linear and non-linear analyses of HRV between three different verbal (Speech and Stroop) and non-verbal (Montreal Imaging Stress Task; MIST) stress tasks, in a sample of 60 healthy adults (51.7% females; mean age = 25.6 ± 3.83 years). Analyses were run controlling for respiration rates. Participants reported similar levels of perceived stress across the three tasks. However, MIST induced a stronger cardiovascular response than Speech and Stroop tasks, even after controlling for respiration rates. Finally, women reported higher levels of perceived stress and lower HRV both at rest and in response to acute psychosocial stressors, compared to men. Taken together, our results suggest the presence of gender-related differences during psychophysiological experiments on stress. They also suggest that verbal activity masked the vagal withdrawal through altered respiration patterns imposed by speaking. Therefore, our findings support the use of highly-standardized math task, such as MIST, as a valid and reliable alternative to verbal protocols during laboratory studies on stress. Copyright © 2018 Elsevier B.V. All rights reserved.
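
    The abstract mentions linear and non-linear HRV analyses without specifying the pipeline, so the following is only a generic sketch of common time-domain (SDNN, RMSSD) and Poincaré (SD1/SD2) indices computed from RR intervals, with made-up example data rather than the study's recordings.

        import numpy as np

        def hrv_indices(rr_ms):
            """Basic HRV indices from a series of RR intervals in milliseconds."""
            rr = np.asarray(rr_ms, dtype=float)
            diff = np.diff(rr)
            sdnn = rr.std(ddof=1)                     # overall variability
            rmssd = np.sqrt(np.mean(diff ** 2))       # short-term (vagally mediated) variability
            sd1 = np.sqrt(0.5) * diff.std(ddof=1)     # Poincare plot width
            sd2 = np.sqrt(2 * sdnn ** 2 - sd1 ** 2)   # Poincare plot length
            return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}

        rr_example = [812, 790, 805, 774, 768, 795, 810, 782, 776, 801]  # fake data
        print({k: round(v, 1) for k, v in hrv_indices(rr_example).items()})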

  18. Auditory processing and phonological awareness skills of five-year-old children with and without musical experience.

    Science.gov (United States)

    Escalda, Júlia; Lemos, Stela Maris Aguiar; França, Cecília Cavalieri

    2011-09-01

    To investigate the relations between musical experience, auditory processing and phonological awareness of groups of 5-year-old children with and without musical experience. Participants were 56 5-year-old subjects of both genders, 26 in the Study Group, consisting of children with musical experience, and 30 in the Control Group, consisting of children without musical experience. All participants were assessed with the Simplified Auditory Processing Assessment and Phonological Awareness Test and the data was statistically analyzed. There was a statistically significant difference between the results of the sequential memory test for verbal and non-verbal sounds with four stimuli, phonological awareness tasks of rhyme recognition, phonemic synthesis and phonemic deletion. Analysis of multiple binary logistic regression showed that, with exception of the sequential verbal memory with four syllables, the observed difference in subjects' performance was associated with their musical experience. Musical experience improves auditory and metalinguistic abilities of 5-year-old children.

  19. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    OpenAIRE

    Zahra Shahidipour; Ahmad Geshani; Zahra Jafari; Shohreh Jalaie; Elham Khosravifard

    2014-01-01

    Background and Aim: Memory is one of the aspects of cognitive function which is widely affected among aged people. Since aging has different effects on different memorial systems and little studies have investigated auditory-verbal memory function in older adults using dichotic listening techniques, the purpose of this study was to evaluate the auditory-verbal memory function among old people using Persian version of dichotic auditory-verbal memory test. Methods: The Persian version of dic...

  20. Relationship between anthropometric indicators and cognitive performance in Southeast Asian school-aged children.

    Science.gov (United States)

    Sandjaja; Poh, Bee Koon; Rojroonwasinkul, Nipa; Le Nyugen, Bao Khanh; Budiman, Basuki; Ng, Lai Oon; Soonthorndhada, Kusol; Xuyen, Hoang Thi; Deurenberg, Paul; Parikh, Panam

    2013-09-01

    Nutrition is an important factor in mental development and, as a consequence, in cognitive performance. Malnutrition is reflected in children's weight, height and BMI curves. The present cross-sectional study aimed to evaluate the association between anthropometric indices and cognitive performance in 6746 school-aged children (aged 6-12 years) of four Southeast Asian countries: Indonesia; Malaysia; Thailand; Vietnam. Cognitive performance (non-verbal intelligence quotient (IQ)) was measured using Raven's Progressive Matrices test or Test of Non-Verbal Intelligence, third edition (TONI-3). Height-for-age z-scores (HAZ), weight-for-age z-scores (WAZ) and BMI-for-age z-scores (BAZ) were used as anthropometric nutritional status indices. Data were weighted using age, sex and urban/rural weight factors to resemble the total primary school-aged population per country. Overall, 21% of the children in the four countries were underweight and 19% were stunted. Children with low WAZ were 3·5 times more likely to have a non-verbal IQ < 89 (OR 3·53 and 95% CI 3·52, 3·54). The chance of having a non-verbal IQ < 89 was also doubled with low BAZ and HAZ. In contrast, except for severe obesity, the relationship between high BAZ and IQ was less clear and differed per country. The odds of having non-verbal IQ levels < 89 also increased with severe obesity. In conclusion, undernourishment and non-verbal IQ are significantly associated in 6-12-year-old children. Effective strategies to improve nutrition in preschoolers and school-aged children can have a pronounced effect on cognition and, in the longer term, help in positively contributing to individual and national development.
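
    A hedged sketch of the kind of logistic-regression analysis that yields odds ratios like those reported above; the data are simulated and the variable names (low_waz, low_iq) are hypothetical, so this is illustrative only, not the survey's dataset or model specification.

        # Logistic regression of low non-verbal IQ (< 89) on low weight-for-age,
        # reporting the odds ratio and its 95% confidence interval.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 6746
        low_waz = rng.binomial(1, 0.21, n)                    # 1 = underweight
        logit_p = -1.0 + np.log(3.5) * low_waz                # builds in an OR near 3.5
        low_iq = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # 1 = non-verbal IQ < 89

        fit = sm.Logit(low_iq, sm.add_constant(low_waz)).fit(disp=False)
        or_waz = np.exp(fit.params[1])
        ci_low, ci_high = np.exp(fit.conf_int()[1])
        print(f"OR = {or_waz:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")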

  1. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  2. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  3. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria Medalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  4. Self-recognition Deficits in Schizophrenia Patients With Auditory Hallucinations : A Meta-analysis of the Literature

    NARCIS (Netherlands)

    Waters, Flavie; Woodward, Todd; Allen, Paul; Aleman, Andre; Sommers, Iris

    Theories about auditory hallucinations in schizophrenia suggest that these experiences occur because patients fail to recognize thoughts and mental events as self-generated. Different theoretical models have been proposed about the cognitive mechanisms underlying auditory hallucinations. Regardless

  5. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  7. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  8. Auditory dysfunction in patients with Huntington's disease.

    Czech Academy of Sciences Publication Activity Database

    Profant, Oliver; Roth, J.; Bureš, Zbyněk; Balogová, Zuzana; Lišková, Irena; Betka, J.; Syka, Josef

    2017-01-01

    Vol. 128, No. 10 (2017), pp. 1946-1953. ISSN 1388-2457. R&D Projects: GA ČR(CZ) GBP304/12/G069. Institutional support: RVO:68378041; RVO:67985904. Keywords: auditory pathology * central hearing loss * cognition. Subject RIV: ED - Physiology. OECD field: Otorhinolaryngology; Clinical neurology (UZFG-Y). Impact factor: 3.866, year: 2016

  9. The dissociation of perception and cognition in children with early brain damage.

    Science.gov (United States)

    Stiers, Peter; Vandenbussche, Erik

    2004-03-01

    Reduced non-verbal compared to verbal intelligence is used in many outcome studies of perinatal complications as an indication of visual perceptual impairment. To investigate whether this is justified, we re-examined data sets from two previous studies, both of which used the visual perceptual battery L94. The first study comprised 47 children at risk for cerebral visual impairment due to prematurity or birth asphyxia, who had been administered the McCarthy Scales of Children's abilities. The second study evaluated visual perceptual abilities in 82 children with a physical disability. These children's intellectual ability had been assessed with the Wechsler Intelligence Scale for Children-Revised and/or Wechsler Pre-school and Primary Scale of Intelligence-Revised. No significant association was found between visual perceptual impairment and (1) reduced non-verbal to verbal intelligence; (2) increased non-verbal subtest scatter; or (3) non-verbal subtest profile deviation, for any of the intelligence scales. This result suggests that non-verbal intelligence subtests assess a complex of cognitive skills that are distinct from visual perceptual abilities, and that this assessment is not hampered by deficits in perceptual abilities as manifested in these children.

  10. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionately severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  11. Speech discrimination difficulties in High-Functioning Autism Spectrum Disorder are likely independent of auditory hypersensitivity.

    Directory of Open Access Journals (Sweden)

    William Andrew Dunlop

    2016-08-01

    Full Text Available Autism Spectrum Disorder (ASD), characterised by impaired communication skills and repetitive behaviours, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants.

  12. Effects of auditory training in individuals with high-frequency hearing loss

    Directory of Open Access Journals (Sweden)

    Renata Beatriz Fernandes Santos

    2014-01-01

    Full Text Available OBJECTIVE: To determine the effects of a formal auditory training program on the behavioral, electrophysiological and subjective aspects of auditory function in individuals with bilateral high-frequency hearing loss. METHOD: A prospective study of seven individuals aged 46 to 57 years with symmetric, moderate high-frequency hearing loss ranging from 3 to 8 kHz was conducted. Evaluations of auditory processing (sound localization, verbal and non-verbal sequential memory tests, the speech-in-noise test, the staggered spondaic word test, synthetic sentence identification with ipsilateral and contralateral competing messages, random gap detection and the standard duration test), auditory brainstem responses and long-latency potentials, and the administration of the Abbreviated Profile of Hearing Aid Benefit questionnaire were performed in a sound booth before and immediately after formal auditory training. RESULTS: All of the participants demonstrated abnormal pre-training long-latency characteristics (abnormal latency or absence of the P3 component), and these abnormal characteristics were maintained in six of the seven individuals at the post-training evaluation. No significant differences were found between ears in the quantitative analysis of auditory brainstem responses or long-latency potentials. However, the subjects demonstrated improvements on all behavioral tests. For the questionnaire, the difference on the background noise subscale achieved statistical significance. CONCLUSION: Auditory training in adults with high-frequency hearing loss led to improvements in figure-ground hearing skills for verbal sounds, temporal ordering and resolution, and communication in noisy environments. Electrophysiological changes were also observed: after the training, some long-latency components that had been absent pre-training were present at the re-evaluation.

  13. Can We Use Creativity to Improve Generic Skills in Our Higher Education Students? A Proposal Based on Non-Verbal Communication and Creative Movement

    Science.gov (United States)

    Rodriquez, Rosa Maria; Castilla, Guillermo

    2013-01-01

    Traditionally, general skills and personal growth have been developed through cognitive processes within academic contexts. Development based on experience may be an alternative route to achieve cognitive knowledge. Enact-learning is based on the biunivocal relationship between knowledge and action. Action is movement. Participants interact with…

  14. Ketamine-induced deficits in auditory and visual context-dependent processing in healthy volunteers: implications for models of cognitive deficits in schizophrenia.

    Science.gov (United States)

    Umbricht, D; Schmid, L; Koller, R; Vollenweider, F X; Hell, D; Javitt, D C

    2000-12-01

    In patients with schizophrenia, deficient generation of mismatch negativity (MMN), an event-related potential (ERP) indexing auditory sensory ("echoic") memory, and a selective increase of "context dependent" ("BX") errors in the "A-X" version of the Continuous Performance Test (AX-CPT) indicate an impaired ability to form and use transient memory traces. Animal and human studies implicate deficient N-methyl-D-aspartate receptor (NMDAR) functioning in such abnormalities. In this study, effects of the NMDAR antagonist ketamine on MMN generation and AX-CPT performance were investigated in healthy volunteers to test the hypothesis that NMDARs are critically involved in human MMN generation, and to assess the nature of ketamine-induced deficits in AX-CPT performance. In a single-blind placebo-controlled study, 20 healthy volunteers underwent an infusion with subanesthetic doses of ketamine. The MMN-to-pitch and MMN-to-duration deviants were obtained while subjects performed an AX-CPT. Ketamine significantly decreased the peak amplitudes of the MMN-to-pitch and MMN-to-duration deviants by 27% and 21%, respectively. It induced performance deficits in the AX-CPT characterized by decreased hit rates and specific increases of errors (BX errors), reflecting a failure to form and use transient memory traces of task relevant information. The NMDARs are critically involved in human MMN generation. Deficient MMN in schizophrenia thus suggests deficits in NMDAR-related neurotransmission. N-methyl-D-aspartate receptor dysfunction may also contribute to the impairment of patients with schizophrenia in forming and using transient memory traces in more complex tasks, such as the AX-CPT. Thus, NMDAR-related dysfunction may underlie deficits in transient memory at different levels of information processing in schizophrenia. Arch Gen Psychiatry. 2000;57:1139-1147.

  15. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on (1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and (2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. The attenuation of auditory neglect by implicit cues.

    Science.gov (United States)

    Coleman, A Rand; Williams, J Michael

    2006-09-01

    This study examined the effects of implicit semantic and rhyming cues on the perception of auditory stimuli among nonaphasic participants who had suffered a lesion of the right cerebral hemisphere and auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact among these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.

  17. Verbal--Spatial IQ Discrepancies Impact Brain Activation Associated with the Resolution of Cognitive Conflict in Children and Adolescents

    Science.gov (United States)

    Margolis, Amy E.; Davis, Katie S.; Pao, Lisa S.; Lewis, Amy; Yang, Xiao; Tau, Gregory; Zhao, Guihu; Wang, Zhishun; Marsh, Rachel

    2018-01-01

    Verbal--spatial discrepancies are common in healthy individuals and in those with neurodevelopmental disorders associated with cognitive control deficits including: Autism Spectrum Disorder, Non-Verbal Learning Disability, Fragile X, 22q11 deletion, and Turner Syndrome. Previous data from healthy individuals suggest that the magnitude of the…

  18. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills, such as drawing inferences from conversations, understanding riddles, or comprehending verbal math problems, require heightened auditory processing and language levels.

  19. Attention, awareness, and the perception of auditory scenes

    Directory of Open Access Journals (Sweden)

    Joel S Snyder

    2012-02-01

    Full Text Available Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to precisely distinguish among the influences of different higher-level factors.

  20. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  1. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  2. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization test performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  4. Biological impact of music and software-based auditory training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based training can improve these biological signals. These findings of biological plasticity, in a variety of subject populations, relate to attention and auditory memory, and represent an integrated auditory system influenced by both sensation and cognition. Learning outcomes The reader will (1) understand that the auditory system is malleable to experience and training, (2) learn the ingredients necessary for auditory learning to successfully be applied to communication, (3) learn that the auditory brainstem response to complex sounds (cABR) is a window into the integrated auditory system, and (4) see examples of how cABR can be used to track the outcome of experience and training. PMID:22789822

  5. Verbal working memory deficits predict levels of auditory hallucination in first-episode psychosis.

    Science.gov (United States)

    Gisselgård, Jens; Anda, Liss Gøril; Brønnick, Kolbjørn; Langeveld, Johannes; Ten Velden Hegelstad, Wenche; Joa, Inge; Johannessen, Jan Olav; Larsen, Tor Ketil

    2014-03-01

    Auditory verbal hallucinations are a characteristic symptom in schizophrenia. Recent causal models of auditory verbal hallucinations propose that cognitive mechanisms involving verbal working memory are involved in the genesis of auditory verbal hallucinations. Thus, in the present study, we investigate the hypothesis that verbal working memory is a specific factor behind auditory verbal hallucinations. In the present study, we investigated the association between verbal working memory manipulation (Backward Digit Span and Letter-Number Sequencing) and auditory verbal hallucinations in a population study (N=52) of first episode psychosis. The degree of auditory verbal hallucination as reported in the P3-subscale of the PANSS interview was included as dependent variable using sequential multiple regression, while controlling for age, psychosis symptom severity, executive cognitive functions and simple auditory working memory span. Multiple sequential regression analyses revealed verbal working memory manipulation to be the only significant predictor of verbal hallucination severity. Consistent with cognitive data from auditory verbal hallucinations in healthy individuals, the present results suggest a specific association between auditory verbal hallucinations, and cognitive processes involving the manipulation of phonological representations during a verbal working memory task. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  7. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  8. The impact of educational level on performance on auditory processing tests

    Directory of Open Access Journals (Sweden)

    Cristina F.B. Murphy

    2016-03-01

    Full Text Available Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. Years of schooling was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  9. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524

  10. Plasticity in the Primary Auditory Cortex, Not What You Think it is: Implications for Basic and Clinical Auditory Neuroscience

    Science.gov (United States)

    Weinberger, Norman M.

    2013-01-01

    Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP) in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375

  11. Brief Report: Predicting Inner Speech Use amongst Children with Autism Spectrum Disorder (ASD)--The Roles of Verbal Ability and Cognitive Profile

    Science.gov (United States)

    Williams, David M.; Jarrold, Christopher

    2010-01-01

    Studies of inner speech use in ASD have produced conflicting results. Lidstone et al., J "Autism Dev Disord" (2009) hypothesised that Cognitive Profile (i.e., "discrepancy" between non-verbal and verbal abilities) is a predictor of inner speech use amongst children with ASD. They suggested other, contradictory results might be explained in terms…

  12. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual to auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  13. Self-grounding visual, auditory and olfactory autobiographical memories.

    Science.gov (United States)

    Knez, Igor; Ljunglöf, Louise; Arshamian, Artin; Willander, Johan

    2017-07-01

    Given that autobiographical memory provides a cognitive foundation for the self, we investigated the relative importance of visual, auditory and olfactory autobiographical memories for the self. Thirty subjects, with a mean age of 35.4 years, participated in a study involving a three-by-three within-subject design containing nine different types of autobiographical memory cues: pictures, sounds and odors presented with neutral, positive and negative valences. It was shown that visual, compared to auditory and olfactory, autobiographical memories involved higher cognitive and emotional constituents for the self. Furthermore, there was a trend for positive autobiographical memories to contribute an increasing proportion to both cognitive and emotional components of the self from olfactory to auditory to visually cued autobiographical memories, with the reverse trend for negative autobiographical memories. Finally, and independently of modality, positive affective states were shown to be more involved in autobiographical memory than negative ones. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Modality and domain specific components in auditory and visual working memory tasks.

    Science.gov (United States)

    Lehnert, Günther; Zimmer, Hubert D

    2008-03-01

    In the tripartite model of working memory (WM) it is postulated that a unique part-system, the visuo-spatial sketchpad (VSSP), processes non-verbal content. Due to behavioral and neurophysiological findings, the VSSP was later subdivided into visual object and visual spatial processing, the former representing objects' appearance and the latter spatial information. This distinction is well supported. However, a challenge to this model is the question of how spatial information from non-visual sensory modalities, for example the auditory one, is processed. Only a few studies so far have directly compared visual and auditory spatial WM. They suggest that the distinction of two processing domains, one for object and one for spatial information, also holds true for auditory WM, but that only a part of the processes is modality specific. We propose that processing in the object domain (the item's appearance) is modality specific, while spatial WM as well as object-location binding relies on modality general processes.

  15. Regional cerebral blood flow during the auditory oddball task measured by positron emission tomography

    International Nuclear Information System (INIS)

    Mochida, Masahiko

    1997-01-01

    Regional cerebral blood flow (rCBF) was measured by employing PET in nine healthy right-handed male subjects, while they simultaneously performed the auditory oddball task using tone bursts. Results showed that the rCBF value was highest in the transverse gyrus of Heschl in both right and left hemispheres. When comparing the rCBF values between the right and left hemispheres, four areas had higher rCBF values in the left hemisphere and eight areas had higher rCBF values in the right hemisphere. Of these, the anterior and posterior parts of the superior temporal gyrus, especially, showed significant differences. The hemispheric differences in the rCBF values of the auditory areas can be attributed to the performance of the oddball task, which requires higher processing of non-verbal auditory input. The P300 amplitude, which reflects the amount of allocated information-processing resources, correlated positively with rCBF in the following areas: the left piriform cortex and the transverse gyrus of Heschl in both left and right hemispheres. Meanwhile, the P300 amplitude correlated negatively with rCBF in the nucleus accumbens septi in both right and left hemispheres. The N100 amplitude evoked by the frequent stimulus showed no correlation with rCBF in nearly all ROIs. (K.H.)

  16. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  17. Cognitive abilities in children in contexts of poverty

    Directory of Open Access Journals (Sweden)

    Silvina Cohen Imach

    2015-09-01

    Full Text Available A series of studies of cognitive abilities was conducted with a group of children in a context of poverty, in order to learn about the quality of these abilities, which are directly related to low school performance and subsequent risk of academic underachievement. Fifty-three 4th-year EGB-2 (elementary school) children of both sexes participated. They attend a suburban school outside the city of San Miguel de Tucumán, Argentina. The Analogies and Building with Cubes tests of the Wechsler Intelligence Scale for Children III (WISC-III) were used. Additionally, a register protocol was prepared by the research team. Outcomes were combined with a demographic survey inquiring into the children's socio-economic context. The results reveal that 18.9% of the children scored below the standard in cognitive abilities related to the aptitude for forming verbal concepts, and 13.2% in non-verbal concepts. Verbal abilities refer to the faculty of classifying and categorizing, for which the subject needs to organize, abstract and find relationships between facts or ideas, and to comprehend oral/auditory assignments. Non-verbal abilities refer to the aptitude for carrying out processes of analysis and synthesis and applying non-verbal reasoning to spatial relationships. This group of children was selected to receive, in a second stage, training in these abilities through the Instrumental Enrichment Program.

  18. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  20. Opposite brain laterality in analogous auditory and visual tests.

    Science.gov (United States)

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  1. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Subcortical pathways: Towards a better understanding of auditory disorders.

    Science.gov (United States)

    Felix, Richard A; Gourévitch, Boris; Portfors, Christine V

    2018-05-01

    Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. The Study of Frequency Self Care Strategies against Auditory Hallucinations

    Directory of Open Access Journals (Sweden)

    Mahin Nadem

    2012-03-01

    Full Text Available Background: In schizophrenic clients, self-care strategies against auditory hallucinations can decrease the disturbances resulting from hallucination. This study aimed to assess the frequency of self-care strategies against auditory hallucinations in paranoid schizophrenic patients hospitalized in Shafa Hospital. Materials and Method: This was a descriptive study of 201 patients with paranoid schizophrenia hospitalized in a psychiatric unit in Rasht, recruited by convenience sampling. The gathered data consisted of two parts: demographic characteristics and a 38-item self-report questionnaire on self-care strategies. Results: There were statistically significant relationships between demographic variables, knowledge, and self-care strategies against auditory hallucinations: sex with the physical domain (p0.07), marital status with the cognitive domain (p>0.07), and living status with the behavioural domain (p>0.01). Of the reported auditory hallucinations, 53.2% were command hallucinations; furthermore, the most effective self-care strategies against auditory hallucinations were from the physical domain, with substance abuse (82.1%) the most effective strategy in this domain. Conclusion: Clients with paranoid schizophrenia used physical-domain strategies against auditory hallucinations more than other domains, a result that highlights their need for appropriate nursing intervention, including instruction and guidance on selecting effective self-care strategies against auditory hallucinations.

  4. Visual versus auditory Simon effect: A behavioural and physiological investigation.

    Science.gov (United States)

    D'Ascenzo, Stefania; Lugli, Luisa; Baroni, Giulia; Guidotti, Roberto; Rubichi, Sandro; Iani, Cristina; Nicoletti, Roberto

    2018-04-01

    This study investigated whether the visual and auditory Simon effects could be accounted for by the same mechanism. In a single experiment, we performed a detailed comparison of the visual and the auditory Simon effects arising in behavioural responses and in pupil dilation, a psychophysiological measure considered as a marker of the cognitive effort induced by conflict processing. To address our question, we performed sequential and distributional analyses on both reaction times and pupil dilation. Results confirmed that the mechanisms underlying the visual and auditory Simon effects are functionally equivalent in terms of the interaction between unconditional and conditional response processes. The two modalities, however, differ with respect to the strength of their activation and inhibition. Importantly, pupillary data mirrored the pattern observed in behavioural data for both tasks, adding physiological evidence to the current literature on the processing of visual and auditory information in a conflict task.

  5. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions; a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore the voices, and an emotional dimension, experienced as the "voices" having primarily a negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that goes beyond standard interview questionnaires, by suggesting an iPhone/iPod app.

  6. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults.

    Directory of Open Access Journals (Sweden)

    Erich S Tusch

    Full Text Available The inhibitory deficit hypothesis of cognitive aging posits that older adults' inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be (1) observed under an auditory-ignore, but not auditory-attend, condition, (2) attenuated in individuals with high executive capacity (EC), and (3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study's findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts.

  7. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  8. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  9. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  10. Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?

    Science.gov (United States)

    Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno

    2012-01-01

    The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…

  11. Interference by Process, Not Content, Determines Semantic Auditory Distraction

    Science.gov (United States)

    Marsh, John E.; Hughes, Robert W.; Jones, Dylan M.

    2009-01-01

    Distraction by irrelevant background sound of visually-based cognitive tasks illustrates the vulnerability of attentional selectivity across modalities. Four experiments centred on auditory distraction during tests of memory for visually-presented semantic information. Meaningful irrelevant speech disrupted the free recall of semantic…

  12. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.
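
    As a concrete illustration of how an MMN is commonly quantified (this is a generic recipe, not the specific pipeline of the study above), the sketch below averages standard and deviant epochs from a frontocentral electrode and measures the mean amplitude of the deviant-minus-standard difference wave in a typical MMN latency window. The epoch arrays, sampling rate and window are hypothetical placeholders filled with random numbers.

        import numpy as np

        fs = 500                      # sampling rate in Hz (assumed)
        epoch_start = -0.1            # epoch begins 100 ms before stimulus onset

        def erp(epochs):
            """Average a stack of single-trial epochs (trials x samples) into an ERP."""
            return np.mean(epochs, axis=0)

        def mean_amplitude(wave, t_min, t_max):
            """Mean amplitude of a waveform between t_min and t_max (seconds)."""
            times = epoch_start + np.arange(wave.size) / fs
            mask = (times >= t_min) & (times <= t_max)
            return np.mean(wave[mask])

        # standard_epochs and deviant_epochs stand in for baseline-corrected,
        # preprocessed EEG segments of shape (n_trials, n_samples).
        rng = np.random.default_rng(0)
        standard_epochs = rng.normal(0, 1, (180, 400))
        deviant_epochs = rng.normal(0, 1, (20, 400))

        difference_wave = erp(deviant_epochs) - erp(standard_epochs)
        mmn = mean_amplitude(difference_wave, 0.10, 0.25)   # 100-250 ms window
        print(f"MMN mean amplitude: {mmn:.2f} (arbitrary units)")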

  13. Neuropsychopharmacology of auditory hallucinations: insights from pharmacological functional MRI and perspectives for future research.

    Science.gov (United States)

    Johnsen, Erik; Hugdahl, Kenneth; Fusar-Poli, Paolo; Kroken, Rune A; Kompus, Kristiina

    2013-01-01

    Experiencing auditory verbal hallucinations is a prominent symptom in schizophrenia that also occurs in subjects at enhanced risk for psychosis and in the general population. Drug treatment of auditory hallucinations is challenging, because the current understanding is limited with respect to the neural mechanisms involved, as well as how CNS drugs, such as antipsychotics, influence the subjective experience and neurophysiology of hallucinations. In this article, the authors review studies of the effect of antipsychotic medication on brain activation as measured with functional MRI in patients with auditory verbal hallucinations. First, the authors examine the neural correlates of ongoing auditory hallucinations. Then, the authors critically discuss studies addressing the antipsychotic effect on the neural correlates of complex cognitive tasks. Current evidence suggests that blood oxygen level-dependent effects of antipsychotic drugs reflect specific, regional effects, but studies on the neuropharmacology of auditory hallucinations are scarce. Future directions for pharmacological neuroimaging of auditory hallucinations are discussed.

  14. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition

  15. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  16. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  17. Minimal effects of visual memory training on the auditory performance of adult cochlear implant users

    Science.gov (United States)

    Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie

    2014-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087

  18. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

    This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks as well as the association between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. Age at implantation and auditory memory in cochlear implanted children.

    Science.gov (United States)

    Mikic, B; Miric, D; Nikolic-Mikic, M; Ostojic, S; Asanovic, M

    2014-05-01

    Early cochlear implantation, before the age of 3 years, provides the best outcome regarding listening, speech, cognition and memory due to maximal central nervous system plasticity. Intensive postoperative training improves not only auditory performance and language, but affects auditory memory as well. The aim of this study was to discover if the age at implantation affects auditory memory function in cochlear implanted children. A total of 50 cochlear implanted children aged 4 to 8 years were enrolled in this study: early implanted (1-3y) n = 27 and late implanted (4-6y) n = 23. Two types of memory tests were used: Immediate Verbal Memory Test and Forward and Backward Digit Span Test. Early implanted children performed better on both verbal and numeric tasks of auditory memory. The difference was statistically significant, especially on the complex tasks. Early cochlear implantation, before the age of 3 years, significantly improves auditory memory and contributes to better cognitive and educational outcomes.

  20. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. Neurofeedback (NFB) using an auditory stimulus as a reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.

  1. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduction of the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified and that the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modalities or from distinct ones. Some studies suggest that aging is especially sensitive to distraction within-modality while others suggest it is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (the same tone on most trials-the standard sound-or, on rare and unpredictable trials, a burst of white noise-the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to the slowing of this type of attentional shift or a reduction in cognitive control required to re-orient attention toward the target's modality.

  2. Auditory training and challenges associated with participation and compliance.

    Science.gov (United States)

    Sweetow, Robert W; Sabes, Jennifer Henderson

    2010-10-01

    When individuals have hearing loss, physiological changes in their brain interact with relearning of sound patterns. Some individuals utilize compensatory strategies that may result in successful hearing aid use. Others, however, are not so fortunate. Modern hearing aids can provide audibility but may not rectify spectral and temporal resolution, susceptibility to noise interference, or degradation of cognitive skills, such as declining auditory memory and slower speed of processing associated with aging. Frequently, these deficits are not identified during a typical "hearing aid evaluation." Aural rehabilitation has long been advocated to enhance communication but has not been considered time or cost-effective. Home-based, interactive adaptive computer therapy programs are available that are designed to engage the adult hearing-impaired listener in the hearing aid fitting process, provide listening strategies, build confidence, and address cognitive changes. Despite the availability of these programs, many patients and professionals are reluctant to engage in and complete therapy. The purposes of this article are to discuss the need for identifying auditory and nonauditory factors that may adversely affect the overall audiological rehabilitation process, to discuss important features that should be incorporated into training, and to examine reasons for the lack of compliance with therapeutic options. Possible solutions to maximizing compliance are explored. Only a small portion of audiologists (fewer than 10%) offer auditory training to patients with hearing impairment, even though auditory training appears to lower the rate of hearing aid returns for credit. Patients to whom auditory training programs are recommended often do not complete the training, however. Compliance for a cohort of home-based auditory therapy trainees was less than 30%. Activities to increase patient compliance to auditory training protocols are proposed. American Academy of Audiology.

  3. Specific components of face perception in the human fusiform gyrus studied by tomographic estimates of magnetoencephalographic signals: a tool for the evaluation of non-verbal communication in psychosomatic paradigms

    Directory of Open Access Journals (Sweden)

    Ioannides Andreas A

    2007-12-01

    Full Text Available Abstract Aims The aim of this study was to determine the specific spatiotemporal activation patterns of face perception in the fusiform gyrus (FG). The FG is a key area in the specialized brain system that makes possible the recognition of faces with ease and speed in our daily life. Characterization of FG response provides a quantitative method for evaluating the fundamental functions that contribute to non-verbal communication in various psychosomatic paradigms. Methods The MEG signal was recorded during passive visual stimulus presentation with three stimulus types – Faces, Hands and Shoes. The stimuli were presented separately to the central and peripheral visual fields. We performed statistical parametric mapping (SPM) analysis of tomographic estimates of activity to compare activity between a pre- and post-stimulus period in the same object (baseline test), and activity between objects (active test). The time course of regional activation curves was analyzed for each stimulus condition. Results The SPM baseline test revealed a response to each stimulus type, which was very compact at the initial segment of the main MFG170. For hands and shoes the area of significant change remains compact. For faces the area expanded widely within a few milliseconds and its boundaries engulfed the other object areas. The active test demonstrated that activity for faces was significantly larger than the activity for hands. The same face-specific compact area as in the baseline test was identified, and then again expanded widely. For each stimulus type and presentation in each of the visual field locations, the analysis of the time course of FG activity identified three components in the FG: MFG100, MFG170, and MFG200 – all showed preference for faces. Conclusion Early compact face-specific activity in the FG expands widely along the occipito-ventral brain within a few milliseconds. The significant difference between faces and the other object stimuli in MFG

  4. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    Science.gov (United States)

    Ptok, M; Meisen, R

    2008-01-01

    The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to representation and processing of speech sounds but arise due to sensory, mainly auditory, deficits. To further explore this theory we compared different measures of auditory low-level skills to writing skills in school children. Prospective study. School children attending third and fourth grade. Just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJb and TOJm); grade in writing, language and mathematics. Correlation analysis. No relevant correlation was found between any auditory low-level processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.

  5. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  6. Saudi normative data for the Wisconsin Card Sorting test, Stroop test, Test of Non-verbal Intelligence-3, Picture Completion and Vocabulary (subtest of the Wechsler Adult Intelligence Scale-Revised).

    Science.gov (United States)

    Al-Ghatani, Ali M; Obonsawin, Marc C; Binshaig, Basmah A; Al-Moutaery, Khalaf R

    2011-01-01

    There are 2 aims for this study: first, to collect normative data for the Wisconsin Card Sorting Test (WCST), Stroop test, Test of Non-verbal Intelligence (TONI-3), Picture Completion (PC) and Vocabulary (VOC) sub-tests of the Wechsler Adult Intelligence Scale-Revised for use in a Saudi Arabian culture, and second, to use the normative data provided to generate the regression equations. To collect the normative data and generate the regression equations, 198 healthy individuals were selected to provide a representative distribution for age, gender, years of education, and socioeconomic class. The WCST, Stroop test, TONI-3, PC, and VOC were administered to the healthy individuals. This study was carried out at the Department of Clinical Neurosciences, Riyadh Military Hospital, Riyadh, Kingdom of Saudi Arabia from January 2000 to July 2002. Normative data were obtained for all tests, and tables were constructed to interpret scores for different age groups. Regression equations to predict performance on the 3 tests of frontal function from scores on tests of fluid (TONI-3) and premorbid intelligence were generated from the data from the healthy individuals. The data collected in this study provide normative tables for 3 tests of frontal lobe function and for tests of general intellectual ability for use in Saudi Arabia. The data also provide a method to estimate pre-injury ability without the use of verbally based tests.
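
    A normative regression equation of the kind described can be sketched as follows. This is illustrative only: the predictors, coefficients and sample values are invented placeholders, not the equations published in the study. Ordinary least squares is fitted to a normative sample, and the resulting equation is then used to predict an individual's expected frontal-test score.

        import numpy as np

        # Hypothetical normative sample: predictors are a TONI-3 score, age and
        # years of education; the outcome stands in for a frontal-lobe test score.
        rng = np.random.default_rng(1)
        n = 198
        toni3 = rng.normal(100, 15, n)
        age = rng.uniform(18, 65, n)
        education = rng.uniform(6, 18, n)
        frontal_score = 0.04 * toni3 - 0.02 * age + 0.1 * education + rng.normal(0, 0.5, n)

        # Fit y = b0 + b1*TONI3 + b2*age + b3*education by ordinary least squares.
        X = np.column_stack([np.ones(n), toni3, age, education])
        coeffs, *_ = np.linalg.lstsq(X, frontal_score, rcond=None)
        b0, b1, b2, b3 = coeffs
        print(f"predicted = {b0:.2f} + {b1:.3f}*TONI3 + {b2:.3f}*age + {b3:.3f}*education")

        # Expected score for a new individual drawn from the same population.
        expected = b0 + b1 * 105 + b2 * 40 + b3 * 12
        print(f"expected frontal score: {expected:.2f}")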

  7. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrete, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  8. Using neuroplasticity-based auditory training to improve verbal memory in schizophrenia.

    Science.gov (United States)

    Fisher, Melissa; Holland, Christine; Merzenich, Michael M; Vinogradov, Sophia

    2009-07-01

    Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach.

  9. Effects of emotionally charged auditory stimulation on gait performance in the elderly: a preliminary study.

    Science.gov (United States)

    Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe

    2015-04-01

    To evaluate the effect of a novel divided attention task-walking under auditory constraints-on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Validation cohort. General community. Ambulatory older adults without dementia (N=104). Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with a 1-between factor (cognitively intact and minimal cognitive impairment groups) and a 1-within factor (type of auditory stimuli). In both happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68cm/s in gait velocity (F1.86,191.71=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F1.75,180.42=10.12; Pactivities of daily living accounted for these differences. Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those without minimal cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task to gait on other dual-task paradigms and to separately examine the

  10. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  11. Cognitive, auditory and linguistic development in children exposed to music: production of national and international knowledge

    Directory of Open Access Journals (Sweden)

    Mayra Lopes Eugênio

    2012-10-01

    Full Text Available Music is an important environmental factor for the development of motor, auditory, linguistic, cognitive and visual skills, among others. Recent studies point to a relationship between the study of music and the improvement of auditory processing, of linguistic and metalinguistic abilities, and of cognitive processes, all skills inherent to human communication. Speech-language pathology is concerned with the acquisition, development and refinement of the skills required for human communication; thus, there appears to be an interrelationship between the fields of music and speech-language pathology. The aim of this study is to describe and analyse the scientific literature relevant to understanding the influence of music on auditory, linguistic and cognitive abilities. Although scientific production on the topic is scarce, the studies reviewed point to a positive relationship between musical practice and overall child development. The most frequently addressed topic was auditory processing, followed by cognitive development and language. Music can be considered a true ally in speech-language therapy, underlining the importance of music education for children with phonological disorders, auditory processing disorders, and disorders of oral and written language. Based on what was found in this literature review, new perspectives open up for work in speech-language pathology, so that existing gaps can be filled and new knowledge can be added to what has already been built toward promoting full child development.

  12. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  13. Trait aspects of auditory mismatch negativity predict response to auditory training in individuals with early illness schizophrenia.

    Science.gov (United States)

    Biagianti, Bruno; Roach, Brian J; Fisher, Melissa; Loewy, Rachel; Ford, Judith M; Vinogradov, Sophia; Mathalon, Daniel H

    2017-01-01

    Individuals with schizophrenia have heterogeneous impairments of the auditory processing system that likely mediate differences in the cognitive gains induced by auditory training (AT). Mismatch negativity (MMN) is an event-related potential component reflecting auditory echoic memory, and its amplitude reduction in schizophrenia has been linked to cognitive deficits. Therefore, MMN may predict response to AT and identify individuals with schizophrenia who have the most to gain from AT. Furthermore, to the extent that AT strengthens auditory deviance processing, MMN may also serve as a readout of the underlying changes in the auditory system induced by AT. Fifty-six individuals early in the course of a schizophrenia-spectrum illness (ESZ) were randomly assigned to 40 h of AT or Computer Games (CG). Cognitive assessments and EEG recordings during a multi-deviant MMN paradigm were obtained before and after AT and CG. Changes in these measures were compared between the treatment groups. Baseline and trait-like MMN data were evaluated as predictors of treatment response. MMN data collected with the same paradigm from a sample of Healthy Controls (HC; n = 105) were compared to baseline MMN data from the ESZ group. Compared to HC, ESZ individuals showed significant MMN reductions at baseline (p = .003). Reduced Double-Deviant MMN was associated with greater general cognitive impairment in ESZ individuals (p = .020). Neither ESZ intervention group showed significant change in MMN. We found high correlations in all MMN deviant types (rs = .59-.68, all ps < .001) between baseline and post-intervention amplitudes irrespective of treatment group, suggesting trait-like stability of the MMN signal. Greater deficits in trait-like Double-Deviant MMN predicted greater cognitive improvements in the AT group (p = .02), but not in the CG group. In this sample of ESZ individuals, AT had no effect on auditory deviance processing as assessed by MMN. In ESZ individuals, baseline MMN

  14. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such effect may not be limited to negative emotions but reflect a general depletion of attentional resources by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of the exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.

  15. Imitation Therapy for Non-Verbal Toddlers

    Science.gov (United States)

    Gill, Cindy; Mehta, Jyutika; Fredenburg, Karen; Bartlett, Karen

    2011-01-01

    When imitation skills are not present in young children, speech and language skills typically fail to emerge. There is little information on practices that foster the emergence of imitation skills in general and verbal imitation skills in particular. The present study attempted to add to our limited evidence base regarding accelerating the…

  16. Spontaneous Non-verbal Counting in Toddlers

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2016-01-01

    A wealth of studies have investigated numerical abilities in infants and in children aged 3 or above, but research on pre-counting toddlers is sparse. Here we devised a novel version of an imitation task that was previously used to assess spontaneous focusing on numerosity (i.e. the predisposition to grasp numerical properties of the environment)…

  17. Non-verbal communication through sensor fusion

    Science.gov (United States)

    Tairych, Andreas; Xu, Daniel; O'Brien, Benjamin M.; Anderson, Iain A.

    2016-04-01

    When we communicate face to face, we subconsciously engage our whole body to convey our message. In telecommunication, e.g. during phone calls, this powerful information channel cannot be used. Capturing nonverbal information from body motion and transmitting it to the receiver parallel to speech would make these conversations feel much more natural. This requires a sensing device that is capable of capturing different types of movements, such as the flexion and extension of joints, and the rotation of limbs. In a first embodiment, we developed a sensing glove that is used to control a computer game. Capacitive dielectric elastomer (DE) sensors measure finger positions, and an inertial measurement unit (IMU) detects hand roll. These two sensor technologies complement each other, with the IMU allowing the player to move an avatar through a three-dimensional maze, and the DE sensors detecting finger flexion to fire weapons or open doors. After demonstrating the potential of sensor fusion in human-computer interaction, we take this concept to the next level and apply it in nonverbal communication between humans. The current fingerspelling glove prototype uses capacitive DE sensors to detect finger gestures performed by the sending person. These gestures are mapped to corresponding messages and transmitted wirelessly to another person. A concept for integrating an IMU into this system is presented. The fusion of the DE sensor and the IMU combines the strengths of both sensor types, and therefore enables very comprehensive body motion sensing, which makes a large repertoire of gestures available to nonverbal communication over distances.
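
    A minimal sketch of the kind of fusion described might look like the following. The sensor readings, thresholds and the mapping from gestures to messages are invented for illustration and are not the prototype's actual calibration: normalized finger-flexion values from the capacitive DE sensors are combined with the roll angle reported by the IMU to select a message.

        from dataclasses import dataclass

        @dataclass
        class GloveSample:
            flexion: list[float]   # one value per finger (thumb..pinky), 0.0 = straight, 1.0 = fully bent
            roll_deg: float        # hand roll from the IMU, in degrees

        def classify_gesture(sample: GloveSample) -> str:
            """Map one fused glove sample to a message (illustrative rules only)."""
            bent = [f > 0.6 for f in sample.flexion]
            if all(bent):
                return "fist: 'stop'"
            if not any(bent) and abs(sample.roll_deg) < 20:
                return "open flat hand: 'hello'"
            if bent[2] and bent[3] and bent[4] and not bent[0] and not bent[1]:
                # thumb and index extended; IMU roll disambiguates the meaning
                return "pointing: 'look over there'" if sample.roll_deg > 45 else "pointing: 'you'"
            return "unrecognized gesture"

        # Example fused reading: thumb and index extended, hand rotated by 60 degrees.
        sample = GloveSample(flexion=[0.1, 0.2, 0.8, 0.9, 0.85], roll_deg=60.0)
        print(classify_gesture(sample))

    The design point the sketch tries to capture is the complementarity noted above: the flexion sensors alone cannot tell a rotated hand from an upright one, while the IMU alone cannot tell which fingers are bent, so the two streams are only meaningful when read together.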

  18. Cognitive precursors of arithmetic development in primary school children with cerebral palsy.

    Science.gov (United States)

    Van Rooijen, M; Verhoeven, L; Smits, D W; Dallmeijer, A J; Becher, J G; Steenbergen, B

    2014-04-01

    The aim of this study was to examine the development of arithmetic performance and its cognitive precursors in children with CP from 7 till 9 years of age. Previous research has shown that children with CP are generally delayed in arithmetic performance compared to their typically developing peers. In children with CP, the developmental trajectory of the ability to solve addition- and subtraction tasks has, however, rarely been studied, as well as the cognitive factors affecting this trajectory. Sixty children (M=7.2 years, SD=.23 months at study entry) with CP participated in this study. Standardized tests were administered to assess arithmetic performance, word decoding skills, non-verbal intelligence, and working memory. The results showed that the ability to solve addition- and subtraction tasks increased over a two year period. Word decoding skills were positively related to the initial status of arithmetic performance. In addition, non-verbal intelligence and working memory were associated with the initial status and growth rate of arithmetic performance from 7 till 9 years of age. The current study highlights the importance of non-verbal intelligence and working memory to the development of arithmetic performance of children with CP. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Automatic detection of frequency changes depends on auditory stimulus intensity.

    Science.gov (United States)

    Salo, S; Lang, A H; Aaltonen, O; Lertola, K; Kärki, T

    1999-06-01

    A cortical cognitive auditory evoked potential, mismatch negativity (MMN), reflects automatic discrimination and echoic memory functions of the auditory system. For this study, we examined whether this potential is dependent on the stimulus intensity. The MMN potentials were recorded from 10 subjects with normal hearing using a sine tone of 1000 Hz as the standard stimulus and a sine tone of 1141 Hz as the deviant stimulus, with probabilities of 90% and 10%, respectively. The intensities were 40, 50, 60, 70, and 80 dB HL for both standard and deviant stimuli in separate blocks. Stimulus intensity had a statistically significant effect on the mean amplitude, rise time parameter, and onset latency of the MMN. Automatic auditory discrimination seems to be dependent on the sound pressure level of the stimuli.

  20. Altered intrinsic connectivity of the auditory cortex in congenital amusia.

    Science.gov (United States)

    Leveque, Yohana; Fauvel, Baptiste; Groussard, Mathilde; Caclin, Anne; Albouy, Philippe; Platel, Hervé; Tillmann, Barbara

    2016-07-01

    Congenital amusia, a neurodevelopmental disorder of music perception and production, has been associated with abnormal anatomical and functional connectivity in a right frontotemporal pathway. To investigate whether spontaneous connectivity in brain networks involving the auditory cortex is altered in the amusic brain, we ran a seed-based connectivity analysis, contrasting at-rest functional MRI data of amusic and matched control participants. Our results reveal reduced frontotemporal connectivity in amusia during resting state, as well as an overconnectivity between the auditory cortex and the default mode network (DMN). The findings suggest that the auditory cortex is intrinsically more engaged toward internal processes and less available to external stimuli in amusics compared with controls. Beyond amusia, our findings provide new evidence for the link between cognitive deficits in pathology and abnormalities in the connectivity between sensory areas and the DMN at rest. Copyright © 2016 the American Physiological Society.
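
    Seed-based connectivity analyses of the sort mentioned here follow a standard recipe that can be sketched as below. The data are synthetic and the pipeline is generic rather than the study's exact one: the mean time course of a seed region (for example, auditory cortex) is correlated with the time course of every other voxel or region, and the correlations are Fisher z-transformed before any group comparison.

        import numpy as np

        def seed_connectivity(seed_ts, target_ts):
            """Fisher z-transformed Pearson correlation between a seed time series
            (n_timepoints,) and each target time series (n_targets, n_timepoints)."""
            seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
            targets = (target_ts - target_ts.mean(axis=1, keepdims=True)) / target_ts.std(axis=1, keepdims=True)
            r = targets @ seed / seed.size                  # Pearson r per target
            return np.arctanh(np.clip(r, -0.999, 0.999))    # Fisher z

        # Synthetic example: 200 resting-state volumes, 5 target regions, one of
        # which (index 0) shares signal with the auditory-cortex seed.
        rng = np.random.default_rng(2)
        seed_ts = rng.normal(size=200)
        target_ts = rng.normal(size=(5, 200))
        target_ts[0] += 0.8 * seed_ts                       # built-in coupling with the seed

        z = seed_connectivity(seed_ts, target_ts)
        print("Fisher z per target region:", np.round(z, 2))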

  1. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  2. Research on cognitive mechanism and brain-computer interface application in visual-auditory crossmodal stimuli

    Institute of Scientific and Technical Information of China (English)

    安兴伟; 曹勇; 焦学军; 明东

    2017-01-01

    Information from images, speech and text (language), which is closely tied to human visual and auditory perception, plays an important role in socioeconomic development and national security. The brain-computer interface (BCI) is an innovative technology that can control external devices without the peripheral neuromuscular system, turning "thought" directly into "action", and BCI systems based on visual and auditory stimuli have extremely broad application prospects. Current cognitive psychology research generally holds that visual-auditory crossmodal stimuli (VACS) evoke stronger event-related potentials than unimodal visual or auditory stimuli, and should therefore allow brain states to be recognized with higher accuracy and faster response; BCI research, however, has so far produced few results that fully verify this advantage. This paper reviews, from the perspectives of cognitive psychological mechanisms and BCI experimental paradigms, the information-integration mechanisms of visual-auditory crossmodal stimulation, the cooperation and competition between visual and auditory stimuli, the influence of the degree of audiovisual matching on stimulation effects, and the state of research on related BCI paradigms; it then analyses why the VACS paradigm has not yet demonstrated its superiority in existing BCI systems, and finally proposes improvements in terms of BCI paradigm selection, system hardware performance and EEG signal processing, and outlines future directions for VACS-based BCI paradigm research.

  3. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  4. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  5. Auditory changes in acromegaly.

    Science.gov (United States)

    Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E

    2017-06-01

    The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group only underwent otological examination and PTA. Additionally, previously performed paranasal sinus-computed tomography of all cases with acromegaly and control subjects were obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher in cases with acromegaly, and the IAC in the acromegaly group was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly 13 (32%) had hearing loss in at least one ear, 7 (54%) had sensorineural type and 6 (46%) had conductive type hearing loss. Acromegaly may cause certain changes in the auditory system. The changes in the auditory system may be multifactorial, causing both conductive and sensorineural defects.

  6. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are major objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  7. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    Science.gov (United States)

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance, and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test that assesses sentence perception in

  8. Behavioural and neuroanatomical correlates of auditory speech analysis in primary progressive aphasias.

    Science.gov (United States)

    Hardy, Chris J D; Agustus, Jennifer L; Marshall, Charles R; Clark, Camilla N; Russell, Lucy L; Bond, Rebecca L; Brotherhood, Emilie V; Thomas, David L; Crutch, Sebastian J; Rohrer, Jonathan D; Warren, Jason D

    2017-07-27

    Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs) but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters in sequences of spoken syllables: temporal regularity, phonemic spectral structure, and prosodic predictability (an index of fundamental information content, or entropy). The ability of participants to process these parameters was assessed using two-alternative, forced-choice tasks and neuroanatomical associations of task performance were assessed using voxel-based morphometry of patients' brain magnetic resonance images. Relative to healthy controls, both the nfvPPA and svPPA groups had impaired processing of phonemic spectral structure and signal predictability while the nfvPPA group additionally had impaired processing of temporal regularity in speech signals. Task performance correlated with standard disease severity and neurolinguistic measures. Across the patient cohort, performance on the temporal regularity task was associated with grey matter in the left supplementary motor area and right caudate, performance on the phoneme processing task was associated with grey matter in the left supramarginal gyrus, and performance on the prosodic predictability task was associated with grey matter in the right putamen. Our findings suggest that PPA syndromes may be underpinned by more generic deficits of auditory signal analysis, with a distributed cortico-subcortical neuroanatomical substrate extending beyond the canonical language network. This has implications for syndrome classification and biomarker development.

  9. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  10. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  11. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The central theta-band and occipital alpha-band effects are in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.

  12. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been developed. This study aimed to investigate the effects of null and positive background music in comparison with silence on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed for three conditions: silence, positive, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive and null music) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  13. Auditory and visual sustained attention in Down syndrome.

    Science.gov (United States)

    Faught, Gayle G; Conners, Frances A; Himmelberger, Zachary M

    2016-01-01

    Sustained attention (SA) is important to task performance and development of higher functions. It emerges as a separable component of attention during preschool and shows incremental improvements during this stage of development. The current study investigated if auditory and visual SA match developmental level or are particular challenges for youth with DS. Further, we sought to determine if there were modality effects in SA that could predict those seen in short-term memory (STM). We compared youth with DS to typically developing youth matched for nonverbal mental age and receptive vocabulary. Groups completed auditory and visual sustained attention to response tests (SARTs) and STM tasks. Results indicated groups performed similarly on both SARTs, even over varying cognitive ability. Further, within groups participants performed similarly on auditory and visual SARTs, thus SA could not predict modality effects in STM. However, SA did generally predict a significant portion of unique variance in groups' STM. Ultimately, results suggested both auditory and visual SA match developmental level in DS. Further, SA generally predicts STM, though SA does not necessarily predict the pattern of poor auditory relative to visual STM characteristic of DS. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Insult-induced adaptive plasticity of the auditory system

    Directory of Open Access Journals (Sweden)

    Joshua R Gold

    2014-05-01

    Full Text Available The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimising performance and behaviour. As an avenue for investigation, studies centred on changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple – if not all – levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioural implications. Nevertheless, there is substantial debate regarding which of these processes may only be sequelae of the original insult, and which may, in fact, be maladaptively compelling further degradation of the organism’s competence to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which putatively are brought about through maladaptive neuroplastic disruptions to auditory networks governing the spatial and temporal processing of acoustic sensory information.

  15. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  16. Do informal musical activities shape auditory skill development in preschool-age children?

    OpenAIRE

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-01-01

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are...

  17. Statistical learning and auditory processing in children with music training: An ERP study.

    Science.gov (United States)

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training to those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  18. The cognitive organization of music knowledge: a clinical analysis.

    Science.gov (United States)

    Omar, Rohani; Hailstone, Julia C; Warren, Jane E; Crutch, Sebastian J; Warren, Jason D

    2010-04-01

    Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer's disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer's disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.

  19. Higher dietary diversity is related to better visual and auditory sustained attention.

    Science.gov (United States)

    Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed

    2016-04-01

    Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS. These findings suggest that higher dietary diversity is related to better visual and auditory sustained attention.
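    As a rough sketch of how a dietary diversity score of this kind is tallied from a single 24-h recall (the nine food groups and the example foods below are illustrative FAO-style categories, not necessarily the exact grouping used in the study), one simply counts the distinct food groups represented:

        # Illustrative FAO-style food groups; the study's exact categories may differ.
        FOOD_GROUPS = {
            "starchy staples", "legumes and nuts", "dairy", "meat and fish", "eggs",
            "vitamin A-rich fruits and vegetables", "other fruits", "other vegetables",
            "fats and oils",
        }

        def dietary_diversity_score(recalled_items):
            """Count the distinct food groups reported in a single 24-h recall.
            `recalled_items` maps each reported food to its food group."""
            return len({g for g in recalled_items.values() if g in FOOD_GROUPS})

        # Hypothetical recall for one participant (score = 4).
        recall = {"rice": "starchy staples", "lentil stew": "legumes and nuts",
                  "yogurt": "dairy", "apple": "other fruits"}
        print(dietary_diversity_score(recall))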

  20. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    Science.gov (United States)

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool for clinical neuroscience, so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  1. Minimal effects of visual memory training on auditory performance of adult cochlear implant users.

    Science.gov (United States)

    Oba, Sandra I; Galvin, John J; Fu, Qian-Jie

    2013-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users' speech and music perception. However, it is unclear whether posttraining gains in performance were due to improved auditory perception or to generally improved attention, memory, and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory, were assessed in 10 CI users before, during, and after training with a nonauditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Posttraining gains were much smaller with the nonauditory VDS training than observed in previous auditory training studies with CI users. The results suggest that posttraining gains observed in previous studies were not solely attributable to improved attention or memory and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception.

  2. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors which, if favorable, orient development in the proper direction, leading it towards normality, and, if unfavorable, deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice) and the modifications induced by it at the cochlear level represent the first example of programming in one of the earliest critical periods in development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of the communicative intention of language after birth; and the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  3. Automatic hearing loss detection system based on auditory brainstem response

    International Nuclear Information System (INIS)

    Aldonate, J; Mercuri, C; Reta, J; Biurrun, J; Bonell, C; Gentiletti, G; Escobar, S; Acevedo, R

    2007-01-01

    Hearing loss is one of the pathologies with the highest prevalence in newborns. If it is not detected in time, it can affect the nervous system and cause problems in speech, language and cognitive development. The recommended methods for early detection are based on otoacoustic emissions (OAE) and/or auditory brainstem response (ABR). In this work, the design and implementation of an automated system based on ABR to detect hearing loss in newborns is presented. Preliminary evaluation in adults was satisfactory.

  4. Effect of background music on auditory-verbal memory performance

    OpenAIRE

    Sona Matloubi; Ali Mohammadzadeh; Zahra Jafari; Alireza Akbarzade Baghban

    2014-01-01

    Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music effects cognitive development such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been developed. This study aimed to investigate the effects of null and positive background music in comparison with silence on auditory-verbal memory performance.Methods: Forty young adults (male and female) with normal hearing, aged betw...

  5. Biological impact of music and software-based auditory training

    OpenAIRE

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based ...

  6. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  7. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  8. The relation between cognitive and motor performance and their relevance for children's transition to school: a latent variable approach.

    Science.gov (United States)

    Roebers, Claudia M; Röthlisberger, Marianne; Neuenschwander, Regula; Cimeli, Patrizia; Michel, Eva; Jäger, Katja

    2014-02-01

    Both theoretically and empirically there is continuing interest in understanding the specific relation between cognitive and motor development in childhood. This relation was targeted in the present longitudinal study, which included three measurement points. At the beginning of the study, the participating children were 5-6-year-olds. Participants' fine motor skills, executive functioning, and non-verbal intelligence were assessed, and their cross-sectional and cross-lagged interrelations were examined. Additionally, performance in these three areas was used to predict early school achievement (in terms of mathematics, reading, and spelling) at the end of participants' first grade. Correlational analyses and structural equation modeling revealed that fine motor skills, non-verbal intelligence and executive functioning were significantly interrelated. Both fine motor skills and intelligence had significant links to later school achievement. However, when executive functioning was additionally included in the prediction of early academic achievement, fine motor skills and non-verbal intelligence were no longer significantly associated with later school performance, suggesting that executive functioning plays an important role in the motor-cognitive performance link. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  10. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  11. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use.

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Wagener, Kirsten C; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean age 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with the measures Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA emerged as significant predictors only in the group of hearing aid non-users.

  12. Sadness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R

    2014-02-01

    Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners’ perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  14. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
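    As a minimal illustration of the entropy measure itself (the probability distributions below are invented for illustration and are not taken from the study's Markov model), Shannon entropy can be computed over a hypothetical next-note distribution; higher values correspond to greater predictive uncertainty:

        import math

        def shannon_entropy(probs):
            """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution,
            in bits; zero-probability outcomes contribute nothing."""
            return -sum(p * math.log2(p) for p in probs if p > 0)

        # Hypothetical next-note distributions over a five-note alphabet, such as
        # might be estimated by a variable-order Markov model of a melodic context.
        low_entropy_context = [0.85, 0.05, 0.05, 0.03, 0.02]    # one continuation strongly expected
        high_entropy_context = [0.25, 0.22, 0.20, 0.18, 0.15]   # many continuations plausible

        print(f"low-entropy context:  H = {shannon_entropy(low_entropy_context):.2f} bits")
        print(f"high-entropy context: H = {shannon_entropy(high_entropy_context):.2f} bits")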

  15. Story Retelling and Language Ability in School-Aged Children with Cerebral Palsy and Speech Impairment

    Science.gov (United States)

    Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela

    2015-01-01

    Background: Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. Aims: To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the…

  16. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
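    As a rough illustration of the S2/S1 ratio described above (this is not the authors' analysis pipeline; the sampling rate, latency window, and simulated epochs are assumptions made only to give a runnable sketch), a P50-style gating ratio can be computed from two averaged click-evoked responses:

        import numpy as np

        def peak_amplitude(erp, fs, window=(0.040, 0.080)):
            """Peak amplitude within a post-click latency window (seconds),
            e.g. roughly 40-80 ms for a P50-like component."""
            start, stop = (int(t * fs) for t in window)
            return np.max(erp[start:stop])

        def gating_ratio(erp_s1, erp_s2, fs):
            """S2/S1 amplitude ratio: values well below 1 indicate suppression of the
            second click (intact gating); values near or above 1 indicate weak gating."""
            return peak_amplitude(erp_s2, fs) / peak_amplitude(erp_s1, fs)

        # Simulated averaged ERPs (1-s epochs at 1000 Hz) standing in for preprocessed EEG.
        fs = 1000
        rng = np.random.default_rng(0)
        erp_s1 = rng.normal(0.0, 1.0, fs)
        erp_s2 = rng.normal(0.0, 0.5, fs)
        print(f"P50-style gating ratio: {gating_ratio(erp_s1, erp_s2, fs):.2f}")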

  17. Using EEG to Discriminate Cognitive Workload and Performance Based on Neural Activation and Connectivity

    Science.gov (United States)

    2016-05-31

    in memory. Cognitive efficacy is assessed based on accuracy in recalling digits from memory. A Gaussian classifier is used to discriminate cognitive ... high) and 2) cognitive performance (correct recall, incorrect recall). Our paper is organized as follows. In Section 2, the auditory working memory ... data collection is described, which uses a novel cognitive load protocol that taxes auditory working memory by eliciting recall of sentences and

  18. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
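    To make the cross-correlation index concrete, the sketch below (a simplified stand-in rather than the authors' actual analysis; the simulated pacing sequence and noise level are arbitrary) correlates a tapper's inter-tap intervals with the pacing sequence's inter-onset intervals at lags -1, 0, and +1; in analyses of this kind, prediction is typically indicated by a high lag-0 correlation relative to the lag +1 (tracking) correlation:

        import numpy as np

        def lagged_interval_correlation(tap_times, pacing_times, lag):
            """Pearson correlation between inter-tap intervals and pacing inter-onset
            intervals shifted by `lag` events (lag 0 = concurrent intervals)."""
            iti = np.diff(tap_times)      # inter-tap intervals
            ioi = np.diff(pacing_times)   # pacing inter-onset intervals
            if lag > 0:
                iti, ioi = iti[lag:], ioi[:-lag]
            elif lag < 0:
                iti, ioi = iti[:lag], ioi[-lag:]
            return np.corrcoef(iti, ioi)[0, 1]

        # Hypothetical accelerating pacing sequence and near-synchronous taps.
        rng = np.random.default_rng(1)
        ioi = np.linspace(0.60, 0.45, 40)                    # seconds per beat, speeding up
        pacing_times = np.concatenate([[0.0], np.cumsum(ioi)])
        tap_times = pacing_times + rng.normal(0, 0.01, pacing_times.size)

        for lag in (-1, 0, 1):
            r = lagged_interval_correlation(tap_times, pacing_times, lag)
            print(f"lag {lag:+d}: r = {r:.2f}")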

  19. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging.

    Science.gov (United States)

    Hugdahl, Kenneth

    2017-12-01

    In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in the emotional

  20. Auditory Hallucinations as Translational Psychiatry: Evidence from Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2017-12-01

    Full Text Available In this invited review article, I present a translational perspective and overview of our research on auditory hallucinations in schizophrenia at the University of Bergen, Norway, with a focus on the neuronal mechanisms underlying the phenomenology of experiencing "hearing voices". An auditory verbal hallucination (i.e. hearing a voice) is defined as a sensory experience in the absence of a corresponding external sensory source that could explain the phenomenological experience. I suggest a general frame or scheme for the study of auditory verbal hallucinations, called Levels of Explanation. Using a Levels of Explanation approach, mental phenomena can be described and explained at different levels (cultural, clinical, cognitive, brain-imaging, cellular and molecular). Another way of saying this is that, to advance knowledge in a research field, it is not only necessary to replicate findings, but also to show how evidence obtained with one method, and at one level of explanation, converges with evidence obtained with another method at another level. To achieve breakthroughs in our understanding of auditory verbal hallucinations, we have to advance vertically through the various levels, rather than the more common approach of staying at our favourite level and advancing horizontally (e.g., more advanced techniques and data acquisition analyses). The horizontal expansion will, however, not advance a deeper understanding of how an auditory verbal hallucination spontaneously starts and stops. Finally, I present data from the clinical, cognitive, brain-imaging, and cellular levels, where data from one level validate and support data at another level, called converging of evidence. Using a translational approach, the current status of auditory verbal hallucinations is that they implicate speech perception areas in the left temporal lobe, impairing perception of and attention to external sounds. Preliminary results also show that amygdala is implicated in

  1. Relations Between the Intelligibility of Speech in Noise and Psychophysical Measures of Hearing Measured in Four Languages Using the Auditory Profile Test Battery

    NARCIS (Netherlands)

    van Esch, T. E. M.; Dreschler, W. A.

    2015-01-01

    The aim of the present study was to determine the relations between the intelligibility of speech in noise and measures of auditory resolution, loudness recruitment, and cognitive function. The analyses were based on data published earlier as part of the presentation of the Auditory Profile, a test

  2. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    Science.gov (United States)

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  3. Changes in otoacoustic emissions during selective auditory and visual attention.

    Science.gov (United States)

    Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis

    2015-05-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing-the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.

  4. Auditory cortical volumes and musical ability in Williams syndrome.

    Science.gov (United States)

    Martens, Marilee A; Reutens, David C; Wilson, Sarah J

    2010-07-01

    Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  5. Changes in otoacoustic emissions during selective auditory and visual attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2015-01-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703

  6. Auditory analysis for speech recognition based on physiological models

    Science.gov (United States)

    Jeon, Woojay; Juang, Biing-Hwang

    2004-05-01

    To address the limitations of traditional cepstrum or LPC based front-end processing methods for automatic speech recognition, more elaborate methods based on physiological models of the human auditory system may be used to achieve more robust speech recognition in adverse environments. For this purpose, a modified version of a model of the primary auditory cortex featuring a three dimensional mapping of auditory spectra [Wang and Shamma, IEEE Trans. Speech Audio Process. 3, 382-395 (1995)] is adopted and investigated for its use as an improved front-end processing method. The study is conducted in two ways: first, by relating the model's redundant representation to traditional spectral representations and showing that the former not only encompasses information provided by the latter, but also reveals more relevant information that makes it superior in describing the identifying features of speech signals; and second, by observing the statistical features of the representation for various classes of sound to show how different identifying features manifest themselves as specific patterns on the cortical map, thereby becoming a place-coded data set on which detection theory could be applied to simulate auditory perception and cognition.

  7. What works in auditory working memory? A neural oscillations perspective.

    Science.gov (United States)

    Wilsch, Anna; Obleser, Jonas

    2016-06-01

    Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. Especially oscillations in the alpha range (8-13Hz) are a sensitive marker for memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: Employing degraded stimulus acoustics increases memory load and, at the same time, allows assessing the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance, by reducing representational noise. We end on discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  8. Binaural auditory beats affect long-term memory.

    Science.gov (United States)

    Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M

    2017-12-08

    The presentation of two pure tones to each ear separately with a slight difference in their frequency results in the perception of a single tone that fluctuates in amplitude at a frequency that equals the difference of interaural frequencies. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognition functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both the free recall and recognition tasks after being exposed to binaural auditory beats, either in the beta (20 Hz) or theta (5 Hz) frequency bands and white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, while theta-frequency binaural-beat presentation lessened the number of correctly remembered words and the sensitivity index. On the other hand, we could not find differences in the conditional probability for recall given recognition between beta and theta frequencies and white noise, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
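    As an illustration of the stimulus construction described above (the carrier frequency, amplitude, and duration below are arbitrary choices for the sake of a runnable sketch, not the study's parameters), a beta-band binaural-beat stimulus can be built by sending slightly mismatched pure tones to the two ears:

        import numpy as np

        def binaural_beat(carrier_hz, beat_hz, duration_s, fs=44100, amp=0.3):
            """Stereo array: the left channel carries `carrier_hz`, the right channel
            `carrier_hz + beat_hz`; the perceived beat fluctuates at `beat_hz`."""
            t = np.arange(int(duration_s * fs)) / fs
            left = amp * np.sin(2 * np.pi * carrier_hz * t)
            right = amp * np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
            return np.column_stack([left, right])

        # Hypothetical 20 Hz (beta-band) beat on a 240 Hz carrier; a theta condition
        # would instead use beat_hz=5. Each ear must receive its own channel (headphones).
        stimulus = binaural_beat(carrier_hz=240, beat_hz=20, duration_s=10)
        print(stimulus.shape)  # (441000, 2): samples x channels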

  9. Cognitive Abilities Relate to Self-Reported Hearing Disability

    Science.gov (United States)

    Zekveld, Adriana A.; George, Erwin L. J.; Houtgast, Tammo; Kramer, Sophia E.

    2013-01-01

    Purpose: In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Method: Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and…

  10. Cognitive abilities relate to self-reported hearing disability

    NARCIS (Netherlands)

    Zekveld, A.A.; George, E.L.J.; Houtgast, T.; Kramer, S.E.

    2013-01-01

    Purpose: In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Method: Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH;

  11. Effect of teenage motherhood on cognitive outcomes in children: a population-based cohort study.

    Science.gov (United States)

    Morinis, Julia; Carson, Claire; Quigley, Maria A

    2013-12-01

    To examine the association between teenage motherhood and cognitive development at 5 years. Data were drawn from the Millennium Cohort Study, a prospective, nationally representative UK cohort of 18 818 infants born between 2000 and 2001. In total, 12 021 (64%) mother-child pairs from white, English-speaking, singleton pregnancies were included. Cognitive ability at 5 years was measured with the British Ability Scales II. Differences in mean cognitive scores across maternal age groups were estimated using linear regression, with adjustment for potential confounders and mediators. 617 (5%) children were born to mothers aged ≤18 years. Our analysis revealed that children of teenage mothers had significantly lower cognitive scores than children of mothers aged 25-34 years: difference in mean score for verbal ability -8.9 (-10.88 to -6.86, p<0.001), non-verbal ability -7.8 (-10.52 to -5.19, p<0.001), and spatial ability -4.7 (-6.39 to -3.07, p<0.001), equivalent to average delays of 11, 7 and 4 months, respectively. After adjustment for perinatal and sociodemographic factors, the effect of young maternal age on non-verbal and spatial ability mean scores was attenuated. A difference persisted in mean verbal ability scores, -3.8 (-6.34 to -1.34, p=0.003), equivalent to an average delay of 5 months. These results suggest that the differences observed in the initial analyses for non-verbal and spatial skills are almost entirely explained by marked inequalities in sociodemographic circumstances and perinatal risk. However, there remains a significant adverse effect on verbal abilities in children born to teenage mothers.

  12. Association of blood antioxidants status with visual and auditory sustained attention.

    Science.gov (United States)

    Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham

    2015-01-01

    A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, a better blood antioxidant status might be expected to be related to sustained attention. However, the relationship between blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruit and vegetable intake and blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated with the Integrated Visual and Auditory (IVA) Continuous Performance Test. A 24-hour food recall questionnaire was used to estimate fruit and vegetable intake. Serum total antioxidant capacity (TAC) and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruit and vegetable intake was associated with better visual and auditory sustained attention; blood antioxidant measures were likewise associated with visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake. In conclusion, better visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through an appropriate dietary intake can possibly enhance sustained attention.

  13. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
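
    For readers unfamiliar with the sensitivity index d' used above, a minimal sketch of its computation from hit and false-alarm counts follows; the log-linear correction and the example counts are illustrative assumptions, not values from the study.

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
        to keep both proportions strictly between 0 and 1."""
        z = NormalDist().inv_cdf
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return z(hit_rate) - z(fa_rate)

    # Hypothetical counts for an audiovisual versus a visual-only condition.
    print(d_prime(42, 8, 6, 44), d_prime(35, 15, 10, 40))
    ```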

  14. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.

  15. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Formisano, E; Pepino, A; Bracale, M [Department of Electronic Engineering, Biomedical Unit, Università di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy)]; Di Salle, F [Department of Biomorphological and Functional Sciences, Radiological Unit, Università di Napoli Federico II, Via Claudio 21, 80125 Napoli (Italy)]; Lanfermann, H; Zanella, F E [Department of Neuroradiology, J.W. Goethe Universität, Frankfurt/M. (Germany)]

    1999-12-31

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors) 17 refs., 4 figs.

  16. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Formisano, E.; Pepino, A.; Bracale, M.; Di Salle, F.; Lanfermann, H.; Zanella, F.E.

    1998-01-01

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  17. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    Science.gov (United States)

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, with preserved ability to identify nonverbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury; here we describe such a case. The patient was able to clearly distinguish between language and nonverbal sounds, and he did not have any difficulty in identifying environmental sounds. However, he did not follow oral commands and could not repeat or write words to dictation. On the other hand, he had fluent and comprehensible speech, and was able to read and understand written words and sentences. The diagnosis was verbal auditory agnosia. He received speech therapy and cognitive rehabilitation during his hospitalization, and he practiced understanding spoken language with written sentences provided alongside. Two months after hospitalization, he regained the ability to understand some spoken words. Six months after hospitalization, his comprehension of spoken language had improved to an understandable level when the speaker spoke slowly in front of him, but it remained at the word level rather than the sentence level. This case teaches that the evaluation of auditory function, as well as of cognition and language, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  18. Cognitive evoked potentials and central auditory processing in children with reading and writing disorders

    Directory of Open Access Journals (Sweden)

    Gislaine Richter Minhoto Wiemes

    2012-06-01

    Full Text Available Learning disorders are often magnified by auditory processing disorders (APD). OBJECTIVE: This paper aims to verify whether individuals with reading and writing disorders and P300 latencies above the average also present altered Staggered Spondaic Word (SSW) and speech-in-noise test results suggestive of APD. MATERIALS AND METHODS: This is a cross-sectional cohort study. Twenty-one individuals with reading and writing disorders aged between 7 and 14 years were enrolled. RESULTS: All subjects had normal findings on ENT examination, audiological tests, and brainstem auditory evoked potentials. The average P300 latency across all patients (334.25 ms) was used as a cutoff to divide the sample into two groups: group "A", with mean latency above 335 ms, and group "B", with latency below 335 ms. The SSW and speech-in-noise tests were then administered to the individuals in group "A". CONCLUSION: Alterations were found in the dichotic speech (SSW) and speech-in-noise tests in the group of individuals with reading and writing disorders and P300 latencies above 335 ms, suggesting APD.

  19. Review: Optimising cognitive load and usability to improve the ...

    African Journals Online (AJOL)

    Cognitive load theory views learning as involving active processing of information by working memory via separate visual and auditory channels. This system is of ... The fields of cognitive load theory and human-computer interaction share a common goal in striving to reduce extraneous cognitive load. The load induced by ...

  20. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after a cortical stroke. All of these occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  1. Doubtful association of antipsychotic polypharmacy and high dosage with cognition in chronic schizophrenia.

    Science.gov (United States)

    Kontis, Dimitrios; Theochari, Eirini; Kleisas, Spyridon; Kalogerakou, Stamatina; Andreopoulou, Angeliki; Psaras, Rafael; Makris, Yannis; Karouzos, Charalambos; Tsaltas, Eleftheria

    2010-10-01

    Despite consistent recommendations for antipsychotic monotherapy, antipsychotic polypharmacy (the use of two or more antipsychotic agents) and the administration of excessive doses (higher than 1000 mg/day of chlorpromazine equivalents) are common practice in schizophrenia. The therapeutic and adverse effects of this practice are poorly studied, in particular with regard to the cognitive symptoms of the disease. In this cross-sectional study we investigated the cognitive effects of antipsychotic polypharmacy and excessive doses in 53 patients with chronic schizophrenia using non-verbal cognitive tasks involving speed of movement, memory and executive functions. No significant difference in performance scores was found between the groups under polypharmacy and monotherapy, or between the groups receiving either excessive or normal doses of antipsychotics. Since these groups also did not differ in demographic, clinical, or other pharmacologic parameters, in the relative anticholinergic potency of antipsychotics, or in intelligence scores, we raise doubts about the association of polypharmacy and excessive doses with non-verbal cognitive performance in chronic schizophrenia. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that, during learning, even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed on a perceptual item, i.e. from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  3. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    Science.gov (United States)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
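
    As an illustration of one of the coupling measures named above, the sketch below computes a phase-locking value between two band-limited signals; the single-trial, over-time formulation, the filter settings, and the synthetic data are simplifying assumptions made for brevity (the study itself compares coupling between pre-stimulus baseline and response windows across trials).

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_locking_value(x, y, sr, band):
        """|mean(exp(i*(phase_x - phase_y)))| within a frequency band, between 0 and 1."""
        b, a = butter(4, np.array(band) / (sr / 2.0), btype="band")
        phase_x = np.angle(hilbert(filtfilt(b, a, x)))
        phase_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

    sr = 500  # Hz, assumed sampling rate
    rng = np.random.default_rng(1)
    x, y = rng.standard_normal(2 * sr), rng.standard_normal(2 * sr)
    print(phase_locking_value(x, y, sr, band=(4.0, 8.0)))  # theta band
    ```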

  4. Auditory sensory memory in 2-year-old children: an event-related potential study.

    Science.gov (United States)

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-03-26

    Auditory sensory memory is assumed to play an important role in cognitive development, but little is known about it in young children. The aim of this study was to estimate the duration of auditory sensory memory in 2-year-old children. We recorded the mismatch negativity in response to tone stimuli presented with different interstimulus intervals. Our findings suggest that in 2-year-old children the memory representation of the standard tone remains in the sensory memory store for at least 1 s but for less than 2 s. Recording the mismatch negativity with stimuli presented at various interstimulus intervals seems to be a useful method for studying the relationship between auditory sensory memory and normal and disturbed cognitive development.

  5. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  6. Children's auditory working memory performance in degraded listening conditions.

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R

    2014-08-01

    The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It was hypothesized that stressing the working memory system with the presence of noise would impede working memory processes in real time and result in poorer working memory performance in degraded conditions. Twenty children with typical hearing between 8 and 10 years old were tested using 4 auditory working memory tasks (Forward Digit Recall, Backward Digit Recall, Listening Recall Primary, and Listening Recall Secondary). Stimuli were from the standardized Working Memory Test Battery for Children. Each task was administered in quiet and in 4-talker babble noise at 0 dB and -5 dB signal-to-noise ratios. Children's auditory working memory performance was systematically decreased in the presence of multitalker babble noise compared with quiet. Differences between low-complexity and high-complexity tasks were observed, with children performing more poorly on tasks with greater storage and processing demands. There was no interaction between noise and complexity of task. All tasks were negatively impacted similarly by the addition of noise. Auditory working memory performance was negatively impacted by the presence of multitalker babble noise. Regardless of complexity of task, noise had a similar effect on performance. These findings suggest that the addition of noise inhibits auditory working memory processes in real time for school-age children.
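
    A minimal sketch of how a target signal can be mixed with multitalker babble at a prescribed signal-to-noise ratio, as in the 0 dB and -5 dB conditions described above; the synthetic waveforms and power-based scaling are illustrative assumptions rather than the study's actual stimulus pipeline.

    ```python
    import numpy as np

    def mix_at_snr(speech, babble, snr_db):
        """Scale the babble so that speech power / babble power equals snr_db, then add."""
        babble = babble[: len(speech)]
        p_speech = np.mean(speech ** 2)
        p_babble = np.mean(babble ** 2)
        gain = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10.0)))
        return speech + gain * babble

    rng = np.random.default_rng(2)
    speech = rng.standard_normal(16000)   # placeholder 1-s "speech" at 16 kHz
    babble = rng.standard_normal(16000)   # placeholder 4-talker babble
    mixed_0db = mix_at_snr(speech, babble, 0.0)
    mixed_m5db = mix_at_snr(speech, babble, -5.0)
    ```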

  7. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  8. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  9. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    Science.gov (United States)

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  11. Bird brains and songs : Neural mechanisms of auditory memory and perception in zebra finches

    NARCIS (Netherlands)

    Gobes, S.M.H.|info:eu-repo/dai/nl/304832669

    2009-01-01

    Songbirds, such as zebra finches, learn their songs from a ‘tutor’ (usually the father), early in life. There are strong parallels between the behavioural, cognitive and neural processes that underlie vocal learning in humans and songbirds. In both cases there is a sensitive period for auditory

  12. Contributions from eye movement potentials to stimulus preceding negativity during anticipation of auditory stimulation

    DEFF Research Database (Denmark)

    Engdahl, Lis; Bjerre, Vicky K; Christoffersen, Gert R J

    2007-01-01

    Cognitive anticipation of a stimulus has been associated with an ERP called "stimulus preceding negativity" (SPN). A new auditory delay task without stimulus-related motor activity demonstrated a prefrontal SPN, present during attentive anticipation of sounds with closed eyes, but absent during d...

  13. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  14. Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome

    Science.gov (United States)

    Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.

    2010-01-01

    Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…

  15. Common genetic influences on intelligence and auditory simple reaction time in a large Swedish sample

    NARCIS (Netherlands)

    Madison, G.; Mosing, M.A.; Verweij, K.J.H.; Pedersen, N.L.; Ullén, F.

    2016-01-01

    Intelligence and cognitive ability have long been associated with chronometric performance measures, such as reaction time (RT), but few studies have investigated auditory RT in this context. The nature of this relationship is important for understanding the etiology and structure of intelligence.

  16. Attentional Capture by Deviant Sounds: A Noncontingent Form of Auditory Distraction?

    Science.gov (United States)

    Vachon, François; Labonté, Katherine; Marsh, John E.

    2017-01-01

    The occurrence of an unexpected, infrequent sound in an otherwise homogeneous auditory background tends to disrupt the ongoing cognitive task. This "deviation effect" is typically explained in terms of attentional capture whereby the deviant sound draws attention away from the focal activity, regardless of the nature of this activity.…

  17. Subjective Loudness and Reality of Auditory Verbal Hallucinations and Activation of the Inner Speech Processing Network

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, Andre

    Background: One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can

  18. Children's Auditory Working Memory Performance in Degraded Listening Conditions

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R.

    2014-01-01

    Purpose: The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It…

  19. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  20. Delayed Auditory Feedback and Movement

    Science.gov (United States)

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  1. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites of interest were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutation carriers, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss and none of the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  2. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  3. Estudo longitudinal da atenção compartilhada em crianças autistas não-verbais Longitudinal study of joint attention in non-verbal autistic children

    Directory of Open Access Journals (Sweden)

    Leila Sandra Damião Farah

    2009-12-01

    PURPOSE: to identify and characterize joint attention abilities of non-verbal autistic children through the observation of communicative behaviors. METHODS: the study involved 5 boys, aged between 5.9 and 8.6 years, diagnosed with Autistic Disorder (DSM-IV, 2002), recorded on two occasions with a four-month interval. During this period, the children underwent language therapy based on joint attention stimulation. Each recording lasted 15 minutes and involved one child, or a group of 2-3 children, with the therapist in non-directed and semi-directed interaction situations at the school they attended. Behaviors related to joint attention abilities were observed and registered. The materials used were percussion instruments. Data were analyzed in relation to time, interaction and interlocutor. RESULTS: gaze behavior showed the greatest growth in each subject. Data analysis revealed qualitative trends toward improvement of joint attention abilities, of clear clinical relevance, although without statistical significance. Each subject showed individual characteristics and evolution of communicative behaviors related to joint attention. After the period of language therapy intervention, a quantitative increase in these behaviors was observed in all 5 subjects, specifically in child-therapist interaction. CONCLUSIONS: gaze behavior is an important step for the development of other joint attention behaviors. The adult-child interaction situation facilitates the appearance of communication and sharing behaviors. Language therapy focusing on joint attention abilities seems to contribute positively to the communication development of autistic children.

  4. Sleep for cognitive enhancement

    Directory of Open Access Journals (Sweden)

    Susanne eDiekelmann

    2014-04-01

    Full Text Available Sleep is essential for effective cognitive functioning. Losing even a few hours of sleep can have detrimental effects on a wide variety of cognitive processes such as attention, language, reasoning, decision making, learning and memory. While sleep is necessary to ensure normal healthy cognitive functioning, it can also enhance performance beyond the boundaries of the normal condition. This article discusses the enhancing potential of sleep, mainly focusing on the domain of learning and memory. Sleep is known to facilitate the consolidation of memories learned before sleep as well as the acquisition of new memories to be learned after sleep. According to a widely held model, this beneficial effect of sleep relies on the neuronal reactivation of memories during sleep that is associated with sleep-specific brain oscillations (slow oscillations, spindles, ripples) as well as a characteristic neurotransmitter milieu. Recent research indicates that memory processing during sleep can be boosted by (i) cueing memory reactivation during sleep, (ii) stimulating sleep-specific brain oscillations, and (iii) targeting specific neurotransmitter systems pharmacologically. Olfactory and auditory cues can be used, for example, to increase reactivation of associated memories during post-learning sleep. Intensifying neocortical slow oscillations (the hallmark of slow wave sleep) by electrical or auditory stimulation and modulating specific neurotransmitters such as noradrenaline and glutamate likewise facilitates memory processing during sleep. With this evidence in mind, this article concludes by discussing different methodological caveats and ethical issues that should be considered when thinking about using sleep for cognitive enhancement in everyday applications.

  5. Cognitive and contextual variables in sexual partner and relationship perception.

    Science.gov (United States)

    Alvarez, Maria-João; Garcia-Marques, Leonel

    2011-04-01

    This study examined the effects of contextual and cognitive variables for sexual protection on perceived social relationship factors. University students (108 women and 108 men) read script-based narratives on sexual encounters in which six variables were manipulated in two independent analyses. In the first analysis, four variables were evaluated: relational context (stable, casual), condom use (yes, no), script terminus (beginning, middle or end), and the rater's sex. The dependent variables were interpersonal perception of one of the characters of the narrative, and expectations regarding characteristics and future of the relationship. In the second analysis, two other factors were manipulated only in the "yes" condom conditions: communication strategy (verbal, non-verbal) and condom proponent gender. Our findings corroborated other studies where condom use was viewed as unromantic with less positive characteristics for relationships. Condom proponents, especially male, were perceived as less romantic, particularly when proposing a condom non-verbally at the beginning of the encounter. However, the controlled variables enabled us to propose ways of associating condom use with positive expectations towards the proponent and the relationship itself. Romanticism, expectation of sexual intercourse, emotional proximity, and expectations of condom use in encounters where a condom was proposed increased when suggested by a woman, postponed to the end of the encounter, and verbally mentioned. We encourage women to take the lead in suggesting condom use, thus empowering them since they do not have to wait for the male to make the first move.

  6. Social interaction with a tutor modulates responsiveness of specific auditory neurons in juvenile zebra finches.

    Science.gov (United States)

    Yanagihara, Shin; Yazaki-Sugiyama, Yoko

    2018-04-12

    Behavioral states of animals, such as observing the behavior of a conspecific, modify signal perception and/or sensations that influence state-dependent higher cognitive behavior, such as learning. Recent studies have shown that neuronal responsiveness to sensory signals is modified when animals are engaged in social interactions with others or in locomotor activities. However, how these changes produce state-dependent differences in higher cognitive function is still largely unknown. Zebra finches, which have served as the premier songbird model, learn to sing from early auditory experiences with tutors. They also learn from playback of recorded songs; however, learning can be greatly improved when song models are provided through social communication with tutors (Eales, 1989; Chen et al., 2016). Recently, we found a subset of neurons in the higher-level auditory cortex of juvenile zebra finches that exhibit highly selective auditory responses to the tutor song after song learning, suggesting an auditory memory trace of the tutor song (Yanagihara and Yazaki-Sugiyama, 2016). Here we show that auditory responses of these selective neurons became greater when juveniles were paired with their tutors, while responses of non-selective neurons did not change. These results suggest that social interaction modulates cortical activity and might function in state-dependent song learning. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. (A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music

    Directory of Open Access Journals (Sweden)

    Miriam eLense

    2013-08-01

    Full Text Available Williams syndrome (WS), a genetic neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and typically developing individuals with and without amusia.

  8. Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.

    Science.gov (United States)

    Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine

    2014-01-01

    To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  9. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  10. Detection of stimulus deviance within primate primary auditory cortex: intracortical mechanisms of mismatch negativity (MMN) generation.

    Science.gov (United States)

    Javitt, D C; Steinschneider, M; Schroeder, C E; Vaughan, H G; Arezzo, J C

    1994-12-26

    Mismatch negativity (MMN) is a cognitive, auditory event-related potential (AEP) that reflects preattentive detection of stimulus deviance and indexes the operation of the auditory sensory ('echoic') memory system. MMN is elicited most commonly in an auditory oddball paradigm in which a sequence of repetitive standard stimuli is interrupted infrequently and unexpectedly by a physically deviant 'oddball' stimulus. Electro- and magnetoencephalographic dipole mapping studies have localized the generators of MMN to supratemporal auditory cortex in the vicinity of Heschl's gyrus, but have not determined the degree to which MMN reflects activation within primary auditory cortex (AI) itself. The present study, using moveable multichannel electrodes inserted acutely into the superior temporal plane, demonstrates a significant contribution of AI to scalp-recorded MMN in the monkey, as reflected by greater response of AI to loud or soft clicks presented as deviants than to the same stimuli presented as repetitive standards. The MMN-like activity was localized primarily to supragranular laminae within AI. Thus, standard and deviant stimuli elicited similar degrees of initial, thalamocortical excitation. In contrast, responses within supragranular cortex were significantly larger to deviant stimuli than to standards. No MMN-like activity was detected in a limited number of passes that penetrated anterior and medial to AI. AI plays a well established role in the decoding of the acoustic properties of individual stimuli. The present study demonstrates that primary auditory cortex also plays an important role in processing the relationships between stimuli, and thus participates in cognitive, as well as purely sensory, processing of auditory information.
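
    In scalp recordings, MMN is conventionally visualised as the deviant-minus-standard difference wave; the sketch below shows that computation, with the array shapes and the peak-search window as illustrative assumptions rather than details of the intracortical study above.

    ```python
    import numpy as np

    def mmn_difference_wave(standard_trials, deviant_trials):
        """Average deviant ERP minus average standard ERP.
        Inputs have shape (n_trials, n_samples) for one electrode."""
        return deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

    def mmn_peak(diff_wave, sr, window_s=(0.1, 0.25)):
        """Most negative value of the difference wave within a post-stimulus window."""
        start, stop = (int(t * sr) for t in window_s)
        return diff_wave[start:stop].min()
    ```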

  11. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    Science.gov (United States)

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  12. Effect of high-dose simvastatin on cognitive, neuropsychiatric, and health-related quality-of-life measures in secondary progressive multiple sclerosis: secondary analyses from the MS-STAT randomised, placebo-controlled trial.

    Science.gov (United States)

    Chan, Dennis; Binks, Sophie; Nicholas, Jennifer M; Frost, Chris; Cardoso, M Jorge; Ourselin, Sebastien; Wilkie, David; Nicholas, Richard; Chataway, Jeremy

    2017-08-01

    In the 24-month MS-STAT phase 2 trial, we showed that high-dose simvastatin significantly reduced the annualised rate of whole brain atrophy in patients with secondary progressive multiple sclerosis (SPMS). We now describe the results of the MS-STAT cognitive substudy, in which we investigated the treatment effect on cognitive, neuropsychiatric, and health-related quality-of-life (HRQoL) outcome measures. We did a secondary analysis of MS-STAT, a 24-month, double-blind, controlled trial of patients with SPMS done at three neuroscience centres in the UK between Jan 28, 2008, and Nov 4, 2011. Patients were randomly assigned (1:1) to either 80 mg simvastatin (n=70) or placebo (n=70). The cognitive assessments done were the National Adult Reading Test, Wechsler Abbreviated Scale of Intelligence, Graded Naming Test, Birt Memory and Information Processing Battery (BMIPB), Visual Object and Space Perception battery (cube analysis), Frontal Assessment Battery (FAB), and Paced Auditory Serial Addition Test. Neuropsychiatric status was assessed using the Hamilton Depression Rating Scale and the Neuropsychiatric Inventory Questionnaire. HRQoL was assessed using the self-reported 36-Item Short Form Survey (SF-36) version 2. Assessments were done at study entry, 12 months, and 24 months. Patients, treating physicians, and outcome assessors were masked to treatment allocation. Analyses were by intention to treat. MS-STAT is registered with ClinicalTrials.gov, number NCT00647348. Baseline assessment revealed impairments in 60 (45%) of 133 patients on the test of frontal lobe function (FAB), and in between 13 (10%) and 43 (33%) of 130 patients in tests of non-verbal and verbal memory (BMIPB). Over the entire trial, we noted significant worsening on tests of verbal memory (T score decline of 5·7 points, 95% CI 3·6-7·8; pmultiple sclerosis treatment trials. The Moulton Foundation, the Berkeley Foundation, the Multiple Sclerosis Trials Collaboration, the Rosetrees Trust, a

  13. The influence of an auditory-memory attention-demanding task on postural control in blind persons.

    Science.gov (United States)

    Melzer, Itshak; Damry, Elad; Landau, Anat; Yagev, Ronit

    2011-05-01

    In order to evaluate the effect of an auditory-memory attention-demanding task on balance control, nine blind adults were compared to nine age- and gender-matched sighted controls. This issue is particularly relevant for the blind population, in which functional assessment of postural control has to be revealed through "real life" motor and cognitive function. The study aimed to explore whether an auditory-memory attention-demanding cognitive task would influence postural control in blind persons and to compare this with blindfolded sighted persons. Subjects were instructed to minimize body sway during narrow-base upright standing on a single force platform under two conditions: 1) standing still (single task); 2) as in 1) while performing an auditory-memory attention-demanding cognitive task (dual task). Subjects in both groups were required to stand blindfolded with their eyes closed. Center of Pressure displacement data were collected and analyzed using summary statistics and stabilogram-diffusion analysis. Blind and sighted subjects had similar postural sway in the eyes-closed condition. However, in the dual task compared to the single task, sighted subjects showed a significant decrease in postural sway while blind subjects did not. The auditory-memory attention-demanding cognitive task had no interference effect on balance control in blind subjects. It seems that sighted individuals used auditory cues to compensate for momentary loss of vision, whereas blind subjects did not. This may suggest that blind and sighted people use different sensorimotor strategies to achieve stability. Copyright © 2010 Elsevier Ltd. All rights reserved.
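
    Stabilogram-diffusion analysis, mentioned above, characterises centre-of-pressure (COP) sway by how the mean square displacement grows with the time interval between samples. The sketch below is a minimal illustration on synthetic data; the sampling rate and function name are assumptions, not details taken from the study.

        # Minimal sketch of stabilogram-diffusion analysis on centre-of-pressure
        # (COP) data: mean square planar displacement as a function of time lag.
        import numpy as np

        def stabilogram_diffusion(cop_x, cop_y, fs, max_lag_s=10.0):
            """Return time lags (s) and mean square COP displacement."""
            max_lag = int(max_lag_s * fs)
            lags = np.arange(1, max_lag + 1)
            msd = np.empty(lags.size)
            for i, lag in enumerate(lags):
                dx = cop_x[lag:] - cop_x[:-lag]
                dy = cop_y[lag:] - cop_y[:-lag]
                msd[i] = np.mean(dx**2 + dy**2)
            return lags / fs, msd

        # Example with synthetic random-walk-like sway, 100 Hz for 30 s
        fs = 100
        rng = np.random.default_rng(1)
        cop_x = np.cumsum(rng.normal(0, 0.1, 30 * fs))
        cop_y = np.cumsum(rng.normal(0, 0.1, 30 * fs))
        lags_s, msd = stabilogram_diffusion(cop_x, cop_y, fs)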

  14. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Full Text Available Various studies have highlighted plasticity of the auditory system from visual stimuli, limiting the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. The adaptation process to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of subjects presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.

  15. Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence.

    Directory of Open Access Journals (Sweden)

    Helen Henshaw

    Full Text Available BACKGROUND: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. OBJECTIVE: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. METHODS: A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. RESULTS: Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 article) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. CONCLUSIONS: Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss.

  16. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  17. Superior pre-attentive auditory processing in musicians.

    Science.gov (United States)

    Koelsch, S; Schröger, E; Tervaniemi, M

    1999-04-26

    The present study focuses on influences of long-term experience on auditory processing, providing the first evidence for pre-attentively superior auditory processing in musicians. This was revealed by the brain's automatic change-detection response, which is reflected electrically as the mismatch negativity (MMN) and generated by the operation of sensory (echoic) memory, the earliest cognitive memory system. Major chords and single tones were presented to both professional violinists and non-musicians under ignore and attend conditions. Slightly impure chords, presented among perfect major chords, elicited a distinct MMN in professional musicians, but not in non-musicians. This demonstrates that, compared to non-musicians, musicians are superior in pre-attentively extracting more information out of musically relevant stimuli. Since effects of long-term experience on pre-attentive auditory processing have so far been reported for language-specific phonemes only, the results indicate that sensory memory mechanisms can be modulated by training on a more general level.

  18. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    Science.gov (United States)

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.

  19. Dual Gamma Rhythm Generators Control Interlaminar Synchrony in Auditory Cortex

    Science.gov (United States)

    Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O.; Roopun, Anita K.; Traub, Roger D.; Kopell, Nancy J.; Whittington, Miles A.

    2013-01-01

    Rhythmic activity in populations of cortical neurons accompanies, and may underlie, many aspects of primary sensory processing and short-term memory. Activity in the gamma band (30 Hz up to > 100 Hz) is associated with such cognitive tasks and is thought to provide a substrate for temporal coupling of spatially separate regions of the brain. However, such coupling requires close matching of frequencies in co-active areas, and because the nominal gamma band is so spectrally broad, it may not constitute a single underlying process. Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex. A persistent, 30 – 45 Hz, gap-junction-dependent gamma rhythm dominates rhythmic activity in supragranular layers 2/3, whereas a tonic depolarization-dependent, 50 – 80 Hz, pyramidal/interneuron gamma rhythm is expressed in granular layer 4 with strong glutamatergic excitation. As a consequence, altering the degree of excitation of the auditory cortex causes bifurcation in the gamma frequency spectrum and can effectively switch temporal control of layer 5 from supragranular to granular layers. Computational modeling predicts the pattern of interlaminar connections may help to stabilize this bifurcation. The data suggest that different strategies are used by primary auditory cortex to represent weak and strong inputs, with principal cell firing rate becoming increasingly important as excitation strength increases. PMID:22114273

  20. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  1. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. For example, in stimulus-specific adaptation, neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  2. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context-defined ensemble can be efficiently coded as sparse...

  3. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD): 10 individuals with central auditory processing disorders, and Stuttering Group (SG): 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), assessment of stuttering severity, and assessment of central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter thus differed between the two groups: fluency improved only in the individuals without auditory processing disorder.
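
    The 100-millisecond feedback delay itself is conceptually simple: outgoing audio is read from a buffer that lags the incoming signal by a fixed number of samples. The sketch below illustrates the principle with a ring buffer; it is not the Phono Tools implementation used in the study, and all names and values are hypothetical.

        # Sketch of a fixed 100 ms auditory feedback delay using a simple
        # ring buffer, processing audio in blocks. This only illustrates the
        # principle; it is not the software used in the study.
        import numpy as np

        class DelayLine:
            def __init__(self, delay_ms, fs):
                self.buffer = np.zeros(int(round(delay_ms / 1000.0 * fs)))
                self.pos = 0

            def process(self, block):
                out = np.empty_like(block)
                for i, sample in enumerate(block):
                    out[i] = self.buffer[self.pos]        # read the delayed sample
                    self.buffer[self.pos] = sample        # store the current one
                    self.pos = (self.pos + 1) % self.buffer.size
                return out

        fs = 44100
        daf = DelayLine(delay_ms=100, fs=fs)
        mic_block = np.zeros(512)                 # one block of microphone samples
        headphone_block = daf.process(mic_block)  # same audio, 100 ms later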

  4. Auditory processing in patients with temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Juliana Meneguello

    2006-08-01

    and nonverbal sounds. METHOD: eight individuals with temporal lobe epilepsy were assessed, after excluding those with unconfirmed diagnosis or with the focus of discharges not limited to this lobe. The evaluation was carried out through special auditory tests: Sound Localization Test, Duration Pattern Test, Dichotic Digits Test and Non-Verbal Dichotic Test. Their performances were compared to those of individuals without neurological diseases (case-control study). RESULTS: similar performances were observed between patients with temporal lobe epilepsy and the control group regarding the auditory mechanism of sound source direction discrimination. For the other auditory mechanisms assessed, the patients with temporal lobe epilepsy presented worse results. CONCLUSION: individuals with temporal lobe epilepsy had more deficits in auditory processing than those without cortical damage.

  5. Motivation and intelligence drive auditory perceptual learning.

    Science.gov (United States)

    Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R

    2010-03-23

    Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect on frequency discrimination learning of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones. Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.

  6. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the experienced hallucination felt depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  7. Assessment of children with suspected auditory processing disorder: a factor analysis study.

    Science.gov (United States)

    Ahmmed, Ansar U; Ahmmed, Afsara A; Bath, Julie R; Ferguson, Melanie A; Plack, Christopher J; Moore, David R

    2014-01-01

    To identify the factors that may underlie the deficits in children with listening difficulties, despite normal pure-tone audiograms. These children may have auditory processing disorder (APD), but there is no universally agreed consensus as to what constitutes APD. The authors therefore refer to these children as children with suspected APD (susAPD) and aim to clarify the role of attention, cognition, memory, sensorimotor processing speed, speech, and nonspeech auditory processing in susAPD. It was expected that a factor analysis would show how nonauditory and supramodal factors relate to auditory behavioral measures in such children with susAPD. This would facilitate greater understanding of the nature of listening difficulties, thus further helping with characterizing APD and designing multimodal test batteries to diagnose APD. Factor analysis of outcomes from 110 children (68 male, 42 female; aged 6 to 11 years) with susAPD on a widely used clinical test battery (SCAN-C) and a research test battery (MRC Institute of Hearing Research Multi-center Auditory Processing, "IMAP"), both of which have age-based normative data. The IMAP included backward masking, simultaneous masking, frequency discrimination, nonverbal intelligence, working memory, reading, alerting attention and motor reaction times to auditory and visual stimuli. SCAN-C included monaural low-redundancy speech (auditory closure and speech in noise) and dichotic listening tests (competing words and competing sentences) that assess divided auditory attention and hence executive attention. Three factors were extracted: "general auditory processing," "working memory and executive attention," and "processing speed and alerting attention." Frequency discrimination, backward masking, simultaneous masking, and monaural low-redundancy speech tests represented the "general auditory processing" factor. Dichotic listening and the IMAP cognitive tests (apart from nonverbal intelligence) were represented in the "working memory and executive attention" factor.
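
    As a rough illustration of the kind of exploratory factor analysis used above, the sketch below reduces a matrix of standardised test scores to three latent factors and inspects the loading matrix. The data and dimensions are simulated placeholders, not the SCAN-C/IMAP dataset.

        # Sketch of an exploratory factor analysis over a test battery, along
        # the lines of the three-factor solution described above.
        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        scores = rng.normal(size=(110, 12))   # 110 children x 12 test measures

        z_scores = StandardScaler().fit_transform(scores)
        fa = FactorAnalysis(n_components=3, random_state=0)
        fa.fit(z_scores)

        loadings = fa.components_.T           # (measures x factors) loading matrix
        # Measures that load strongly on the same factor are interpreted together,
        # e.g. as "general auditory processing" or "working memory and attention".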

  8. Exploration of auditory P50 gating in schizophrenia by way of difference waves

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2006-01-01

    Electroencephalographic measures of information processing encompass both mid-latency evoked potentials like the pre-attentive auditory P50 potential and a host of later, more cognitive components like P300 and N400. Difference waves have mostly been employed in studies of later event-related potentials, but here this method, along with low-frequency filtering, is applied exploratorily to auditory P50 gating data previously analyzed in the standard format (reported in Am J Psychiatry 2003, 160:2236-8). The exploration was motivated by the observation during visual peak detection that the AEP waveform...

  9. School performance and wellbeing of children with CI in different communicative-educational environments

    NARCIS (Netherlands)

    Langereis, M.C.; Vermeulen, A.M.

    2015-01-01

    OBJECTIVES: This study aimed to evaluate the long term effects of CI on auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. METHODS: The outcomes of 58 children with profound hearing loss and normal non-verbal cognition,

  10. Web-based diagnosis and therapy of auditory prerequisites for reading and spelling

    Directory of Open Access Journals (Sweden)

    Krammer, Sandra

    2006-11-01

    Full Text Available Cognitive deficits in auditory or visual processing or in verbal short-term memory are, among others, risk factors for the development of dyslexia (reading and spelling disability). By early identification and intervention (optimally before school entry), detrimental effects of these cognitive deficits on reading and spelling might be prevented. The goal of the CASPAR project is to develop and evaluate web-based tools for the diagnosis and therapy of cognitive prerequisites for reading and spelling that are appropriate for kindergarten children. In its first approach, CASPAR addresses auditory processing disorders. This article describes a computerized, web-based approach for screening and testing phoneme discrimination and for promoting phoneme discrimination abilities through interactive games in kindergarteners.

  11. Longitudinal associations of social cognition and substance use in childhood and early adolescence: findings from the Avon Longitudinal Study of Parents and Children.

    Science.gov (United States)

    Fluharty, Meg E; Heron, Jon; Munafò, Marcus R

    2018-06-01

    Substance use is associated with impaired social cognition. Experimental studies have shown that acute intoxication with alcohol, tobacco, and cannabis decreases performance in non-verbal, social communication and theory-of-mind tasks. However, in epidemiological studies the temporal direction of this association has gone relatively unstudied. We investigated both directions of association within an adolescent birth cohort: the association of social cognition with subsequent substance use, and the association of early substance use with subsequent social cognition. We used data from the Avon Longitudinal Study of Parents and Children, a UK birth cohort. Logistic regression indicated that poor childhood non-verbal communication was associated with decreased odds of adolescent alcohol (OR 0.70, 95% CI 0.54-0.91), tobacco (OR 0.62, 95% CI 0.47-0.83), and cannabis use (OR 0.62, 95% CI 0.46-0.83). Early adolescent substance use was associated with increased odds of poor social communication (alcohol: OR 1.46, 95% CI 0.99-2.14; tobacco: OR 1.95, 95% CI 1.33-2.86) and poor social reciprocity (alcohol: OR 1.57, 95% CI 1.18-2.09; tobacco: OR 1.92, 95% CI 1.43-2.58; cannabis: OR 1.54, 95% CI 1.16-2.05). Overall, the relationship between social cognition and substance use differed in each temporal direction: poor non-verbal communication in childhood appeared protective against later substance use, while adolescent substance use was associated with decreased social cognitive performance.
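
    The odds ratios reported above are obtained by exponentiating logistic regression coefficients (and the ends of their confidence intervals). A minimal sketch with simulated data, not the ALSPAC cohort, is shown below.

        # Sketch of deriving an odds ratio and 95% CI from a logistic regression,
        # as OR = exp(coefficient). Data here are simulated for illustration.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 2000
        poor_communication = rng.integers(0, 2, n)            # childhood exposure
        logit_p = -1.0 - 0.35 * poor_communication            # built-in "true" effect
        substance_use = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        X = sm.add_constant(poor_communication)
        model = sm.Logit(substance_use, X).fit(disp=0)

        odds_ratio = np.exp(model.params[1])                  # ~0.70 by construction
        ci_low, ci_high = np.exp(model.conf_int()[1])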

  12. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  13. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  14. Brain networks underlying mental imagery of auditory and visual information.

    Science.gov (United States)

    Zvyagintsev, Mikhail; Clemens, Benjamin; Chechko, Natalya; Mathiak, Krystyna A; Sack, Alexander T; Mathiak, Klaus

    2013-05-01

    Mental imagery is a complex cognitive process that resembles the experience of perceiving an object when this object is not physically present to the senses. It has been shown that, depending on the sensory nature of the object, mental imagery also involves correspondent sensory neural mechanisms. However, it remains unclear which areas of the brain subserve supramodal imagery processes that are independent of the object modality, and which brain areas are involved in modality-specific imagery processes. Here, we conducted a functional magnetic resonance imaging study to reveal supramodal and modality-specific networks of mental imagery for auditory and visual information. A common supramodal brain network independent of imagery modality, two separate modality-specific networks for imagery of auditory and visual information, and a common deactivation network were identified. The supramodal network included brain areas related to attention, memory retrieval, motor preparation and semantic processing, as well as areas considered to be part of the default-mode network and multisensory integration areas. The modality-specific networks comprised brain areas involved in processing of the respective modality-specific sensory information. Interestingly, we found that imagery of auditory information led to a relative deactivation within the modality-specific areas for visual imagery, and vice versa. In addition, mental imagery of both auditory and visual information widely suppressed the activity of primary sensory and motor areas, i.e. the deactivation network. These findings have important implications for understanding the mechanisms that are involved in the generation of mental imagery. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  16. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  17. Web-based auditory self-training system for adult and elderly users of hearing aids.

    Science.gov (United States)

    Vitti, Simone Virginia; Blasca, Wanderléia Quinhoneiro; Sigulem, Daniel; Torres Pisa, Ivan

    2015-01-01

    THA portal, was finalized, rated by experts and hearing aid users and approved for use. The system comprised auditory skills training along five lines: discrimination; recognition; comprehension and temporal sequencing; auditory closure; and cognitive-linguistic and communication strategies. Users needed to undergo auditory training over a minimum period of 1 month: 5 times a week for 30 minutes a day. Comparisons were made between G1 and G2 and web system use by G3. The web system developed was approved for release to hearing aid users. It is expected that the self-training will help improve effective use of hearing aids, thereby decreasing their rejection.

  18. Interaction of Number Magnitude and Auditory Localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan

    2016-01-01

    The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.

  19. Auditory processing in absolute pitch possessors

    Science.gov (United States)

    McKetton, Larissa; Schneider, Keith A.

    2018-05-01

    Absolute pitch (AP) is the rare ability to classify a musical pitch without a reference standard. It has been of great interest to researchers studying auditory processing and music cognition, since it is seldom expressed and sheds light on the influence of neurodevelopmental biological predispositions and the onset of musical training. We investigated the smallest frequency difference that could be detected, or just noticeable difference (JND), between two pitches. Here, we report that JND thresholds of both AP musicians and non-AP musicians differed significantly from those of non-musician control groups at both the 1000 Hz and 987.76 Hz testing frequencies. Although the AP musicians did better than the non-AP musicians, the difference was not significant. In addition, we looked at neuroanatomical correlates of musicianship and AP using structural MRI. We report increased cortical thickness of the left Heschl's gyrus (HG) and decreased cortical thickness of the inferior frontal opercular gyrus (IFO) and decreased circular insular sulcus (CIS) volume in AP compared to non-AP musicians and controls. These structures may therefore be optimally enhanced and reduced to form the most efficient network for AP to emerge.
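
    Frequency JNDs of this kind are typically estimated with an adaptive staircase. The sketch below shows a 2-down/1-up procedure run against a simulated listener; the starting difference, step size and stopping rule are illustrative assumptions, not the parameters used in the study.

        # Sketch of a 2-down/1-up adaptive staircase for estimating a frequency
        # JND around a reference tone, with a simulated listener.
        import numpy as np

        rng = np.random.default_rng(4)

        def listener_correct(delta_hz, true_jnd_hz=3.0):
            """Probability of a correct response grows with the frequency difference."""
            p = 0.5 + 0.5 * (1 - np.exp(-delta_hz / true_jnd_hz))
            return rng.random() < p

        delta = 40.0              # starting frequency difference in Hz
        step = 2.0                # step size in Hz (multiplicative steps are also common)
        consecutive_correct = 0
        reversals = []
        direction = -1

        for _ in range(200):
            if listener_correct(delta):
                consecutive_correct += 1
                if consecutive_correct == 2:       # two correct -> make it harder
                    consecutive_correct = 0
                    if direction != -1:
                        reversals.append(delta)
                    direction = -1
                    delta = max(delta - step, 0.1)
            else:                                  # one error -> make it easier
                consecutive_correct = 0
                if direction != +1:
                    reversals.append(delta)
                direction = +1
                delta += step

        jnd_estimate = np.mean(reversals[-6:])     # average of the last reversals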

  20. Dose-dependent suppression by ethanol of transient auditory 40-Hz response.

    Science.gov (United States)

    Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H

    2000-02-01

    Acute alcohol (ethanol) challenge is known to induce various cognitive disturbances, yet the neural basis of the effect is poorly understood. Auditory transient evoked gamma-band (40-Hz) oscillatory responses have been suggested to be associated with various perceptual and cognitive functions in humans; however, alcohol effects on auditory 40-Hz responses have not been investigated to date. The objective of the study was to test the dose-related impact of alcohol on auditory transient evoked 40-Hz responses during a selective-attention task. Ten healthy social drinkers ingested, in four separate sessions, 0.00, 0.25, 0.50, or 0.75 g/kg of 10% (v/v) alcohol solution. The order of the sessions was randomized and a double-blind procedure was employed. During a selective-attention task, 300-Hz standard and 330-Hz deviant tones were presented to the left ear, and 1000-Hz standards and 1100-Hz deviants to the right ear of the subjects (P=0.425 for each standard, P=0.075 for each deviant). The subjects attended to a designated ear and were to detect the deviants therein while ignoring tones to the other ear. The auditory transient evoked 40-Hz responses elicited by both the attended and unattended standard tones were significantly suppressed by the 0.50 and 0.75 g/kg alcohol doses. Alcohol thus suppresses auditory transient evoked 40-Hz oscillations already at moderate blood alcohol concentrations. Given the putative role of gamma-band oscillations in cognition, this finding could be associated with certain alcohol-induced cognitive deficits.
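
    Transient evoked 40-Hz responses are usually extracted by averaging stimulus-locked epochs and band-pass filtering the average around the gamma band. The sketch below illustrates this on simulated data; the filter settings and measurement window are assumptions rather than the study's exact analysis.

        # Sketch of isolating a transient evoked 40 Hz (gamma-band) response by
        # band-pass filtering a stimulus-locked average.
        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 1000                                   # EEG sampling rate in Hz
        t = np.arange(-0.1, 0.4, 1 / fs)            # epoch around tone onset

        rng = np.random.default_rng(5)
        epochs = rng.normal(0, 5, (300, t.size))    # simulated single trials (uV)

        evoked = epochs.mean(axis=0)                # averaging keeps phase-locked activity

        # 30-50 Hz band-pass to extract the transient gamma-band response
        b, a = butter(4, [30, 50], btype="bandpass", fs=fs)
        gamma_evoked = filtfilt(b, a, evoked)

        # Response magnitude is often summarised as the peak amplitude in the
        # first ~100 ms after stimulus onset.
        window = (t >= 0) & (t <= 0.1)
        peak_amplitude = np.max(np.abs(gamma_evoked[window]))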

  1. Children with speech sound disorder: Comparing a non-linguistic auditory approach with a phonological intervention approach to improve phonological skills

    Directory of Open Access Journals (Sweden)

    Cristina eMurphy

    2015-02-01

    Full Text Available This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder. A total of 17 children, aged 7-12 years, with speech sound disorder were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g. backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g. phonological organisation and awareness). Both interventions consisted of twelve 45-minute sessions delivered twice per week, for a total of nine hours. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement in phonological skills was observed in either group. Inter-group analysis demonstrated significant differences in the improvement following training between the groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and in both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the more effective intervention approach, it was not sufficient to promote the enhancement of phonological skills.

  2. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorder. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  3. Cognitive performance and aphasia recovery.

    Science.gov (United States)

    Fonseca, José; Raposo, Ana; Martins, Isabel Pavão

    2018-03-01

    Objectives: This study assessed the cognitive performance of subjects with aphasia during the acute stage of stroke and evaluated how such performance relates to recovery at 3 months. Materials & methods: Patients with aphasia following a left hemisphere stroke were evaluated during the first (baseline) and the fourth month post onset. Assessment comprised non-verbal tests of attention/processing speed (Symbol Search, Cancellation Task), executive functioning (Matrix Reasoning, Tower of Hanoi, Clock Drawing, Motor Initiative), semantic memory (Camel and Cactus Test), and episodic and immediate memory (Memory for Faces Test, 5 Objects Memory Test, and Spatial Span). Recovery was measured by the Token Test score at 3 months. The impact of baseline performance on recovery was evaluated by logistic regression adjusting for age, education, severity of aphasia and the Alberta Stroke Program Early CT (ASPECT) score. Results: Thirty-nine subjects (mean age 66.5 ± 10.6 years; 17 men) were included. Average baseline cognitive performance was within the normal range on all tests except the memory tests (semantic, episodic and immediate memory), for which scores were ≤ -1.5 SD. Subjects with poor aphasia recovery (N = 27) were older and had fewer years of formal education, but had an identical ASPECT score compared to those with favorable recovery. Considering each test individually, the score obtained on the Matrix Reasoning test was the only one to predict aphasia recovery (Exp(B) = 24.085, p = 0.038). Conclusions: The Matrix Reasoning Test may contribute to predicting aphasia recovery. Cognitive performance is a measure of network disruption but may also indicate the availability of recovery strategies.

  4. Acute physical exercise affected processing efficiency in an auditory attention task more than processing effectiveness.

    Science.gov (United States)

    Dutke, Stephan; Jaitner, Thomas; Berse, Timo; Barenberg, Jonathan

    2014-02-01

    Research on effects of acute physical exercise on performance in a concurrent cognitive task has generated equivocal evidence. Processing efficiency theory predicts that concurrent physical exercise can increase resource requirements for sustaining cognitive performance even when the level of performance is unaffected. This hypothesis was tested in a dual-task experiment. Sixty young adults worked on a primary auditory attention task and a secondary interval production task while cycling on a bicycle ergometer. Physical load (cycling) and cognitive load of the primary task were manipulated. Neither physical nor cognitive load affected primary task performance, but both factors interacted on secondary task performance. Sustaining primary task performance under increased physical and/or cognitive load increased resource consumption as indicated by decreased secondary task performance. Results demonstrated that physical exercise effects on cognition might be underestimated when only single task performance is the focus.

  5. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a suppression effect that was not specific to the DMS task was observed in both the DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
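
    The beta-band functional interaction described above rests on spectral coherence between two source time series. The sketch below computes coherence on synthetic signals sharing a 16 Hz component; the source labels are only placeholders for the kind of MEG source estimates used in the study.

        # Sketch of estimating beta-band (12-20 Hz) coherence between two source
        # time series; signals here are synthetic.
        import numpy as np
        from scipy.signal import coherence

        fs = 250
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(6)

        shared = np.sin(2 * np.pi * 16 * t)                  # common 16 Hz component
        source_a = shared + rng.normal(0, 1, t.size)         # e.g. auditory cortex
        source_b = 0.8 * shared + rng.normal(0, 1, t.size)   # e.g. inferior frontal gyrus

        f, coh = coherence(source_a, source_b, fs=fs, nperseg=512)
        beta_band = (f >= 12) & (f <= 20)
        beta_coherence = coh[beta_band].mean()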

  6. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  7. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  8. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  10. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
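
    Bland-Altman agreement, as used above, is summarised by the mean difference between two measurement conditions (the bias) and the limits of agreement at roughly ±1.96 standard deviations of the differences. A minimal sketch with simulated scores is shown below.

        # Sketch of a Bland-Altman agreement analysis between memory scores from
        # two modality conditions (e.g. auditory vs. visual). Data are simulated.
        import numpy as np

        rng = np.random.default_rng(7)
        auditory_scores = rng.normal(20, 3, 30)                     # 30 children
        visual_scores = auditory_scores - 1.5 + rng.normal(0, 2, 30)

        diff = auditory_scores - visual_scores
        mean_pair = (auditory_scores + visual_scores) / 2

        bias = diff.mean()                                          # mean difference
        sd_diff = diff.std(ddof=1)
        loa_low, loa_high = bias - 1.96 * sd_diff, bias + 1.96 * sd_diff
        # Plotting diff against mean_pair with the bias and limits of agreement
        # gives the Bland-Altman plot; a narrow band indicates good agreement.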

  11. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
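
    A minimal sketch of how the formant adjustments described in Experiment I could be quantified from produced speech: adaptation is expressed as the change in produced F1 relative to a baseline block, with a change opposing the feedback shift indicating auditory-motor learning. The trial counts, values, and variable names are assumptions for illustration, not the study's procedure.

      import numpy as np

      # Hypothetical produced F1 (Hz) for one subject (illustrative values only).
      baseline_f1 = np.array([520, 525, 518, 522, 519], dtype=float)       # unaltered feedback
      shifted_f1 = np.array([515, 508, 500, 495, 492, 490], dtype=float)   # feedback F1 shifted upward

      # Adaptation: change in produced F1 relative to the baseline mean.
      # A downward change against an upward feedback shift reflects learning.
      adaptation = shifted_f1 - baseline_f1.mean()
      print(f"mean adaptation over the final three trials: {adaptation[-3:].mean():.1f} Hz")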

  12. Auditory distraction and serial memory: The avoidable and the ineluctable

    Directory of Open Access Journals (Sweden)

    Dylan M Jones

    2010-01-01

    Full Text Available One mental activity that is very vulnerable to auditory distraction is serial recall. This review of contemporary findings on serial recall charts the key determinants of distraction. It is evident that one form of distraction is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity of process, specifically, the serial-order processing of the visually presented items and the serial-order coding that is a by-product of the streaming of the sound. However, recently emerging work shows that distraction from a single sound (one deviating from a prevailing sequence) results in attentional capture and is qualitatively distinct from that of a sequence in being restricted in its action to encoding, not to rehearsal of list members. Capture is also sensitive to the sensory task load, suggesting that it is subject to top-down control and is therefore avoidable. These two forms of distraction, conflict of process and attentional capture, may be two consequences of auditory perceptual organization processes that serve to strike the optimal balance between attentional selectivity and distractibility.

  13. Relating binaural pitch perception to the individual listener's auditory profile.

    Science.gov (United States)

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.

  14. A Positive Generation Effect on Memory for Auditory Context.

    Science.gov (United States)

    Overman, Amy A; Richard, Alison G; Stephens, Joseph D W

    2017-06-01

    Self-generation of information during memory encoding has large positive effects on subsequent memory for items, but mixed effects on memory for contextual information associated with items. A processing account of generation effects on context memory (Mulligan in Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 838-855, 2004; Mulligan, Lozito, & Rosner in Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 836-846, 2006) proposes that these effects depend on whether the generation task causes any shift in processing of the type of context features for which memory is being tested. Mulligan and colleagues have used this account to predict various negative effects of generation on context memory, but the account also predicts positive generation effects under certain circumstances. The present experiment provided a critical test of the processing account by examining how generation affected memory for auditory rather than visual context. Based on the processing account, we predicted that generation of rhyme words should enhance processing of auditory information associated with the words (i.e., voice gender), whereas generation of antonym words should have no effect. These predictions were confirmed, providing support to the processing account.

  15. The recognition of facial expressions: an investigation of the influence of age and cognition.

    Science.gov (United States)

    Horning, Sheena M; Cornwell, R Elisabeth; Davis, Hasker P

    2012-11-01

    The present study aimed to investigate changes in facial expression recognition across the lifespan, as well as to determine the influence of fluid intelligence, processing speed, and memory on this ability. Peak performance in the ability to identify facial affect was found to occur in middle-age, with the children and older adults performing the poorest. Specifically, older adults were impaired in their ability to identify fear, sadness, and happiness, but had preserved recognition of anger, disgust, and surprise. Analyses investigating the influence of cognition on emotion recognition demonstrated that cognitive abilities contribute to performance, especially for participants over age 45. However, the cognitive functions did not fully account for the older adults' impairments on expression recognition. Overall, the age-related deficits in facial expression recognition have implications for older adults' use of non-verbal communicative information.

  16. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    Science.gov (United States)

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
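
    The "unique contribution" of visual form perception after controlling for gender, age, and the five general cognitive measures is the kind of question typically answered with hierarchical regression. The sketch below illustrates that logic on simulated data with statsmodels; the predictor names, simulated effect sizes, and the delta-R-squared comparison are assumptions about the general approach, not a reproduction of the study's analysis.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 223  # sample size reported in the abstract; the data below are simulated

      # Simulated covariates and the predictor of interest (visual form perception).
      cols = ["age", "gender", "reaction_time", "visual_tracing", "mental_rotation",
              "spatial_wm", "matrix_reasoning", "form_perception"]
      df = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)
      df["exact_computation"] = (0.4 * df["form_perception"]
                                 + 0.3 * df["matrix_reasoning"]
                                 + rng.normal(size=n))

      covariates = cols[:-1]

      # Step 1: covariates only.  Step 2: covariates plus visual form perception.
      m1 = sm.OLS(df["exact_computation"], sm.add_constant(df[covariates])).fit()
      m2 = sm.OLS(df["exact_computation"], sm.add_constant(df[cols])).fit()

      # The increase in R-squared indexes the unique contribution of form perception.
      print(f"delta R^2 = {m2.rsquared - m1.rsquared:.3f}")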

  17. Do informal musical activities shape auditory skill development in preschool-age children?

    Science.gov (United States)

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-08-29

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.

  18. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment

    Directory of Open Access Journals (Sweden)

    Simone de Sousa Paiva

    2010-02-01

    Full Text Available Non-verbal communication is predominant in the mother-child relation. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated while delivering care to their babies of up to six months of age. Recordings of the care were analyzed by experts, who observed aspects of non-verbal communication such as paralanguage, kinesics, distance, eye contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained; statistical analysis of these scenes indicated that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection may be a determining factor in the formation of a strong attachment to her child after birth.

  19. The influence of non-verbal communication in nursing care