WorldWideScience

Sample records for non-verbal sound processing

  1. Non-verbal communication in severe aphasia: influence of aphasia, apraxia, or semantic processing?

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2012-09-01

    Patients suffering from severe aphasia have to rely on non-verbal means of communication to convey a message. However, to date it is not clear which patients are able to do so. Clinical experience indicates that some patients use non-verbal communication strategies like gesturing very efficiently, whereas others fail to transmit semantic content by non-verbal means. Concerns have been expressed that limb apraxia might affect the production of communicative gestures. Research investigating whether and how apraxia influences the production of communicative gestures has led to contradictory outcomes. The purpose of this study was to investigate the impact of limb apraxia on spontaneous gesturing. Further, linguistic and non-verbal semantic processing abilities were explored as potential factors that might influence non-verbal expression in aphasic patients. Twenty-four aphasic patients with highly limited verbal output were asked to retell short video-clips. The narrations were videotaped. Gestural communication was analyzed in two ways. In the first part of the study, we used a form-based approach: physiological and kinetic aspects of hand movements were transcribed with a notation system for sign languages, and we determined the formal diversity of the hand gestures as an indicator of the potential richness of the transmitted information. In the second part of the study, the comprehensibility of the patients' gestural communication was evaluated by naive raters. The raters were familiarized with the model video-clips and shown the recordings of the patients' retellings without sound. They were asked to indicate, for each narration, which story was being told and which aspects of the stories they recognized. The results indicate that non-verbal faculties are the most important prerequisites for the production of hand gestures.
Whereas results on standardized aphasia testing did not correlate with any gestural indices, non-verbal semantic processing abilities predicted the formal diversity

  2. Congenital Amusia: A Short-Term Memory Deficit for Non-Verbal, but Not Verbal Sounds

    Science.gov (United States)

    Tillmann, Barbara; Schulze, Katrin; Foxton, Jessica M.

    2009-01-01

    Congenital amusia refers to a lifelong disorder of music processing and is linked to pitch-processing deficits. The present study investigated congenital amusics' short-term memory for tones, musical timbres and words. Sequences of five events (tones, timbres or words) were presented in pairs and participants had to indicate whether the sequences…

  3. Multi-level prediction of short-term outcome of depression : non-verbal interpersonal processes, cognitions and personality traits

    NARCIS (Netherlands)

    Geerts, E; Bouhuys, N

    1998-01-01

    It was hypothesized that personality factors determine the short-term outcome of depression, and that they may do this via non-verbal interpersonal interactions and via cognitive interpretations of non-verbal behaviour. Twenty-six hospitalized depressed patients entered the study. Personality

  4. A comparison of processing load during non-verbal decision-making in two individuals with aphasia

    Directory of Open Access Journals (Sweden)

    Salima Suleman

    2015-05-01

    Full Text Available INTRODUCTION: A growing body of evidence suggests that people with aphasia (PWA) can have impairments of cognitive functions such as attention, working memory and executive functions.(1-5) Such cognitive impairments have been shown to negatively affect the decision-making (DM) abilities of adults with neurological damage.(6,7) However, little is known about the DM abilities of PWA.(8) Pupillometry is “the measurement of changes in pupil diameter”.(9, p.1) Researchers have reported a positive relationship between processing load and phasic pupil size (i.e., as processing load increases, pupil size increases).(10) Thus pupillometry has the potential to be a useful tool for investigating processing load during DM in PWA. AIMS: The primary aim of this study was to establish the feasibility of using pupillometry during a non-verbal DM task with PWA. The secondary aim was to explore non-verbal DM performance in PWA and to determine the relationship between DM performance and processing load using pupillometry. METHOD: DESIGN. A single-subject case-study design with two participants was used. PARTICIPANTS. Two adult males with anomic aphasia, matched for age and education, participated in this study. Both participants were independent, able to drive, and had legal autonomy. MEASURES. PERFORMANCE ON A DM TASK. We used a computerized risk-taking card game, the Iowa Gambling Task (IGT), as our non-verbal DM task.(11) In the IGT, participants made 100 selections (via eye gaze) from four decks of cards presented on the computer screen, with the goal of maximizing their overall hypothetical monetary gain. PROCESSING LOAD. The EyeLink 1000+ eye-tracking system was used to collect pupil size measures while participants deliberated before each deck selection during the IGT. For this analysis, we calculated change in pupil size as a measure of processing load. RESULTS: P1. P1 made increasingly advantageous decisions as the task progressed (Fig. 1). When
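    The processing-load measure described in this abstract (change in pupil size during pre-selection deliberation) can be sketched as a simple baseline correction. This is an illustrative sketch only; the function name, baseline window length and sample values are assumptions, not details from the study.

```python
# Illustrative sketch (not the study's code): phasic pupil change as a
# processing-load proxy. For each trial, subtract a pre-trial baseline
# from the mean pupil size during the deliberation period.

def pupil_change(samples, baseline_n=10):
    """samples: pupil-diameter samples for one trial, earliest first.
    The first `baseline_n` samples are treated as the pre-trial baseline;
    the remainder as the deliberation period."""
    baseline = sum(samples[:baseline_n]) / baseline_n
    deliberation = samples[baseline_n:]
    mean_delib = sum(deliberation) / len(deliberation)
    return mean_delib - baseline  # positive => dilation => higher load

# Example trial: pupil dilates during deliberation relative to baseline
trial = [3.0] * 10 + [3.2, 3.3, 3.4, 3.5, 3.6]
print(round(pupil_change(trial), 2))  # → 0.4
```

    A larger positive change on a given trial would be read, under the cited relationship,(10) as higher processing load during that deliberation.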

  5. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    Science.gov (United States)

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Prosody Predicts Contest Outcome in Non-Verbal Dialogs.

    Science.gov (United States)

    Dreiss, Amélie N; Chatelain, Philippe G; Roulin, Alexandre; Richner, Heinz

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.

  7. Anatomical Correlates of Non-Verbal Perception in Dementia Patients

    Directory of Open Access Journals (Sweden)

    Pin-Hsuan Lin

    2016-08-01

    Full Text Available Purpose: Patients with dementia who have dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis of highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer’s dementia (AD), 15 with behavior-variant fronto-temporal dementia (bv-FTD), and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptive tasks tested the ability to compare pitch changes and scale-violated melodies, and to name environmental sounds and associate them with pictures. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to correlate volumetric measures with task scores. Results: The SD group scored the lowest of the 3 groups on the pitch and scale-violated melody tasks. In the environmental sound test, the SD group was also impaired both in naming and in associating sounds with pictures. The AD and bv-FTD groups showed no differences from the controls on any test. VBM correlation with task scores showed that atrophy in the right supra-marginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody and environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits.

  8. A Meta-study of musicians' non-verbal interaction

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer; Marchetti, Emanuela

    2010-01-01

    interruptions. Hence, despite the fact that the skill to engage in a non-verbal interaction is described as tacit knowledge, it is fundamental for both musicians and teachers (Davidson and Good 2002). Typical observed non-verbal cues are for example: physical gestures, modulations of sound, steady eye contact...

  9. Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and canadian listeners.

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions.

  11. Getting the Message Across; Non-Verbal Communication in the Classroom.

    Science.gov (United States)

    Levy, Jack

    This handbook presents selected theories, activities, and resources which can be utilized by educators in the area of non-verbal communication. Particular attention is given to the use of non-verbal communication in a cross-cultural context. Categories of non-verbal communication such as proxemics, haptics, kinesics, smiling, sound, clothing, and…

  12. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, evaluating auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients both before and after ATL presented with similar deficits in pitch retention, and in identification and short-term memorization of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  13. From Sensory Perception to Lexical-Semantic Processing: An ERP Study in Non-Verbal Children with Autism.

    Science.gov (United States)

    Cantiani, Chiara; Choudhury, Naseem A; Yu, Yan H; Shafer, Valerie L; Schwartz, Richard G; Benasich, April A

    2016-01-01

    This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically-developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD.

  15. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    Science.gov (United States)

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gredebäck, Gustaf

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  16. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    Science.gov (United States)

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds, were created specifically for the study. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of the inter-stimulus intervals (ISIs).

  17. [Non-verbal communication in Alzheimer's disease].

    Science.gov (United States)

    Schiaratura, Loris Tamara

    2008-09-01

    This review underlines the importance of non-verbal communication in Alzheimer's disease, adopting a social-psychological perspective on communication. Non-verbal behaviors such as looks, head nods, hand gestures, body posture or facial expression provide a lot of information about interpersonal attitudes, behavioral intentions, and emotional experiences. Therefore they play an important role in the regulation of interaction between individuals. Non-verbal communication remains effective in Alzheimer's disease even in the late stages: patients still produce non-verbal signals and are responsive to others. Nevertheless, few studies have been devoted to the social factors influencing the non-verbal exchange. Misidentification and misinterpretation of behaviors may have negative consequences for the patients. Thus, improving the comprehension of and the response to non-verbal behavior would increase first the quality of the interaction, then the physical and psychological well-being of patients and that of caregivers. The role of non-verbal behavior in social interactions should be approached from an integrative and functional point of view.

  18. Non-verbal numerical cognition: from reals to integers.

    Science.gov (United States)

    Gallistel; Gelman

    2000-02-01

    Data on numerical processing by verbal (human) and non-verbal (animal and human) subjects are integrated by the hypothesis that a non-verbal counting process represents discrete (countable) quantities by means of magnitudes with scalar variability. These appear to be identical to the magnitudes that represent continuous (uncountable) quantities such as duration. The magnitudes representing countable quantity are generated by a discrete incrementing process, which defines next magnitudes and yields a discrete ordering. In the case of continuous quantities, the continuous accumulation process does not define next magnitudes, so the ordering is also continuous ('dense'). The magnitudes representing both countable and uncountable quantity are arithmetically combined in, for example, the computation of the income to be expected from a foraging patch. Thus, on the hypothesis presented here, the primitive machinery for arithmetic processing works with real numbers (magnitudes).
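    The accumulator model sketched in this abstract - counts represented by noisy magnitudes whose variability scales with the magnitude - can be illustrated with a small simulation. The coefficient of variation (CV) used below, and all parameter values, are illustrative assumptions, not figures from the article.

```python
# Illustrative simulation (assumed parameters, not from the article):
# under scalar variability, a count n is represented by a magnitude whose
# noise grows in proportion to n, so the coefficient of variation
# (SD / mean) stays roughly constant across counts.
import random
import statistics

def represent(n, cv=0.15, trials=20000):
    """Simulate `trials` noisy magnitude representations of count n."""
    mags = [random.gauss(n, cv * n) for _ in range(trials)]
    return statistics.mean(mags), statistics.stdev(mags)

random.seed(0)
for n in (4, 8, 16):
    mean, sd = represent(n)
    print(n, round(sd / mean, 2))  # CV stays near 0.15 regardless of n
```

    Constant CV is the signature behaviour the hypothesis attributes to both countable and uncountable quantities, which is what lets the same magnitude machinery combine them arithmetically.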

  19. Drama to promote non-verbal communication skills.

    Science.gov (United States)

    Kelly, Martina; Nixon, Lara; Broadfoot, Kirsten; Hofmeister, Marianna; Dornan, Tim

    2018-05-23

    Non-verbal communication skills (NVCS) help physicians to deliver relationship-centred care, and the effective use of NVCS is associated with improved patient satisfaction, better use of health services and high-quality clinical care. In contrast to verbal communication skills, NVCS training is underdeveloped in communication curricula for the health care professions. One of the challenges in teaching NVCS is their tacit nature. In this study, we evaluated drama exercises to raise awareness of NVCS by making familiar activities 'strange'. Workshops based on drama exercises were designed to heighten an awareness of sight, hearing, touch and proxemics in non-verbal communication. These were conducted at eight medical education conferences, held between 2014 and 2016, and were open to all conference participants. Workshops were evaluated by recording narrative data generated during the workshops and an open-ended questionnaire following the workshop. Data were analysed qualitatively, using thematic analysis. RESULTS: One hundred and twelve participants attended workshops, 73 (65%) of whom completed an evaluation form: 56 physicians, nine medical students and eight non-physician faculty staff. Two themes were described: an increased awareness of NVCS and the importance of NVCS in relationship building. Drama exercises enabled participants to experience NVCS, such as sight, sound, proxemics and touch, in novel ways. Participants reflected on how NVCS contribute to developing trust and building relationships in clinical practice. Drama-based exercises elucidate the tacit nature of NVCS and require further evaluation in formal educational settings. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  20. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    Science.gov (United States)

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  1. Dissociation of neural correlates of verbal and non-verbal visual working memory with different delays

    Directory of Open Access Journals (Sweden)

    Endestad Tor

    2007-10-01

    Full Text Available Abstract Background Dorsolateral prefrontal cortex (DLPFC), posterior parietal cortex, and regions in the occipital cortex have been identified as neural sites for visual working memory (WM). The exact involvement of the DLPFC in verbal and non-verbal working memory processes, and how these processes depend on the time-span for retention, remains disputed. Methods We used functional MRI to explore the neural correlates of the delayed discrimination of Gabor stimuli differing in orientation. Twelve subjects were instructed to code the relative orientation either verbally or non-verbally, with memory delays of short (2 s) or long (8 s) duration. Results Blood-oxygen level dependent (BOLD) 3-Tesla fMRI revealed significantly more activity for the short verbal condition than for the short non-verbal condition in the bilateral superior temporal gyrus, insula and supramarginal gyrus. Activity in the long verbal condition was greater than in the long non-verbal condition in left language-associated areas (STG) and bilateral posterior parietal areas, including the precuneus. Interestingly, the right DLPFC and bilateral superior frontal gyri were more active in the long non-verbal condition than in the long verbal condition. Conclusion The results point to a dissociation between the cortical sites involved in verbal and non-verbal WM for long and short delays. The right DLPFC seems to be engaged in non-verbal WM tasks, especially for long delays. Furthermore, the results indicate that even slightly different memory maintenance intervals engage largely differing networks; this novel finding may explain differing results in previous verbal/non-verbal WM studies.

  2. Non-Verbal Communication in Children with Visual Impairment

    Science.gov (United States)

    Mallineni, Sharmila; Nutheti, Rishita; Thangadurai, Shanimole; Thangadurai, Puspha

    2006-01-01

    The aim of this study was to determine: (a) whether children with visual and additional impairments show any non-verbal behaviors, and if so what were the common behaviors; (b) whether two rehabilitation professionals interpreted the non-verbal behaviors similarly; and (c) whether a speech pathologist and a rehabilitation professional interpreted…

  3. Guidelines for Teaching Non-Verbal Communications Through Visual Media

    Science.gov (United States)

    Kundu, Mahima Ranjan

    1976-01-01

    There is a natural unique relationship between non-verbal communication and visual media such as television and film. Visual media will have to be used extensively--almost exclusively--in teaching non-verbal communications, as well as other methods requiring special teaching skills. (Author/ER)

  4. The impact of the teachers’ non-verbal communication on success in teaching

    Directory of Open Access Journals (Sweden)

    FATEMEH BAMBAEEROO

    2017-04-01

    Full Text Available Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and also its impact on success in teaching. Methods: Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed that there was a strong relationship among the quality, amount and method of the teachers’ use of non-verbal communication while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students’ learning and academic success. The teachers’ attention to the students’ non-verbal reactions and arranging the syllabus considering the students’ mood and readiness have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that we push him toward the non-verbal message

  5. The impact of the teachers' non-verbal communication on success in teaching.

    Science.gov (United States)

    Bambaeeroo, Fatemeh; Shokrpour, Nasrin

    2017-04-01

    Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers' non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers' use of non-verbal communication and also its impact on success in teaching. Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. The results of this review revealed that there was a strong relationship among the quality, amount and method of the teachers' use of non-verbal communication while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students' learning and academic success. The teachers' attention to the students' non-verbal reactions and arranging the syllabus considering the students' mood and readiness have been emphasized in the studies reviewed. It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. 
Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is between two contradictory verbal and nonverbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay more attention to non-verbal than verbal messages because non-verbal

  7. An executable model of the interaction between verbal and non-verbal communication.

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations.

  9. Phenomenology of non-verbal communication as a representation of sports activities

    Directory of Open Access Journals (Sweden)

    Liubov Karpets

    2018-04-01

    The priority language of professional activity in sports is non-verbal communication such as body language. Purpose: to delineate the main aspects of non-verbal communication as a representation of sports activities. Material & Methods: members of sports teams and individual athletes participated in the study, representing basketball, handball, volleyball, football, hockey, and bodybuilding. Results: the research revealed that in sports activities non-verbal means of communication such as gestures, facial expressions, and physique overlap, and, as a consequence, the position that "everything is language" (Lyotard) is embodied. Conclusions: non-verbal communication is one of the most significant forms of communication in sports. Additional means of communication through the "language" of the body help athletes achieve self-realization and self-determination.

  10. Non-verbal communication barriers when dealing with Saudi sellers

    Directory of Open Access Journals (Sweden)

    Yosra Missaoui

    2015-12-01

    Communication has a major impact on how customers perceive sellers and their organizations. In particular, non-verbal communication such as body language, appearance, facial expressions, gestures, proximity, posture and eye contact can positively or negatively influence customers' first impressions and their experiences in stores. Salespeople in many countries, especially developing ones, merely tell customers about their companies' products because they are unaware of the real role of sellers and the importance of non-verbal communication. In Saudi Arabia, the seller profession was reserved exclusively for foreign labor until 2006; only recently has the Saudi workforce entered the retail sector as sellers. The non-verbal communication of those sellers has never been evaluated from the consumer's point of view. Therefore, the aim of this paper is to explore the non-verbal communication barriers that customers face when dealing with Saudi sellers. After discussing the non-verbal communication skills that sellers must have, in the light of previous academic research and in-depth interviews with seven focus groups of Saudi customers, this study found that Saudi customers were not totally satisfied with the current non-verbal communication skills of Saudi sellers. It is therefore strongly recommended to develop the non-verbal communication skills of Saudi sellers through intensive training, to make the appearance of sellers, especially female ones, more distinctive, and to focus on the timing of intervention as well as proximity to customers.

  11. From SOLER to SURETY for effective non-verbal communication.

    Science.gov (United States)

    Stickley, Theodore

    2011-11-01

    This paper critiques the model for non-verbal communication referred to as SOLER (which stands for: "Sit squarely"; "Open posture"; "Lean towards the other"; "Eye contact"; "Relax"). It has been approximately thirty years since Egan (1975) introduced his acronym SOLER as an aid for teaching and learning about non-verbal communication. There is evidence that the SOLER framework has been widely used in nurse education with little published critical appraisal. A new acronym that might be appropriate for non-verbal communication skills training and education is proposed: SURETY (which stands for "Sit at an angle"; "Uncross legs and arms"; "Relax"; "Eye contact"; "Touch"; "Your intuition"). The proposed model advances the SOLER model by including the use of touch, and the importance of individual intuition is emphasised. The model also encourages student nurse educators to think about therapeutic space when they teach skills of non-verbal communication.

  12. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  13. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Brain 84, 1961, 19–30) and by event-related potentials (ERP) recorded in a modified 'oddball' paradigm. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature on cortical auditory disorders.
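    The 'oddball' ERP measurement described here boils down to averaging stimulus-locked epochs and reading off a component amplitude in a time window. A minimal sketch with synthetic signals (the sampling rate, latency, and measurement window below are illustrative assumptions, not values from the case report):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs, n_trials = 250, 100             # sampling rate (Hz), number of target trials (illustrative)
    t = np.arange(-0.2, 0.8, 1 / fs)    # epoch window: -200 ms to +800 ms around stimulus onset

    # Synthetic single-trial epochs: a P3-like positivity peaking near 350 ms, buried in noise
    p3 = 5e-6 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))
    epochs = p3 + rng.normal(scale=10e-6, size=(n_trials, t.size))

    erp = epochs.mean(axis=0)           # averaging cancels activity not time-locked to the stimulus
    win = (t >= 0.25) & (t <= 0.50)     # a typical P3 measurement window
    p3_amp = erp[win].max()             # peak amplitude, the quantity compared across conditions
    ```

    A reduced P3b with a preserved P3a, as in this case, would show up as a smaller `p3_amp` for the target-detection component only.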

  15. Non-verbal communication between primary care physicians and older patients: how does race matter?

    Science.gov (United States)

    Stepanikova, Irena; Zhang, Qian; Wieland, Darryl; Eleazer, G Paul; Stewart, Thomas

    2012-05-01

    Non-verbal communication is an important aspect of the diagnostic and therapeutic process, especially with older patients, but it is unknown how non-verbal communication varies with physician and patient race. We examined the joint influence of physician race and patient race on the non-verbal communication displayed by primary care physicians during medical interviews with patients 65 years or older, using video-recordings of visits of 209 such patients to 30 primary care physicians at three clinics located in the Midwest and Southwest. We measured the duration of physicians' open body position, eye contact, smile, and non-task touch, coded using an adaptation of the Nonverbal Communication in Doctor-Elderly Patient Transactions form. African American physicians with African American patients used more open body position, smile, and touch, compared to the average across other dyads (adjusted mean difference for open body position = 16.55). Race thus matters for non-verbal communication with older patients, and its influence is best understood when physician race and patient race are considered jointly.

  16. Effects of proactive interference on non-verbal working memory.

    Science.gov (United States)

    Cyr, Marilyn; Nee, Derek E; Nelson, Eric; Senger, Thea; Jonides, John; Malapani, Chara

    2017-02-01

    Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using both objects with high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., low-nameability condition). We also found that the addition of distinctive features (color, verbal label) increased performance in rejecting intrusion probes, most likely through an increase in discriminability between content-context bindings in WM.

  17. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    Final performance report AFRL-AFOSR-VA-TR-2016-0298: Binaural Processing of Multiple Sound Sources. William Yost, Arizona State University, 660 S Mill Ave Ste 312, Tempe, AZ 85281. Period of performance: 15 Jul 2012 to 14 Jul 2016. Subject terms: binaural hearing, sound localization, interaural signal processing.

  18. Young Children's Understanding of Markedness in Non-Verbal Communication

    Science.gov (United States)

    Liebal, Kristin; Carpenter, Malinda; Tomasello, Michael

    2011-01-01

    Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions). We investigated whether two- and three-year-olds recognize when adults mark a non-verbal communicative act--in this case a pointing…

  19. Videotutoring, Non-Verbal Communication and Initial Teacher Training.

    Science.gov (United States)

    Nichol, Jon; Watson, Kate

    2000-01-01

    Describes the use of video tutoring for distance education within the context of a post-graduate teacher training course at the University of Exeter. Analysis of the tapes used a protocol based on non-verbal communication research, and findings suggest that the interaction of participants was significantly different from face-to-face…

  20. Language, Power, Multilingual and Non-Verbal Multicultural Communication

    NARCIS (Netherlands)

    Marácz, L.; Zhuravleva, E.A.

    2014-01-01

    Due to developments in internal migration and mobility there is a proliferation of linguistic diversity and of multilingual and non-verbal multicultural communication. At the same time, the recognition of the use of one’s first language receives more and more support in international political and legal contexts.

  1. Non-verbal behaviour in nurse-elderly patient communication.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.

    1999-01-01

    This study explores the occurrence of non-verbal communication in nurse-elderly patient interaction in two different care settings: home nursing and a home for the elderly. In a sample of 181 nursing encounters involving 47 nurses, a study was made of videotaped nurse-patient communication.

  2. Physical growth and non-verbal intelligence: Associations in Zambia

    Science.gov (United States)

    Hein, Sascha; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2014-01-01

    Objectives: To investigate normative developmental BMI trajectories and associations of physical growth indicators (i.e., height, weight, head circumference [HC], body mass index [BMI]) with non-verbal intelligence in an understudied population of children from Sub-Saharan Africa. Study design: A sample of 3981 students (50.8% male), grades 3 to 7, with a mean age of 12.75 years was recruited from 34 rural Zambian schools. Children with low scores on vision and hearing screenings were excluded. Height, weight and HC were measured, and non-verbal intelligence was assessed using UNIT-symbolic memory and KABC-II-triangles. Results: Students in higher grades had a higher BMI over and above the effect of age. Girls showed a marginally higher BMI, although that of both boys and girls was approximately 1 SD below the international CDC and WHO norms. Controlling for the effect of age, non-verbal intelligence showed small but significant positive relationships with HC (r = .17) and BMI (r = .11). HC and BMI accounted for 1.9% of the variance in non-verbal intelligence, over and above the contribution of grade and sex. Conclusions: BMI-for-age growth curves of Zambian children follow observed worldwide developmental trajectories. The positive relationships between BMI and intelligence underscore the importance of providing adequate nutritional and physical growth opportunities for children worldwide, and in sub-Saharan Africa in particular. Directions for future studies are discussed with regard to maximizing the cognitive potential of all rural African children. PMID:25217196
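    The reported associations (r = .17, r = .11) are correlations controlling for age. The residualization behind such a partial correlation can be sketched as follows, on synthetic data with an illustrative effect size (none of the numbers below come from the study):

    ```python
    import numpy as np

    def partial_corr(x, y, z):
        """Correlation between x and y after regressing out the control variable z."""
        Z = np.column_stack([np.ones_like(z), z])
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residualize x on z
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residualize y on z
        return np.corrcoef(rx, ry)[0, 1]

    # Synthetic data; sample size matches the abstract, effect size is illustrative
    rng = np.random.default_rng(1)
    n = 3981
    age = rng.uniform(8.0, 17.0, n)
    hc = 0.3 * age + rng.normal(size=n)                     # head circumference (arbitrary units)
    iq = 0.3 * age + 0.17 * (hc - 0.3 * age) + rng.normal(size=n)

    r = partial_corr(iq, hc, age)   # small positive association with age held constant
    ```

    Regressing both variables on age and correlating the residuals is equivalent to the partial correlation the abstract reports.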

  3. Context, culture and (non-verbal) communication affect handover quality.

    Science.gov (United States)

    Frankel, Richard M; Flanagan, Mindy; Ebright, Patricia; Bergman, Alicia; O'Brien, Colleen M; Franks, Zamal; Allen, Andrew; Harris, Angela; Saleem, Jason J

    2012-12-01

    Transfers of care, also known as handovers, remain a substantial patient safety risk. Although research on handovers has been done since the 1980s, the science is incomplete. Surprisingly few interventions have been rigorously evaluated and, of those that have, few have resulted in long-term positive change. Researchers, both in medicine and other high reliability industries, agree that face-to-face handovers are the most reliable. It is not clear, however, what the term face-to-face means in actual practice. We studied the use of non-verbal behaviours, including gesture, posture, bodily orientation, facial expression, eye contact and physical distance, in the delivery of information during face-to-face handovers. To address this question and study the role of non-verbal behaviour on the quality and accuracy of handovers, we videotaped 52 nursing, medicine and surgery handovers covering 238 patients. Videotapes were analysed using immersion/crystallisation methods of qualitative data analysis. A team of six researchers met weekly for 18 months to view videos together using a consensus-building approach. Consensus was achieved on verbal, non-verbal, and physical themes and patterns observed in the data. We observed four patterns of non-verbal behaviour (NVB) during handovers: (1) joint focus of attention; (2) 'the poker hand'; (3) parallel play and (4) kerbside consultation. In terms of safety, joint focus of attention was deemed to have the best potential for high quality and reliability; however, it occurred infrequently, creating opportunities for education and improvement. Attention to patterns of NVB in face-to-face handovers coupled with education and practice can improve quality and reliability.

  4. Cross-cultural Differences of Stereotypes about Non-verbal Communication of Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2011-09-01

    The article deals with peculiarities of non-verbal communication as a factor in cross-cultural communication and the adaptation of representatives of different cultures. The possibility of studying ethnic stereotypes concerning non-verbal communication is considered. The results of an empirical study of Russian and Chinese students' stereotypes about non-verbal communication are presented.

  5. Modular and Adaptive Control of Sound Processing

    Science.gov (United States)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Often times a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. 
Each of these reflects a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis parameters.

  6. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
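    The envelope features used here, the local average of sound level and its rate of change, can be sketched numerically. The window length and test signal below are illustrative assumptions, not parameters from the study:

    ```python
    import numpy as np

    def envelope_features(envelope, fs, win_s=0.05):
        """Local average sound level (dB) and its rate of change for an amplitude envelope.

        win_s is an illustrative smoothing-window length, not a value from the study.
        """
        level_db = 20.0 * np.log10(np.maximum(envelope, 1e-12))       # level in dB
        win = max(1, int(win_s * fs))
        local_avg = np.convolve(level_db, np.ones(win) / win, mode="same")
        rate = np.gradient(local_avg, 1.0 / fs)                       # dB per second
        return local_avg, rate

    # A 4 Hz modulated envelope: the rate term peaks on rising and falling flanks
    fs = 1000
    t = np.arange(0.0, 1.0, 1.0 / fs)
    env = 0.55 + 0.45 * np.sin(2 * np.pi * 4 * t)   # stays positive
    avg, rate = envelope_features(env, fs)
    ```

    Feature selectivity in the study then amounts to a neuron responding preferentially to particular combinations of these two values, e.g. a rising flank at moderate level.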

  7. Non-verbal Persuasion and Communication in an Affective Agent

    DEFF Research Database (Denmark)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining the critical role of non-verbal behaviour during face-to-face communication. In this chapter we restrict the discussion to body language. We also consider embodied virtual agents. As is the case with humans, there are a number of fundamental factors to be considered when constructing persuasive agents.

  8. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms.
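    The partial mediation reported here follows the standard two-regression logic, where the indirect effect is the product of the ability-to-mediator and mediator-to-outcome paths. A sketch on synthetic data (variable names and effect sizes are placeholders, not the study's measures):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 118                                   # sample size from the abstract; data are synthetic

    # Hypothetical variables: musical ability -> pitch sensitivity (mediator) -> vowel discrimination
    music = rng.normal(size=n)
    pitch = 0.6 * music + rng.normal(scale=0.8, size=n)
    vowels = 0.5 * pitch + 0.2 * music + rng.normal(scale=0.8, size=n)

    def ols(y, X):
        """Least-squares coefficients for y on the columns of X (intercept prepended)."""
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    a = ols(pitch, music)[1]                                        # ability -> mediator path
    b, direct = ols(vowels, np.column_stack([pitch, music]))[1:3]   # mediator -> outcome, direct path
    total = ols(vowels, music)[1]                                   # total effect
    indirect = a * b                                                # the mediated share
    ```

    For OLS these quantities decompose exactly as total = direct + indirect; "partial mediation" means both terms remain nonzero, as in the abstract.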

  9. The use of virtual characters to assess and train non-verbal communication in high-functioning autism.

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of "transformed social interactions." This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA.

  10. Motor system contributions to verbal and non-verbal working memory

    Directory of Open Access Journals (Sweden)

    Diana A Liao

    2014-09-01

    Working memory (WM) involves the ability to maintain and manipulate information held in mind. Neuroimaging studies have shown that secondary motor areas activate during WM for verbal content (e.g., words or letters) in the absence of primary motor area activation. This activation pattern may reflect an inner speech mechanism supporting online phonological rehearsal. Here, we examined the causal relationship between motor system activity and WM processing by using transcranial magnetic stimulation (TMS) to manipulate motor system activity during WM rehearsal. We tested WM performance for verbalizable (words and pseudowords) and non-verbalizable (Chinese characters) visual information. We predicted that disruption of motor circuits would specifically affect WM processing of verbalizable information. We found that TMS targeting motor cortex slowed response times on verbal WM trials with high (pseudoword) vs. low (real word) phonological load. However, non-verbal WM trials were also significantly slowed with motor TMS. WM performance was unaffected by sham stimulation or TMS over visual cortex. Self-reported use of motor strategy predicted the degree of disruption of WM performance by motor stimulation. These results provide evidence of the motor system’s contributions to verbal and non-verbal WM processing. We speculate that the motor system supports WM by creating motor traces consistent with the type of information being rehearsed during maintenance.

  11. On the embedded cognition of non-verbal narratives

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio; Baceviciute, Sarune

    2014-01-01

    Acknowledging that narratives are an important resource in human communication and cognition, the focus of this article is on the cognitive aspects of involvement with visual and auditory non-verbal narratives, particularly in relation to the newest immersive media and digital interactive representational technologies. We consider three relevant trends in narrative studies that have emerged in the 60 years of the cognitive and digital revolution. The issue at hand could have implications for developmental psychology, pedagogics, cognitive science, cognitive psychology, ethology and evolutionary studies of language. In particular, it is of great importance for narratology in relation to interactive media and new representational technologies. Therefore we outline a research agenda for a bio-cognitive semiotic interdisciplinary investigation of how people understand, react to, and interact with narratives.

  12. The role of interaction of verbal and non-verbal means of communication in different types of discourse

    OpenAIRE

    Orlova M. A.

    2010-01-01

    Communication relies on verbal and non-verbal interaction. To be most effective, group members need to improve verbal and non-verbal communication. Non-verbal communication fulfills functions within groups that are sometimes difficult to communicate verbally. But interpreting non-verbal messages requires a great deal of skill because multiple meanings abound in these messages.

  13. Non-verbal Full Body Emotional and Social Interaction: A Case Study on Multimedia Systems for Active Music Listening

    Science.gov (United States)

    Camurri, Antonio

    Research on HCI and multimedia systems for art and entertainment based on non-verbal, full-body, emotional and social interaction is the main topic of this paper. A short review of previous research projects in this area at our centre is presented to introduce the main issues discussed in the paper. In particular, a case study based on novel paradigms of social active music listening is presented. The active music listening experience enables users to dynamically mould expressive performance of music and of audiovisual content. This research is partially supported by the EU FP7 ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu).

  14. The impact of the teachers' non-verbal communication on success in teaching

    OpenAIRE

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and ...

  15. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  17. Non-verbal communication of compassion: measuring psychophysiologic effects.

    Science.gov (United States)

    Kemper, Kathi J; Shaltout, Hossam A

    2011-12-20

    Calm, compassionate clinicians comfort others. To evaluate the direct psychophysiologic benefits of non-verbal communication of compassion (NVCC), it is important to minimize the effect of subjects' expectation. This preliminary study was designed to a) test the feasibility of two strategies for maintaining subject blinding to non-verbal communication of compassion (NVCC), and b) determine whether blinded subjects would experience psychophysiologic effects from NVCC. Subjects were healthy volunteers who were told the study was evaluating the effect of time and touch on the autonomic nervous system. The practitioner had more than 10 years' experience with loving-kindness meditation (LKM), a form of NVCC. Subjects completed 10-point visual analog scales (VAS) for stress, relaxation, and peacefulness before and after LKM. To assess physiologic effects, practitioners and subjects wore cardiorespiratory monitors to assess respiratory rate (RR), heart rate (HR) and heart rate variability (HRV) throughout the four 10-minute study periods: Baseline (both practitioner and subjects read neutral material); non-tactile LKM (subjects read while the practitioner practiced LKM while pretending to read); tactile LKM (subjects rested while the practitioner practiced LKM while lightly touching the subject on arms, shoulders, hands, feet, and legs); Post-Intervention Rest (subjects rested; the practitioner read). To assess blinding, subjects were asked after the interventions what the practitioner was doing during each period (reading, touch, or something else). Subjects' mean age was 43.6 years; all were women. Blinding was maintained and the practitioner was able to maintain meditation for both tactile and non-tactile LKM interventions as reflected in significantly reduced RR. Despite blinding, subjects' VAS scores improved from baseline to post-intervention for stress (5.5 vs. 2.2), relaxation (3.8 vs. 8.8) and peacefulness (3.8 vs. 9.0, P non-tactile LKM.
It is possible to test the

  18. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audifying the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. These results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with those of existing techniques to assess whether the detection capability of the array improved. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult, but local monitoring networks and telemetry have also been destroyed early in eruptive sequences. The success of local infrasound studies in identifying explosions at volcanoes, and in calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.
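    The audification step described above, shifting sub-audible infrasound into the hearing range by compressing the time axis, can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, sample rates, and the 250× speed-up factor are ours, not values from the study.

```python
import numpy as np

def audify(trace, fs_in, speedup=250):
    """Audify an infrasound trace by relabeling its sample rate.

    Playing the unchanged samples back `speedup` times faster shifts
    sub-audible frequencies (e.g. 0.1-10 Hz) into the audible band.
    Returns the samples and the playback rate to use.
    """
    return trace, fs_in * speedup

# Synthetic 0.5 Hz infrasound tone sampled at 40 Hz for 60 s.
fs_in = 40
t = np.arange(0, 60, 1 / fs_in)
trace = np.sin(2 * np.pi * 0.5 * t)

audio, fs_out = audify(trace, fs_in)
# At the new 10 kHz playback rate the 0.5 Hz tone sounds at 125 Hz.
```

    Writing `audio` to a WAV file at `fs_out` (e.g. with `scipy.io.wavfile.write`) makes the array audible, after which film-style noise-manipulation tricks can be applied in any audio editor.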

  19. Hemispheric processing of vocal emblem sounds.

    Science.gov (United States)

    Neumann-Werth, Yael; Levy, Erika S; Obler, Loraine K

    2013-01-01

    Vocal emblems, such as shh and brr, are speech sounds that have linguistic and nonlinguistic features; thus, it is unclear how they are processed in the brain. Five adult dextral individuals with left-brain damage and moderate-severe Wernicke's aphasia, five adult dextral individuals with right-brain damage, and five Controls participated in two tasks: (1) matching vocal emblems to photographs ('picture task') and (2) matching vocal emblems to verbal translations ('phrase task'). Cross-group statistical analyses on items on which the Controls performed at ceiling revealed lower accuracy by the group with left-brain damage (than by Controls) on both tasks, and lower accuracy by the group with right-brain damage (than by Controls) on the picture task. Additionally, the group with left-brain damage performed significantly less accurately than the group with right-brain damage on the phrase task only. Findings suggest that comprehension of vocal emblems recruits more left- than right-hemisphere processing.

  20. The Process of Optimizing Mechanical Sound Quality in Product Design

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Holst, Thomas

    2011-01-01

    The research field concerning the optimization of product sound quality is relatively unexplored and may be difficult for designers to operate in. To some degree, sound is a highly subjective parameter, which is normally targeted at sound specialists. This paper describes the theoretical… and practical background for managing a process of optimizing the mechanical sound quality in a product design by using simple tools and workshops systematically. The procedure is illustrated by a case study of a computer navigation tool (computer mouse or mouse). The process is divided into 4 phases, which… clarify the importance of product sound, defining perceptive demands identified by users, and, finally, how to suggest mechanical principles for modification of an existing sound design. The optimized mechanical sound design is followed by tests on users of the product in its use context. The result…

  1. A qualitative study on non-verbal sensitivity in nursing students.

    Science.gov (United States)

    Chan, Zenobia C Y

    2013-07-01

    To explore nursing students' perception of the meanings and roles of non-verbal communication and sensitivity. It also attempts to understand how different factors influence their non-verbal communication style. The importance of non-verbal communication in the health arena lies in the need for good communication for efficient healthcare delivery. Understanding nursing students' non-verbal communication with patients and the influential factors is essential to prepare them for field work in the future. Qualitative approach based on 16 in-depth interviews. Sixteen nursing students from the Master of Nursing and the Year 3 Bachelor of Nursing program were interviewed. Major points in the recorded interviews were marked down for content analysis. Three main themes were developed: (1) understanding students' non-verbal communication, which shows how nursing students value and experience non-verbal communication in the nursing context; (2) factors that influence the expression of non-verbal cues, which reveals the effect of patients' demographic background (gender, age, social status and educational level) and participants' characteristics (character, age, voice and appearance); and (3) metaphors of non-verbal communication, which is further divided into four subthemes: providing assistance, individualisation, dropping hints and promoting interaction. Learning about students' non-verbal communication experiences in the clinical setting allowed us to understand their use of non-verbal communication and sensitivity, as well as areas that may need further improvement. The experiences and perceptions revealed by the nursing students could prompt nurses to reconsider the effects of the different factors suggested in this study. The results might also help students and nurses to recognise and reflect on gaps in their skills, leading them to rethink, train and pay more attention to their non-verbal communication style and sensitivity. © 2013 John Wiley & Sons Ltd.

  2. Culture and Social Relationship as Factors of Affecting Communicative Non-Verbal Behaviors

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to build a bridge between social relationship and cultural variation in order to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes…

  3. Oncologists’ non-verbal behavior and analog patients’ recall of information

    NARCIS (Netherlands)

    Hillen, M.A.; de Haes, H.C.J.M.; van Tienhoven, G.; van Laarhoven, H.W.M.; van Weert, J.C.M.; Vermeulen, D.M.; Smets, E.M.A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist’s non-verbal communication. We tested the influence of three non-verbal behaviors,

  5. Virtual Chironomia: A Multimodal Study of Verbal and Non-Verbal Communication in a Virtual World

    Science.gov (United States)

    Verhulsdonck, Gustav

    2010-01-01

    This mixed methods study examined the various aspects of multimodal use of non-verbal communication in virtual worlds during dyadic negotiations. Quantitative analysis uncovered a treatment effect whereby people with more rhetorical certainty used more neutral non-verbal communication; whereas people that were rhetorically less certain used more…

  6. Cross-cultural features of gestures in non-verbal communication

    Directory of Open Access Journals (Sweden)

    Chebotariova N. A.

    2017-09-01

    This article is devoted to the analysis of the concept of non-verbal communication and the ways of expressing it. Gesticulation is studied in detail, as it is the main element of non-verbal communication and has different characteristics in various countries of the world.

  7. Non-verbal emotion communication training induces specific changes in brain function and structure.

    Science.gov (United States)

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure.

  8. The Use of Virtual Characters to Assess and Train Non-Verbal Communication in High-Functioning Autism

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of “transformed social interactions.” This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA. PMID:25360098

  9. Comparative Analysis of Verbal and Non-Verbal Mental Activity Components Regarding the Young People with Different Intellectual Levels

    Directory of Open Access Journals (Sweden)

    Y. M. Revenko

    2013-01-01

    The paper maintains that for developing educational programs and technologies adequate to the different stages of students' growth and maturity, there is a need to explore the natural determinants of intellectual development as well as the students' individual qualities affecting the cognition process. The authors investigate the differences in the manifestations of intellect with reference to gender, and analyze the correlations between verbal and non-verbal components in boys' and girls' mental activity depending on their general intellectual potential. The research, carried out at the Siberian State Automobile Road Academy and focused on first-year students, demonstrates the absence of gender differences in students' general intellect levels; there are, though, some other regularities: the male students of different intellectual levels show the same correlation coefficient of verbal and non-verbal intellect, while the female ones have the same correlation only at the high intellect level. In conclusion, the authors emphasize the need for an integral approach to raising students' mental abilities, considering the close interrelation between verbal and non-verbal component development. The teaching materials should stimulate different mental qualities by differentiating the educational process to develop students' individual abilities.

  10. Non-verbal communication of the residents living in homes for the older people in Slovenia.

    Science.gov (United States)

    Zaletel, Marija; Kovacev, Asja Nina; Sustersic, Olga; Kragelj, Lijana Zaletel

    2010-09-01

    … and paralinguistic signs. The caregivers should be aware of this and pay a lot of attention to these two groups of non-verbal expressions. Their importance should be constantly emphasized during the educational process of all kinds of health-care professionals as well.

  11. Non-verbal communication in meetings of psychiatrists and patients with schizophrenia.

    Science.gov (United States)

    Lavelle, M; Dimic, S; Wildgrube, C; McCabe, R; Priebe, S

    2015-03-01

    Recent evidence found that patients with schizophrenia display non-verbal behaviour designed to avoid social engagement during the opening moments of their meetings with psychiatrists. This study aimed to replicate, and build on, this finding, assessing the non-verbal behaviour of patients and psychiatrists during meetings, exploring changes over time and its association with patients' symptoms and the quality of the therapeutic relationship. 40 videotaped routine out-patient consultations involving patients with schizophrenia were analysed. Non-verbal behaviour of patients and psychiatrists was assessed during three fixed, 2-min intervals using a modified Ethological Coding System for Interviews. Symptoms, satisfaction with communication and the quality of the therapeutic relationship were also measured. Over time, patients' non-verbal behaviour remained stable, whilst psychiatrists' flight behaviour decreased. Patients formed two groups based on their non-verbal profiles, one group (n = 25) displaying pro-social behaviour, inviting interaction, and a second (n = 15) displaying flight behaviour, avoiding interaction. Psychiatrists interacting with pro-social patients displayed more pro-social behaviours (P communication (P non-verbal behaviour during routine psychiatric consultations remains unchanged, and is linked to both their psychiatrist's non-verbal behaviour and the quality of the therapeutic relationship. © 2014 The Authors. Acta Psychiatrica Scandinavica Published by John Wiley & Sons Ltd.

  12. [Non-verbal communication of patients submitted to heart surgery: from awaking after anesthesia to extubation].

    Science.gov (United States)

    Werlang, Sueli da Cruz; Azzolin, Karina; Moraes, Maria Antonieta; de Souza, Emiliane Nogueira

    2008-12-01

    Preoperative orientation is an essential tool for patients' communication after surgery. This study had the objective of evaluating the non-verbal communication of patients submitted to cardiac surgery from the time of awaking from anesthesia until extubation, after having received preoperative orientation by nurses. A quantitative cross-sectional study was developed in a reference hospital of the state of Rio Grande do Sul, Brazil, from March to July 2006. Data were collected in the pre- and postoperative periods. A questionnaire to evaluate non-verbal communication on awaking from sedation was applied to a sample of 100 patients. Statistical analysis included Student's t, Wilcoxon, and Mann-Whitney tests. Most of the patients responded satisfactorily to non-verbal communication strategies as instructed in the preoperative orientation. Thus, non-verbal communication based on preoperative orientation was helpful during the awaking period.

  13. Parents and Physiotherapists Recognition of Non-Verbal Communication of Pain in Individuals with Cerebral Palsy.

    Science.gov (United States)

    Riquelme, Inmaculada; Pades Jiménez, Antonia; Montoya, Pedro

    2017-08-29

    Pain assessment is difficult in individuals with cerebral palsy (CP). This is of particular relevance in children with communication difficulties, when non-verbal pain behaviors could be essential for appropriate pain recognition. Parents are considered good proxies in the recognition of pain in their children; however, health professionals also need a good understanding of their patients' pain experience. This study aims at analyzing the agreement between parents' and physiotherapists' assessments of verbal and non-verbal pain behaviors in individuals with CP. A written survey about pain characteristics and non-verbal pain expression of 96 persons with CP (45 classified as communicative, and 51 as non-communicative individuals) was performed. Parents and physiotherapists displayed a high agreement in their estimations of the presence of chronic pain, healthcare seeking, pain intensity and pain interference, as well as in non-verbal pain behaviors. Physiotherapists and parents can recognize pain behaviors in individuals with CP regardless of communication disabilities.

  14. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment.

    Science.gov (United States)

    de Sousa Paiva, Simone; Galvão, Marli Teresinha Gimeniz; Pagliuca, Lorita Marlena Freitag; de Almeida, Paulo César

    2010-01-01

    Non-verbal communication is predominant in the mother-child relation. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication such as: paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted inferring that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection can be a determining factor in the formation of mothers' strong attachment to their children after birth.

  15. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the ERP responses to noise deviants were similar to those elicited by the same sounds presented alone, the responses elicited by environmental sounds in the corresponding conditions differed morphologically from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory states of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency towards context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustical deviance and contextual novelty.

  16. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. The few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced the parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.
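    The ERP procedure these records rely on (epoching the EEG around stimulus onset, baseline-correcting each epoch, then averaging) can be sketched with synthetic data. This is a generic illustration, not the authors' pipeline; the function name and all parameter values are assumptions:

```python
import numpy as np

def erp_average(eeg, events, fs, pre=0.2, post=0.8):
    """Epoch one EEG channel around stimulus onsets and average the epochs.

    eeg: 1-D float array of samples; events: onset sample indices;
    fs: sampling rate (Hz). Each epoch is baseline-corrected by the
    mean of its pre-stimulus interval before averaging.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[e - n_pre:e + n_post] for e in events])
    epochs -= epochs[:, :n_pre].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)

# Synthetic single-channel recording: noise plus a fixed deflection
# 100 ms after each picture onset (onsets every 3 s).
rng = np.random.default_rng(0)
fs = 250
eeg = rng.normal(0.0, 1.0, fs * 120)          # 120 s of noise
events = np.arange(2 * fs, 110 * fs, 3 * fs)  # 36 stimulus onsets
for e in events:
    eeg[e + int(0.1 * fs)] += 20.0            # crude evoked response

erp = erp_average(eeg, events, fs)
peak = int(np.argmax(erp))                    # sample of largest deflection
```

    Averaging cancels the single-trial noise so the evoked deflection stands out; in practice toolboxes such as MNE-Python add filtering, artifact rejection, and multi-channel epoching on top of this core logic.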

  17. The impact of culture and education on non-verbal neuropsychological measurements: a critical review.

    Science.gov (United States)

    Rosselli, Mónica; Ardila, Alfredo

    2003-08-01

    Clinical neuropsychology has frequently considered visuospatial and non-verbal tests to be culturally and educationally fair or at least fairer than verbal tests. This paper reviews the cross-cultural differences in performance on visuoperceptual and visuoconstructional ability tasks and analyzes the impact of education and culture on non-verbal neuropsychological measurements. This paper compares: (1) non-verbal test performance among groups with different educational levels, and the same cultural background (inter-education intra-culture comparison); (2) the test performance among groups with the same educational level and different cultural backgrounds (intra-education inter-culture comparisons). Several studies have demonstrated a strong association between educational level and performance on common non-verbal neuropsychological tests. When neuropsychological test performance in different cultural groups is compared, significant differences are evident. Performance on non-verbal tests such as copying figures, drawing maps or listening to tones can be significantly influenced by the individual's culture. Arguments against the use of some current neuropsychological non-verbal instruments, procedures, and norms in the assessment of diverse educational and cultural groups are discussed and possible solutions to this problem are presented.

  18. The role of non-verbal behaviour in racial disparities in health care: implications and solutions.

    Science.gov (United States)

    Levine, Cynthia S; Ambady, Nalini

    2013-09-01

    People from racial minority backgrounds report less trust in their doctors and have poorer health outcomes. Although these deficiencies have multiple roots, one important set of explanations involves racial bias, which may be non-conscious, on the part of providers, and minority patients' fears that they will be treated in a biased way. Here, we focus on one mechanism by which this bias may be communicated and reinforced: namely, non-verbal behaviour in the doctor-patient interaction. We review 2 lines of research on race and non-verbal behaviour: (i) the ways in which a patient's race can influence a doctor's non-verbal behaviour toward the patient, and (ii) the relative difficulty that doctors can have in accurately understanding the nonverbal communication of non-White patients. Further, we review research on the implications that both lines of work can have for the doctor-patient relationship and the patient's health. The research we review suggests that White doctors interacting with minority group patients are likely to behave and respond in ways that are associated with worse health outcomes. As doctors' disengaged non-verbal behaviour towards minority group patients and lower ability to read minority group patients' non-verbal behaviours may contribute to racial disparities in patients' satisfaction and health outcomes, solutions that target non-verbal behaviour may be effective. A number of strategies for such targeting are discussed. © 2013 John Wiley & Sons Ltd.

  19. Evaluating verbal and non-verbal communication skills, in an ethnogeriatric OSCE.

    Science.gov (United States)

    Collins, Lauren G; Schrimmer, Anne; Diamond, James; Burke, Janice

    2011-05-01

    Communication during medical interviews plays a large role in patient adherence, satisfaction with care, and health outcomes. Both verbal and non-verbal communication (NVC) skills are central to the development of rapport between patients and healthcare professionals. The purpose of this study was to assess the role of non-verbal and verbal communication skills on evaluations by standardized patients during an ethnogeriatric Objective Structured Clinical Examination (OSCE). Interviews from 19 medical students, residents, and fellows in an ethnogeriatric OSCE were analyzed. Each interview was videotaped and evaluated on a 14 item verbal and an 8 item non-verbal communication checklist. The relationship between verbal and non-verbal communication skills on interview evaluations by standardized patients were examined using correlational analyses. Maintaining adequate facial expression (FE), using affirmative gestures (AG), and limiting both unpurposive movements (UM) and hand gestures (HG) had a significant positive effect on perception of interview quality during this OSCE. Non-verbal communication skills played a role in perception of overall interview quality as well as perception of culturally competent communication. Incorporating formative and summative evaluation of both verbal and non-verbal communication skills may be a critical component of curricular innovations in ethnogeriatrics, such as the OSCE. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  20. Patients' perceptions of GP non-verbal communication: a qualitative study.

    Science.gov (United States)

    Marcinowicz, Ludmila; Konstantynowicz, Jerzy; Godlewski, Cezary

    2010-02-01

    During doctor-patient interactions, many messages are transmitted without words, through non-verbal communication. To elucidate the types of non-verbal behaviours perceived by patients interacting with family GPs and to determine which cues are perceived most frequently. In-depth interviews with patients of family GPs. Nine family practices in different regions of Poland. At each practice site, interviews were performed with four patients who were scheduled consecutively to see their family doctor. Twenty-four of 36 studied patients spontaneously perceived non-verbal behaviours of the family GP during patient-doctor encounters. They reported a total of 48 non-verbal cues. The most frequent features were tone of voice, eye contact, and facial expressions. Less frequent were examination room characteristics, touch, interpersonal distance, GP clothing, gestures, and posture. Non-verbal communication is an important factor by which patients spontaneously describe and evaluate their interactions with a GP. Family GPs should be trained to better understand and monitor their own non-verbal behaviours towards patients.

  1. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    Science.gov (United States)

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe changed significantly during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions.
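The classification step described above can be sketched as follows. This is a hypothetical illustration with synthetic data, not the study's pipeline: the feature choice (mean amplitudes in a late window over a handful of channels), the effect size, and all parameters are assumptions.

```python
# Hypothetical sketch: classifying simulated single-trial ERP feature
# vectors (e.g., late-window mean amplitudes per channel) into
# "conditioned" vs "neutral" responses with a linear SVM.
# All data are synthetic; channel counts and effects are assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 8

# Simulate late-window amplitudes: conditioned trials carry a small
# positive shift on posterior channels (class 1), neutral trials do not.
neutral = rng.normal(0.0, 1.0, size=(n_trials, n_channels))
conditioned = rng.normal(0.0, 1.0, size=(n_trials, n_channels))
conditioned[:, 4:] += 1.5  # assumed effect on posterior channels

X = np.vstack([neutral, conditioned])
y = np.array([0] * n_trials + [1] * n_trials)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Standardizing features before the SVM matters because EEG amplitudes can differ in scale across channels; the held-out split guards against the optimistic bias of evaluating on training trials.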

  2. Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes.

    Science.gov (United States)

    Stoycheva, Polina; Tiippana, Kaisa

    2018-03-14

    The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, due to contralateral innervation, this functional lateralization is reflected in a hand advantage during certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand-right hemisphere.
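The discriminability index d' used above comes from signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal cumulative distribution. A minimal sketch, with illustrative rates rather than the study's data:

```python
# Signal-detection d' from hit and false-alarm rates.
# The example rates below are illustrative, not from the study.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 85% hits, 20% false alarms
print(round(d_prime(0.85, 0.20), 2))
```

Note that hit or false-alarm rates of exactly 0 or 1 make the inverse CDF undefined, so applied work typically applies a small correction (e.g., replacing 0 and 1 with 1/(2N) and 1 - 1/(2N)) before computing d'.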

  3. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, and home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  4. Memory and comprehension deficits in spatial descriptions of children with non-verbal and reading disabilities.

    Science.gov (United States)

    Mammarella, Irene C; Meneghetti, Chiara; Pazzaglia, Francesca; Cornoldi, Cesare

    2014-01-01

    The present study investigated the difficulties encountered by children with non-verbal learning disability (NLD) and reading disability (RD) when processing spatial information derived from descriptions, based on the assumption that both groups should find it more difficult than matched controls, but for different reasons, i.e., due to a memory encoding difficulty in cases of RD and to spatial information comprehension problems in cases of NLD. Spatial descriptions from both survey and route perspectives were presented to 9-12-year-old children divided into three groups: NLD (N = 12); RD (N = 12), and typically developing controls (TD; N = 15); then participants completed a sentence verification task and a memory for locations task. The sentence verification task was presented in two conditions: in one the children could refer to the text while answering the questions (i.e., text present condition), and in the other the text was withdrawn (i.e., text absent condition). Results showed that the RD group benefited from the text present condition, but was impaired to the same extent as the NLD group in the text absent condition, suggesting that the NLD children's difficulty is due mainly to their poor comprehension of spatial descriptions, while the RD children's difficulty is due more to a memory encoding problem. These results are discussed in terms of their implications in the neuropsychological profiles of children with NLD or RD, and the processes involved in spatial descriptions.

  5. Negative Symptoms and Avoidance of Social Interaction: A Study of Non-Verbal Behaviour.

    Science.gov (United States)

    Worswick, Elizabeth; Dimic, Sara; Wildgrube, Christiane; Priebe, Stefan

    2018-01-01

    Non-verbal behaviour is fundamental to social interaction. Patients with schizophrenia display an expressivity deficit of non-verbal behaviour, exhibiting behaviour that differs from both healthy subjects and patients with different psychiatric diagnoses. The present study aimed to explore the association between non-verbal behaviour and symptom domains, overcoming methodological shortcomings of previous studies. Standardised interviews with 63 outpatients diagnosed with schizophrenia were videotaped. Symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS), the Positive and Negative Syndrome Scale (PANSS) and the Calgary Depression Scale. Independent raters later analysed the videos for non-verbal behaviour, using a modified version of the Ethological Coding System for Interviews (ECSI). Patients with a higher level of negative symptoms displayed significantly fewer prosocial (e.g., nodding and smiling), gesture, and displacement behaviours (e.g., fumbling), but significantly more flight behaviours (e.g., looking away, freezing). No gender differences were found, and these associations held true when adjusted for antipsychotic medication dosage. Negative symptoms are associated with both a lower level of actively engaging non-verbal behaviour and an increased active avoidance of social contact. Future research should aim to identify the mechanisms behind flight behaviour, with implications for the development of treatments to improve social functioning. © 2017 S. Karger AG, Basel.

  6. Parts of Speech in Non-typical Function: Asymmetrical Encoding of Non-verbal Predicates in Erzya

    Directory of Open Access Journals (Sweden)

    Rigina Turunen

    2011-01-01

    Erzya non-verbal conjugation refers to symmetric paradigms in which non-verbal predicates behave morphosyntactically in a similar way to verbal predicates. Notably, though, non-verbal conjugational paradigms are asymmetric, which is seen as an outcome of paradigmatic neutralisation in less frequent/less typical contexts. For non-verbal predicates it is not obligatory to display the same amount of behavioural potential as it is for verbal predicates, and the lexical class of non-verbal predicate operates in such a way that adjectival predicates are more likely to be conjugated than nominals. Further, besides symmetric paradigms and constructions, in Erzya there are non-verbal predicate constructions which display a more overt structural encoding than do verbal ones, namely, copula constructions. Complexity in the domain of non-verbal predication in Erzya decreases the symmetry of the paradigms. Complexity increases in asymmetric constructions, as well as in paradigmatic neutralisation when non-verbal predicates cannot be inflected in all the tenses and moods occurring in verbal predication. The results would be the reverse if we were to measure complexity in terms of the morphological structure. The asymmetric features in non-verbal predication are motivated language-externally, because non-verbal predicates refer to states and occur less frequently as predicates than verbal categories. The symmetry of the paradigms and constructions is motivated language-internally: a grammatical system with fewer rules is economical.

  7. Non-verbal Communication in a Neonatal Intensive Care Unit: A Video Audit Using Non-verbal Immediacy Scale (NIS-O).

    Science.gov (United States)

    Nimbalkar, Somashekhar Marutirao; Raval, Himalaya; Bansal, Satvik Chaitanya; Pandya, Utkarsh; Pathak, Ajay

    2018-05-03

    Effective communication with parents is a very important skill for pediatricians, especially in a neonatal setup. The authors analyzed the non-verbal communication of medical caregivers during counseling sessions. Recorded videos of counseling sessions from March-April 2016 were audited. Counseling episodes were scored using the Non-verbal Immediacy Scale Observer Report (NIS-O). A total of 150 videos of counseling sessions were audited. The mean (SD) total score on the NIS-O was 78.96 (7.07). Sessions in which females were counseled had a significantly higher proportion of low scores (p communication skills in a neonatal unit. This study lays down a template on which other neonatal intensive care units (NICUs) can carry out gap-defining audits.

  8. The Effects of Verbal and Non-Verbal Features on the Reception of DRTV Commercials

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2016-12-01

    Analyses of consumer response are important for successful advertising as they help advertisers to find new, original and successful ways of persuasion. Successful advertisements have to boost the product's benefits, but they also have to appeal to consumers' emotions. In TV advertisements, this is done by means of verbal and non-verbal strategies. The paper presents the results of an empirical investigation whose purpose was to examine the viewers' emotional responses to a DRTV commercial induced by different verbal and non-verbal features, the amount of credibility and persuasiveness of the commercial, and its general acceptability. Our findings indicate that (1) an overload of the same verbal and non-verbal information decreases persuasion; and (2) highly marked prosodic delivery is either exaggerated or funny, while the speaker is perceived as annoying.

  9. Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors

    Science.gov (United States)

    Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias

    The goal of this paper is to build a bridge between social relationship and cultural variation in order to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines the non-verbal expressive parameters that specify the shapes of an agent's non-verbal behaviors in human-agent interaction (HAI). As the first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social factors with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.

  10. Persistent non-verbal memory impairment in remitted major depression - caused by encoding deficits?

    Science.gov (United States)

    Behnken, Andreas; Schöning, Sonja; Gerss, Joachim; Konrad, Carsten; de Jong-Meyer, Renate; Zwanzger, Peter; Arolt, Volker

    2010-04-01

    While neuropsychological impairments are well described in acute phases of major depressive disorder (MDD), little is known about the neuropsychological profile in remission. There is evidence for episodic memory impairments in both acutely depressed and remitted patients with MDD. Learning and memory depend on an individual's ability to organize information during learning. This study investigates non-verbal memory functions in remitted MDD and whether non-verbal memory performance is mediated by organizational strategies whilst learning. 30 well-characterized, fully remitted individuals with unipolar MDD and 30 healthy controls matched for age, sex and education were investigated. Non-verbal learning and memory were measured by the Rey-Osterrieth Complex Figure Test (RCFT). The RCFT provides measures of planning, organizational skills, perceptual and non-verbal memory functions. For assessing the mediating effects of organizational strategies, we used the Savage Organizational Score. Compared to healthy controls, participants with remitted MDD showed greater deficits in non-verbal memory function. Moreover, participants with remitted MDD demonstrated difficulties in organizing non-verbal information appropriately during learning. In contrast, no impairments regarding visual-spatial functions in remitted MDD were observed. Except for one patient, all the others were taking psychopharmacological medication. Neuropsychological function was solely investigated in the remitted phase of MDD. Individuals with MDD in remission showed persistent non-verbal memory impairments, modulated by a deficient use of organizational strategies during encoding. Therefore, our results strongly argue for additional therapeutic interventions in order to improve these remaining deficits in cognitive function. Copyright 2009 Elsevier B.V. All rights reserved.

  11. Executive functioning and non-verbal intelligence as predictors of bullying in early elementary school

    NARCIS (Netherlands)

    Verlinden, Marina; Veenstra, René; Ghassabian, Akhgar; Jansen, P.W.; Hofman, Albert; Jaddoe, Vincent W. V.; Verhulst, F.C.; Tiemeier, Henning

    Executive function and intelligence are negatively associated with aggression, yet the role of executive function has rarely been examined in the context of school bullying. We studied whether different domains of executive function and non-verbal intelligence are associated with bullying.

  12. Toward a digitally mediated, transgenerational negotiation of verbal and non-verbal concepts in daycare

    DEFF Research Database (Denmark)

    Chimirri, Niklas Alexander

    an adult researcher’s research problem and her/his conceptual knowledge of the child-adult-digital media interaction are able to do justice to what the children actually intend to communicate about their experiences and actions, both verbally and non-verbally, by and large remains little explored...

  13. “Communication by impact” and other forms of non-verbal ...

    African Journals Online (AJOL)

    This article aims to review the importance, place and especially the emotional impact of non-verbal communication in psychiatry. The paper argues that while biological psychiatry is in the ascendency with increasing discoveries being made about the functioning of the brain and psycho-pharmacology, it is important to try ...

  14. Development of non-verbal intellectual capacity in school-age children with cerebral palsy

    NARCIS (Netherlands)

    Smits, D. W.; Ketelaar, M.; Gorter, J. W.; van Schie, P. E.; Becher, J. G.; Lindeman, E.; Jongmans, M. J.

    Background Children with cerebral palsy (CP) are at greater risk for a limited intellectual development than typically developing children. Little information is available which children with CP are most at risk. This study aimed to describe the development of non-verbal intellectual capacity of

  15. Presentation Trainer: a toolkit for learning non-verbal public speaking skills

    NARCIS (Netherlands)

    Schneider, Jan; Börner, Dirk; Van Rosmalen, Peter; Specht, Marcus

    2014-01-01

    The paper presents and outlines the demonstration of Presentation Trainer, a prototype that works as a public speaking instructor. It tracks and analyses the body posture, movements and voice of the user in order to give in- structional feedback on non-verbal communication skills. Besides exploring

  16. Interactive use of communication by verbal and non-verbal autistic children.

    Science.gov (United States)

    Amato, Cibelle Albuquerque de la Higuera; Fernandes, Fernanda Dreux Miranda

    2010-01-01

    Communication of autistic children. To assess the communication functionality of verbal and non-verbal children of the autistic spectrum and to identify possible associations amongst the groups. Subjects were 20 children of the autistic spectrum divided into two groups: V, with 10 verbal children, and NV, with 10 non-verbal children, with ages varying between 2y10m and 10y6m. All subjects were video recorded during 30 minutes of spontaneous interaction with their mothers. The samples were analyzed according to the functional communicative profile and comparisons within and between groups were conducted. Data referring to the occupation of communicative space suggest that there is an even balance between each child and his mother. The number of communicative acts per minute shows a clear difference between verbal and non-verbal children. Both verbal and non-verbal children use mostly the gestural communicative means in their interactions. Data about the use of interpersonal communicative functions point to the autistic children's great interactive impairment. The characterization of the functional communicative profile proposed in this study confirmed the autistic children's difficulties with interpersonal communication and that these difficulties do not depend on the preferred communicative means.

  17. Non-Verbal Communication Training: An Avenue for University Professionalizing Programs?

    Science.gov (United States)

    Gazaille, Mariane

    2011-01-01

    In accordance with today's workplace expectations, many university programs identify the ability to communicate as a crucial asset for future professionals. Yet, if the teaching of verbal communication is clearly identifiable in most university programs, the same cannot be said of non-verbal communication (NVC). Knowing the importance of the…

  18. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects interacting with an actor portraying a doctor. The actor interviews each subject following one of two scripted scenarios: in one, the actor shows minimal engagement with the subject; the second includes active listening by the doctor and attentiveness to the subject. We analyze the cross-correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, that has been recently suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
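The synchrony measure described above can be sketched with a cross-correlation of two motion-energy time series, locating the lag at which correlation peaks (who follows whom, and by how much). This is a hedged illustration with synthetic signals; the sampling rate, lag, and signal shapes are assumptions, not values from the study.

```python
# Hedged sketch: cross-correlating two synthetic motion ("kinetic
# energy") time series to find the lag of peak correlation.
# The 25 Hz frame rate and 0.4 s lag are assumed for illustration.
import numpy as np

fs = 25                     # assumed video frame rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)

doctor = np.abs(np.sin(0.5 * np.pi * t)) + 0.1 * rng.standard_normal(t.size)
lag_s = 0.4                 # patient follows the doctor by ~0.4 s
patient = np.roll(doctor, int(lag_s * fs)) + 0.1 * rng.standard_normal(t.size)

# Normalized cross-correlation over all candidate lags
a = (doctor - doctor.mean()) / doctor.std()
b = (patient - patient.mean()) / patient.std()
xcorr = np.correlate(b, a, mode="full") / t.size
lags = np.arange(-t.size + 1, t.size) / fs

peak_lag = lags[np.argmax(xcorr)]
print(f"peak correlation at lag {peak_lag:+.2f} s")
```

A positive peak lag under this sign convention indicates that the second series trails the first, which is one simple way to operationalize followership; a symmetric correlation profile around zero lag would instead suggest mutual entrainment.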

  19. Quality Matters! Differences between Expressive and Receptive Non-Verbal Communication Skills in Adolescents with ASD

    Science.gov (United States)

    Grossman, Ruth B.; Tager-Flusberg, Helen

    2012-01-01

    We analyzed several studies of non-verbal communication (prosody and facial expressions) completed in our lab and conducted a secondary analysis to compare performance on receptive vs. expressive tasks by adolescents with ASD and their typically developing peers. Results show a significant between-group difference for the aggregate score of…

  20. Interpersonal Interactions in Instrumental Lessons: Teacher/Student Verbal and Non-Verbal Behaviours

    Science.gov (United States)

    Zhukov, Katie

    2013-01-01

    This study examined verbal and non-verbal teacher/student interpersonal interactions in higher education instrumental music lessons. Twenty-four lessons were videotaped and teacher/student behaviours were analysed using a researcher-designed instrument. The findings indicate predominance of student and teacher joke among the verbal behaviours with…

  1. The Introduction of Non-Verbal Communication in Greek Education: A Literature Review

    Science.gov (United States)

    Stamatis, Panagiotis J.

    2012-01-01

    Introduction: The introductory part of this paper underlines the research interest of the educational community in the issue of non-verbal communication in education. The question of introducing this scientific field into Greek education is examined within the context of this research, which includes many aspects. Method: The paper essentially…

  2. Effect of interaction with clowns on vital signs and non-verbal communication of hospitalized children.

    Science.gov (United States)

    Alcântara, Pauline Lima; Wogel, Ariane Zonho; Rossi, Maria Isabela Lobo; Neves, Isabela Rodrigues; Sabates, Ana Llonch; Puggina, Ana Cláudia

    2016-12-01

    Compare the non-verbal communication of children before and during interaction with clowns and compare their vital signs before and after this interaction. Uncontrolled, intervention, cross-sectional, quantitative study with children admitted to a public university hospital. The intervention was performed by medical students dressed as clowns and included magic tricks, juggling, singing with the children, making soap bubbles and comedic performances. The intervention time was 20 minutes. Vital signs were assessed in two measurements with an interval of one minute immediately before and after the interaction. Non-verbal communication was observed before and during the interaction using the Non-Verbal Communication Template Chart, a tool in which non-verbal behaviors are assessed as effective or ineffective in the interactions. The sample consisted of 41 children with a mean age of 7.6±2.7 years; most were aged 7 to 11 years (n=23; 56%) and were males (n=26; 63.4%). There was a statistically significant difference in systolic and diastolic blood pressure, pain and non-verbal behavior of children with the intervention. Systolic and diastolic blood pressure increased and pain scales showed decreased scores. The playful interaction with clowns can be a therapeutic resource to minimize the effects of the stressing environment during the intervention, improve the children's emotional state and reduce the perception of pain. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  3. Verbal and Non-Verbal Communication and Coordination in Mission Control

    Science.gov (United States)

    Vinkhuyzen, Erik; Norvig, Peter (Technical Monitor)

    1998-01-01

    In this talk I will present some video-materials gathered in Mission Control during simulations. The focus of the presentation will be on verbal and non-verbal communication between the officers in the front and backroom, especially the practices that have evolved around a peculiar communications technology called voice loops.

  4. Trauma team leaders' non-verbal communication: video registration during trauma team training.

    Science.gov (United States)

    Härgestam, Maria; Hultin, Magnus; Brulin, Christine; Jacobsson, Maritha

    2016-03-25

    There is widespread consensus on the importance of safe and secure communication in healthcare, especially in trauma care where time is a limiting factor. Although non-verbal communication has an impact on communication between individuals, there is only limited knowledge of how trauma team leaders communicate. The purpose of this study was to investigate how trauma team members are positioned in the emergency room, and how leaders communicate in terms of gaze direction, vocal nuances, and gestures during trauma team training. Eighteen trauma teams were audio and video recorded during trauma team training in the emergency department of a hospital in northern Sweden. Quantitative content analysis was used to categorize the team members' positions and the leaders' non-verbal communication: gaze direction, vocal nuances, and gestures. The quantitative data were interpreted in relation to the specific context. Time sequences of the leaders' gaze direction, speech time, and gestures were identified separately and registered as time (seconds) and proportions (%) of the total training time. The team leaders who gained control over the most important area in the emergency room, the "inner circle", positioned themselves as heads over the team, using gaze direction, gestures, vocal nuances, and verbal commands that solidified their verbal message. Changes in position required both attention and collaboration. Leaders who spoke in a hesitant voice, or were silent, expressed ambiguity in their non-verbal communication, and other team members took over the leader's tasks. In teams where the leader had control over the inner circle, the members seemed to have an awareness of each other's roles and tasks, knowing when in time and where in space these tasks needed to be executed. Deviations in the leaders' communication increased the ambiguity in the communication, which had consequences for the teamwork. Communication cannot be taken for granted; it needs to be practiced.

  5. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERPs), this study investigated the neural correlates of 7-month-olds' processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV sounds led to increased negative amplitudes at more posterior temporal locations in both hemispheres. Collectively, human-produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.

  6. Incongruence between Verbal and Non-Verbal Information Enhances the Late Positive Potential.

    Science.gov (United States)

    Morioka, Shu; Osumi, Michihiro; Shiotani, Mayu; Nobusako, Satoshi; Maeoka, Hiroshi; Okada, Yohei; Hiyamizu, Makoto; Matsuo, Atsushi

    2016-01-01

    Smooth social communication consists of both verbal and non-verbal information. However, when verbal and non-verbal information are incongruent, it remains unclear how observers judge the trustworthiness of the person presenting them, and which brain activities accompany that judgment. In the present study, we attempted to identify the impact of incongruence between verbal information and facial expression on judgments of trustworthiness and on brain activity, using event-related potentials (ERPs). Combinations of verbal information [positive/negative] and facial expressions [smile/angry] were presented randomly on a computer screen to 17 healthy volunteers. The trustworthiness of the presented facial expression was evaluated by the amount of donation offered by the observer to the person depicted on the computer screen. In addition, the time required to judge trustworthiness was recorded for each trial. Using electroencephalography, ERPs were obtained by averaging the wave patterns recorded while the participants judged trustworthiness. The amount of donation offered was significantly lower when the verbal information and facial expression were incongruent, particularly for [negative × smile]. The amplitude of the early posterior negativity (EPN) at the temporal lobe showed no significant difference between conditions. However, the amplitude of the late positive potential (LPP) at the parietal electrodes was higher for the incongruent condition [negative × smile] than for the congruent condition [positive × smile]. These results suggest that the LPP amplitude observed over the parietal cortex is involved in the processing of incongruence between verbal information and facial expression.

  7. Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study.

    Directory of Open Access Journals (Sweden)

    Catherine Y Wan

    Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages, and potentially stimulates, a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT, 5 times per week over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

  8. Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation

    Directory of Open Access Journals (Sweden)

    Anna R. Chambers

    2016-08-01

    Neurons at higher stages of sensory processing can partially compensate for a sudden drop in input from the periphery through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy, in which >95% of synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore the cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, cortical processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the auditory cortex (ACtx), the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound-driven MGB units could decode moderate- and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure-tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of temporally modulated pulse trains and speech tokens was greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied a persistent auditory processing at the level of the MGB that was likely enabled through increased central gain.

  9. Music and Sound in Time Processing of Children with ADHD.

    Science.gov (United States)

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects, with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with typical development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered tracks longer when the musical notes had longer durations, while in the control group, judged duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  10. Non-verbal improvisation for learning spoken language

    Directory of Open Access Journals (Sweden)

    Francine Chaîné

    2015-04-01

    A reflective text on the practice of improvisation in a school setting as a means of learning spoken language. One might assume that verbal improvisation is the method of choice for language learning, but experience has led us to discover the richness of non-verbal improvisation, followed by spoken discussion of the practice, as a privileged approach. The article is illustrated with a non-verbal improvisation workshop aimed at children and adolescents.

  11. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-looking...

  12. Oncologists' non-verbal behavior and analog patients' recall of information.

    Science.gov (United States)

    Hillen, Marij A; de Haes, Hanneke C J M; van Tienhoven, Geertjan; van Laarhoven, Hanneke W M; van Weert, Julia C M; Vermeulen, Daniëlle M; Smets, Ellen M A

    2016-06-01

    Background Information in oncological consultations is often excessive. Patients who better recall information are more satisfied, less anxious, and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture, and smiling, on patients' recall of information and perceived friendliness of the oncologist. Moreover, the influence of patient characteristics on recall was examined, both directly and as a moderator of non-verbal communication. Material and methods Non-verbal communication of an oncologist was experimentally varied using video vignettes. In total, 194 breast cancer patients/survivors and healthy women participated as 'analog patients', viewing a randomly selected video version while imagining themselves in the role of the patient. Directly after viewing, they evaluated the oncologist. From 24 to 48 hours later, participants' passive recall, i.e. recognition, and free recall of the information provided by the oncologist were assessed. Results Participants' recognition was higher if the oncologist maintained more consistent eye contact (β = 0.17). More eye contact and smiling led to a perception of the oncologist as more friendly. Body posture and smiling did not significantly influence recall. Older age predicted significantly worse recognition (β = -0.28) and free recall (β = -0.34) of information. Conclusion Oncologists may be able to facilitate their patients' recall through consistent eye contact. This seems particularly relevant for older patients, whose recall is significantly worse. These findings can be used in training focused on how to maintain eye contact while managing computer tasks.

  13. Shall we use non-verbal fluency in schizophrenia? A pilot study.

    Science.gov (United States)

    Rinaldi, Romina; Trappeniers, Julie; Lefebvre, Laurent

    2014-05-30

    Over the last few years, numerous studies have attempted to explain fluency impairments in people with schizophrenia, leading to heterogeneous results. This could notably be due to the fact that fluency is often assessed in its verbal form, where semantic dimensions are implied. In order to gain an in-depth understanding of fluency deficits, a non-verbal fluency task, the Five-Point Test (5PT), was administered to 24 patients with schizophrenia and to 24 healthy subjects matched for age, gender, and schooling. The 5PT involves producing as many abstract figures as possible within 1 min by connecting points with straight lines. All subjects also completed the Frontal Assessment Battery (FAB), while those with schizophrenia were further assessed using the Positive and Negative Syndrome Scale (PANSS). Results show that the 5PT differentiates patients from healthy subjects with regard to the number of figures produced. Patients' results also suggest that the number of figures produced is linked to overall executive functioning and to some inhibition components. Although this study is a first step in the non-verbal fluency research field, we believe that experimental psychopathology could benefit from investigations of non-verbal fluency.

  14. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 or 250 ms in duration. The MMN amplitude was enhanced for both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.
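    The stimulus contrast described above (a 500 Hz pure tone versus complex tones carrying additional harmonic partials) is straightforward to reproduce. The sketch below is a hypothetical illustration, not the study's stimulus code; the sampling rate and equal partial amplitudes are assumptions.

    ```python
    import numpy as np

    def make_tone(f0, n_partials, duration_s, sr=44100):
        """Synthesize a complex tone: f0 plus equal-amplitude harmonic partials.

        n_partials=1 gives a pure sinusoid; n_partials=3 adds partials at
        2*f0 and 3*f0 (i.e. 500, 1000, 1500 Hz for f0=500), loosely mirroring
        the "spectrally rich" stimuli described above.
        """
        t = np.arange(int(duration_s * sr)) / sr
        tone = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_partials + 1))
        return tone / n_partials  # keep peak amplitude comparable across stimuli

    pure = make_tone(500, 1, 0.100)             # 100 ms pure-tone standard
    rich = make_tone(500, 3, 0.100)             # three harmonic partials, 500-1500 Hz
    deviant = make_tone(500 * 1.025, 3, 0.100)  # 2.5% pitch-change deviant
    ```

    In an MMN design such tones would be interleaved, with the deviant presented rarely among the standards.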

  15. Mood As Cumulative Expectation Mismatch: A Test of Theory Based on Data from Non-verbal Cognitive Bias Tests

    Directory of Open Access Journals (Sweden)

    Camille M. C. Raoult

    2017-12-01

    Affective states are known to influence behavior and cognitive processes. To assess mood (moderately long-term affective states), the cognitive judgment bias test was developed and has been widely used in various animal species. However, little is known about how mood changes, how mood can be experimentally manipulated, and how mood then feeds back into cognitive judgment. A recent theory argues that mood reflects the cumulative impact of differences between obtained outcomes and expectations. Here, expectations refer to an established context. Situations in which an established context fails to match an outcome are then perceived as mismatches of expectation and outcome. We take advantage of the large number of studies published on non-verbal cognitive bias tests in recent years (95 studies with a total of 162 independent tests) to test whether cumulative mismatch could indeed have led to the observed mood changes. Based on a criteria list, we assessed whether mismatch had occurred with the experimental procedure used to induce mood (mood induction mismatch) or in the context of the non-verbal cognitive bias procedure (testing mismatch). For the mood induction mismatch, we scored the mismatch between the subjects' potential expectations and the manipulations conducted for inducing mood whereas, for the testing mismatch, we scored mismatches that may have occurred during the actual testing. We then investigated whether these two types of mismatch can predict the actual outcome of the cognitive bias study. The present evaluation shows that mood induction mismatch cannot well predict the success of a cognitive bias test. On the other hand, testing mismatch can modulate or even invert the expected outcome. We think cognitive bias studies should more specifically aim at creating expectation mismatch while inducing mood states, to test the cumulative mismatch theory more properly. Furthermore, testing mismatch should be avoided as much as possible.

  16. Noise control, sound, and the vehicle design process

    Science.gov (United States)

    Donavan, Paul

    2005-09-01

    For many products, noise and sound are viewed as necessary evils that need to be dealt with in order to bring the product successfully to market. They are generally not product "exciters", although some vehicle manufacturers do tune and advertise specific sounds to enhance the perception of their products. In this paper, influencing the design process for the "evils", such as wind noise and road noise, is considered in more detail. There are three ingredients to successfully dealing with these evils in the design process. The first is knowing how excess noise affects the end customer in a tangible manner, and how that affects customer satisfaction and, ultimately, sales. The second is having and delivering the knowledge of what is required of the design to achieve a satisfactory or even better level of noise performance. The third is having the commitment of the designers to incorporate that knowledge into their part, subsystem, or system. In this paper, the elements of each of these ingredients are discussed in some detail and the attributes of a successful design process are enumerated.

  17. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  18. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies, participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and a longer-term pattern. The local pattern was defined by a regularly repeating pure tone occasionally interrupted by a rare deviating tone (p = 0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones, and participants were asked to ignore them while focusing attention on a movie with subtitles. Auditory evoked potentials revealed long-lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or as common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information value to the two tones, which in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work revealing that probabilistic information is not faithfully represented in these early evoked potentials, instead exposing the influence of predictability.
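    The two timescales of patterning described above can be made concrete with a toy sequence generator. The sketch below is an illustrative assumption, not the authors' stimulus code: tone 'A' starts as the common standard and 'B' as the rare deviant (p = 0.125), and the roles swap partway through to create the longer-term alternation.

    ```python
    import random

    def oddball_sequence(n_tones, p_deviant=0.125, reverse_at=None, seed=0):
        """Generate a two-tone oddball sequence with an optional role reversal.

        Before reverse_at, 'A' is the common standard and 'B' the rare deviant;
        from reverse_at onward the probabilities swap, so the tone first
        encountered as rare becomes common and vice versa.
        """
        rng = random.Random(seed)
        seq = []
        for i in range(n_tones):
            rare = 'B' if (reverse_at is None or i < reverse_at) else 'A'
            common = 'A' if rare == 'B' else 'B'
            seq.append(rare if rng.random() < p_deviant else common)
        return seq

    seq = oddball_sequence(800, reverse_at=400)
    ```

    Whether the rare tone differs in duration (30 ms vs. 60 ms) or in frequency (1000 Hz vs. 1500 Hz) is then simply a physical property attached to each label.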

  19. Deaf children’s non-verbal working memory is impacted by their language experience

    Directory of Open Access Journals (Sweden)

    Chloe eMarshall

    2015-05-01

    Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study, we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n = 27), deaf native users of British Sign Language (BSL; n = 7), and deaf non-native signers (n = 19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native-signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality, spoken or signed, rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.

  20. Network structure underlying resolution of conflicting non-verbal and verbal social information.

    Science.gov (United States)

    Watanabe, Takamitsu; Yahata, Noriaki; Kawakubo, Yuki; Inoue, Hideyuki; Takano, Yosuke; Iwashiro, Norichika; Natsubori, Tatsunobu; Takao, Hidemasa; Sasaki, Hiroki; Gonoi, Wataru; Murakami, Mizuho; Katsura, Masaki; Kunimatsu, Akira; Abe, Osamu; Kasai, Kiyoto; Yamasue, Hidenori

    2014-06-01

    Social judgments often require resolution of incongruity in communication contents. Although previous studies revealed that such conflict resolution recruits brain regions including the medial prefrontal cortex (mPFC) and posterior inferior frontal gyrus (pIFG), functional relationships and networks among these regions remain unclear. In this functional magnetic resonance imaging study, we investigated the functional dissociation and networks by measuring human brain activity during resolving incongruity between verbal and non-verbal emotional contents. First, we found that the conflict resolutions biased by the non-verbal contents activated the posterior dorsal mPFC (post-dmPFC), bilateral anterior insula (AI) and right dorsal pIFG, whereas the resolutions biased by the verbal contents activated the bilateral ventral pIFG. In contrast, the anterior dmPFC (ant-dmPFC), bilateral superior temporal sulcus and fusiform gyrus were commonly involved in both of the resolutions. Second, we found that the post-dmPFC and right ventral pIFG were hub regions in networks underlying the non-verbal- and verbal-content-biased resolutions, respectively. Finally, we revealed that these resolution-type-specific networks were bridged by the ant-dmPFC, which was recruited for the conflict resolutions earlier than the two hub regions. These findings suggest that, in social conflict resolutions, the ant-dmPFC selectively recruits one of the resolution-type-specific networks through its interaction with resolution-type-specific hub regions.

  1. School effects on non-verbal intelligence and nutritional status in rural Zambia

    OpenAIRE

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2015-01-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., characteristics of school organization and of the teacher and student body) associated with the non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and ...
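    The between-school variance shares reported above (23.5% for NI, 7.7% for BMI) are intraclass correlations. As a rough, hypothetical illustration of how such a share can be estimated, the sketch below applies a one-way ANOVA estimator to simulated data; it is not the study's HLM analysis, and all the numbers in it are made up.

    ```python
    import numpy as np

    def icc_oneway(groups):
        """ANOVA-based estimate of the intraclass correlation ICC(1):
        the share of total variance attributable to between-group
        differences. Assumes equal group sizes."""
        groups = [np.asarray(g, dtype=float) for g in groups]
        k, n = len(groups), len(groups[0])
        grand = np.mean(np.concatenate(groups))
        # Mean squares between and within groups
        msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
        msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
        return (msb - msw) / (msb + (n - 1) * msw)

    # Simulate 50 "schools" of 30 pupils each, with a school-level random
    # effect (between-school var 1, within-school var 4, so true ICC = 0.2).
    rng = np.random.default_rng(42)
    schools = [rng.normal(loc=rng.normal(0, 1.0), scale=2.0, size=30)
               for _ in range(50)]
    icc = icc_oneway(schools)
    ```

    HLM software estimates the same quantity via maximum likelihood rather than ANOVA mean squares, but the interpretation of the variance share is the same.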

  2. Linguistic analysis of verbal and non-verbal communication in the operating room.

    Science.gov (United States)

    Moore, Alison; Butt, David; Ellis-Clarke, Jodie; Cartmill, John

    2010-12-01

    Surgery can be a triumph of co-operation, the procedure evolving as a result of joint action between multiple participants. The communication that mediates the joint action of surgery is conveyed by verbal but particularly by non-verbal signals. Competing priorities superimposed by surgical learning must also be negotiated within this context and this paper draws on techniques of systemic functional linguistics to observe and analyse the flow of information during such a phase of surgery.

  3. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    Directory of Open Access Journals (Sweden)

    Mirna Lie Hosogi Senaha

    Abstract Selective disturbances of semantic memory have attracted the interest of many investigators, and the question of the existence of single or multiple semantic systems remains very controversial in the literature. Objectives: To discuss the question of multiple semantic systems based on a longitudinal study of a patient who progressed from fluent primary progressive aphasia to semantic dementia. Methods: A 66-year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three matched control individuals. Results: In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, and preservation of the syntactic and phonological aspects of language, as well as surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance imaging of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe and the semantic disturbance, initially limited to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions: Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system.

  4. [Non-verbal communication and executive function impairment after traumatic brain injury: a case report].

    Science.gov (United States)

    Sainson, C

    2007-05-01

    Following post-traumatic impairment in executive function, failure to adjust to communication situations often creates major obstacles to social and professional reintegration. The analysis of pathological verbal communication has been based on clinical scales since the 1980s, but that of nonverbal elements has been neglected, although their importance should be acknowledged. The aim of this research was to study non-verbal aspects of communication in a case of executive-function impairment after traumatic brain injury. During the patient's conversation with an interlocutor, all nonverbal parameters - coverbal gestures, gaze, posture, proxemics and facial expressions - were studied in as much an ecological way as possible, to closely approximate natural conversation conditions. Such an approach highlights the difficulties such patients experience in communicating, difficulties of a pragmatic kind, that have so far been overlooked by traditional investigations, which mainly take into account the formal linguistic aspects of language. The analysis of the patient's conversation revealed non-verbal dysfunctions, not only on a pragmatic and interactional level but also in terms of enunciation. Moreover, interactional adjustment phenomena were noted in the interlocutor's behaviour. The two inseparable aspects of communication - verbal and nonverbal - should be equally assessed in patients with communication difficulties; highlighting distortions in each area might bring about an improvement in the rehabilitation of such people.

  5. Exploring Children’s Peer Relationships through Verbal and Non-verbal Communication: A Qualitative Action Research Focused on Waldorf Pedagogy

    Directory of Open Access Journals (Sweden)

    Aida Milena Montenegro Mantilla

    2007-12-01

    This study analyzes the relationships that children around seven to eight years old establish in a classroom. It shows that peer relationships have a positive dimension, with features such as the development of children's creativity to communicate and to modify norms. These features were found through an analysis of children's verbal and non-verbal communication and an interdisciplinary view of children's learning processes drawing on Rudolf Steiner, founder of Waldorf pedagogy, and on Jean Piaget and Lev Vygotsky, specialists in children's cognitive and social dimensions. This research is an invitation to recognize children's capacity to construct their own rules in peer relationships.

  6. Contrasting visual working memory for verbal and non-verbal material with multivariate analysis of fMRI

    Science.gov (United States)

    Habeck, Christian; Rakitin, Brian; Steffener, Jason; Stern, Yaakov

    2012-01-01

    We performed a delayed-item-recognition task to investigate the neural substrates of non-verbal visual working memory with event-related fMRI (‘Shape task’). Twenty-five young subjects (mean age: 24.0 years; SD = 3.8 years) were instructed to study a list of either 1, 2 or 3 unnamable nonsense line drawings for 3 seconds (‘stimulus phase’ or STIM). Subsequently, the screen went blank for 7 seconds (‘retention phase’ or RET), and then displayed a probe stimulus for 3 seconds, during which subjects indicated with a differential button press whether or not the probe was contained in the studied shape array (‘probe phase’ or PROBE). Ordinal Trend Canonical Variates Analysis (Habeck et al., 2005a) was performed to identify spatial covariance patterns that showed a monotonic increase in expression with memory load during all task phases. Reliable load-related patterns were identified in the stimulus and retention phases (p < …), with expression increasing across memory loads (p < …): regions that increased with memory load, and mediofrontal and temporal regions that were decreasing. Mean subject expression of both patterns across memory load during retention also correlated positively with recognition accuracy (dL) in the Shape task (p < …) […] rehearsal processes. Encoding processes, on the other hand, are critically dependent on the to-be-remembered material, and seem to necessitate material-specific neural substrates. PMID:22652306
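
    The recognition-accuracy measure dL above is assumed here to be the logistic analogue of d', computed from hit and false-alarm rates; the sketch below shows only that standard formula, not the study's actual analysis code.

```python
import math

def d_logistic(hit_rate, fa_rate):
    """Logistic discriminability d_L = ln[ H(1-F) / ((1-H)F) ]:
    0 at chance, increasingly positive for better recognition."""
    H, F = hit_rate, fa_rate
    return math.log((H * (1 - F)) / ((1 - H) * F))

chance = d_logistic(0.5, 0.5)   # hits = false alarms -> 0.0
good = d_logistic(0.9, 0.1)     # clearly above chance
```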

  7. Linking social cognition with social interaction: Non-verbal expressivity, social competence and "mentalising" in patients with schizophrenia spectrum disorders

    Directory of Open Access Journals (Sweden)

    Lehmkämper Caroline

    2009-01-01

    Full Text Available Abstract Background Research has shown that patients with schizophrenia spectrum disorders (SSD can be distinguished from controls on the basis of their non-verbal expression. For example, patients with SSD use facial expressions less than normals to invite and sustain social interaction. Here, we sought to examine whether non-verbal expressivity in patients corresponds with their impoverished social competence and neurocognition. Method Fifty patients with SSD were videotaped during interviews. Non-verbal expressivity was evaluated using the Ethological Coding System for Interviews (ECSI. Social competence was measured using the Social Behaviour Scale and psychopathology was rated using the Positive and Negative Symptom Scale. Neurocognitive variables included measures of IQ, executive functioning, and two mentalising tasks, which tapped into the ability to appreciate mental states of story characters. Results Non-verbal expressivity was reduced in patients relative to controls. Lack of "prosocial" nonverbal signals was associated with poor social competence and, partially, with impaired understanding of others' minds, but not with non-social cognition or medication. Conclusion This is the first study to link deficits in non-verbal expressivity to levels of social skills and awareness of others' thoughts and intentions in patients with SSD.

  8. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    Science.gov (United States)

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

    Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, ), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.
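
    Burstiness of a behavioral event stream is commonly quantified with the Goh-Barabási coefficient over inter-event intervals; assuming an estimator of this family (the paper's exact estimator may differ), a minimal sketch:

```python
import numpy as np

def burstiness(event_times):
    """Burstiness B = (sigma - mu) / (sigma + mu) of the inter-event
    intervals (Goh & Barabási, 2008): -1 for a perfectly regular stream,
    ~0 for a Poisson process, approaching 1 for highly bursty behavior."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

# Evenly spaced events: maximally regular.
regular = burstiness(np.arange(0.0, 60.0, 2.0))

# Two dense clusters separated by a long lull: bursty.
rng = np.random.default_rng(0)
clustered = np.concatenate([rng.uniform(0, 1, 50), rng.uniform(59, 60, 50)])
bursty = burstiness(clustered)
```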

  9. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally, it is only possible to determine distance or sound velocity if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity simultaneously in media with moving scattering particles. Since the focal position also depends on sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and the sound velocity from the measured time of flight to the focus, which corresponds to the maximum of the averaged echo-signal amplitude. An annular array is used to move the focal position along the acoustic axis. This allows locally resolved measurement of the sound velocity without any prior knowledge of the acoustic medium and without a reference reflector. Previous publications demonstrated the functional efficiency of this method for media with constant velocities. In this work, the accuracy of these measurements is improved. Furthermore, first measurements and simulations are presented for non-homogeneous media; to this end, an experimental set-up was created that generates a linear temperature gradient, which in turn produces a gradient in sound velocity.
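
    The inversion step can be pictured as follows. The calibration curve below is a toy monotonic model (real curves come from measurement or simulation, and the velocity range and focal behavior are assumptions), but the lookup logic matches the described idea: match the measured time of flight to the focus against calibrated candidates.

```python
import numpy as np

# Hypothetical calibration: for each candidate sound velocity, a focal
# distance of the annular array (toy quadratic model, for illustration only).
velocities = np.linspace(1400.0, 1600.0, 201)        # candidate velocities (m/s)
focal_dist = 0.05 * (velocities / 1480.0) ** 2       # toy calibration curve (m)
tof_focus = 2.0 * focal_dist / velocities            # round-trip time of flight (s)

def estimate(measured_tof):
    """Invert the calibration: pick the candidate velocity whose predicted
    time of flight to the focus best matches the measured one."""
    i = int(np.argmin(np.abs(tof_focus - measured_tof)))
    return velocities[i], focal_dist[i]

# Synthetic echo measured in a medium at 1480 m/s.
v_est, d_est = estimate(0.1 / 1480.0)
```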

  10. Competing sound sources reveal spatial effects in cortical processing.

    Directory of Open Access Journals (Sweden)

    Ross K Maddox

    Why is spatial tuning in auditory cortex weak, even though location is important to object recognition in natural settings? This question continues to vex neuroscientists focused on linking physiological results to auditory perception. Here we show that the spatial locations of simultaneous, competing sound sources dramatically influence how well neural spike trains recorded from the zebra finch field L (an analog of mammalian primary auditory cortex) encode source identity. We find that the location of a birdsong played in quiet has little effect on the fidelity of the neural encoding of the song. However, when the song is presented along with a masker, spatial effects are pronounced. For each spatial configuration, a subset of neurons encodes song identity more robustly than others. As a result, competing sources from different locations dominate the responses of different neural subpopulations, helping to separate neural responses into independent representations. These results help elucidate how cortical processing exploits spatial information to provide a substrate for selective spatial auditory attention.

  11. Patterns of non-verbal social interactions within intensive mathematics intervention contexts

    Science.gov (United States)

    Thomas, Jonathan Norris; Harkness, Shelly Sheats

    2016-06-01

    This study examined the non-verbal patterns of interaction within an intensive mathematics intervention context. Specifically, the authors draw on a social constructivist worldview to examine a teacher's use of gesture in this setting. The teacher conducted a series of longitudinal teaching experiments with a small number of young, school-age children in the context of early arithmetic development. From these experiments, the authors gathered extensive video records of teaching practice and, from an inductive analysis of these records, identified three distinct patterns of teacher gesture: behavior eliciting, behavior suggesting, and behavior replicating. Awareness of their potential to influence students via gesture may prompt teachers to attend more closely to their own interactions with mathematical tools and to take these interactions into consideration when forming interpretations of students' cognition.

  12. Judging the urgency of non-verbal auditory alarms: a case study.

    Science.gov (United States)

    Arrabito, G Robert; Mondor, Todd; Kent, Kimberley

    2004-06-22

    When designed correctly, non-verbal auditory alarms can convey different levels of urgency to the aircrew, and thereby permit the operator to establish the appropriate level of priority to address the alarmed condition. The conveyed level of urgency of five non-verbal auditory alarms presently used in the Canadian Forces CH-146 Griffon helicopter was investigated. Pilots of the CH-146 Griffon helicopter and non-pilots rated the perceived urgency of the signals using a rating scale. The pilots also ranked the urgency of the alarms in a post-experiment questionnaire to reflect their assessment of the actual situations that trigger the alarms. The results revealed that participants' ratings of perceived urgency appear to be based on acoustic properties of the alarms that are known to affect a listener's perceived level of urgency. Although for 28% of the pilots the mapping of perceived urgency onto the urgency of their perception of the triggering situation was statistically significant for three of the five alarms, the overall data suggest that the triggering situations are not adequately conveyed by the acoustic parameters inherent in the alarms. The pilots' judgement of the triggering situation was intended as a means of evaluating the reliability of the alerting system. These data are discussed with respect to proposed enhancements in alerting systems as they relate to the problem of phase of flight. The results call for more serious consideration of situational awareness in the design and assignment of auditory alarms in aircraft.

  13. Individual differences in non-verbal number acuity correlate with maths achievement.

    Science.gov (United States)

    Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa

    2008-10-02

    Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.

  14. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  15. Contextual analysis of human non-verbal guide behaviors to inform the development of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    This paper reports the first step in a series of studies to design the interaction behaviors of an outdoor robotic guide. We describe and report the use case development carried out to identify effective human tour guide behaviors. In this paper we focus on non-verbal communication cues in gaze,

  16. Treating depressive symptoms in psychosis : A Network Meta-Analysis on the Effects of Non-Verbal Therapies

    NARCIS (Netherlands)

    Steenhuis, L. A.; Nauta, M. H.; Bockting, C. L. H.; Pijnenborg, G. H. M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  17. The similar effects of verbal and non-verbal intervening tasks on word recall in an elderly population.

    Science.gov (United States)

    Williams, B R; Sullivan, S K; Morra, L F; Williams, J R; Donovick, P J

    2014-01-01

    Vulnerability to retroactive interference has been shown to increase with cognitive aging. Consistent with the findings of the memory and aging literature, the authors of the California Verbal Learning Test-II (CVLT-II) suggest that a non-verbal task be administered during the test's delay interval to minimize the effects of retroactive interference on delayed recall. The goal of the present study was to determine the extent to which retroactive interference caused by non-verbal and verbal intervening tasks affects recall of verbal information in non-demented older adults. The effects of retroactive interference on recall of words during Long-Delay recall on the CVLT-II were evaluated. Participants included 85 adults aged 60 and older. During a 20-minute delay interval on the CVLT-II, participants received either a verbal (WAIS-III Vocabulary or Peabody Picture Vocabulary Test-IIIB) or non-verbal (Raven's Standard Progressive Matrices or WAIS-III Block Design) intervening task. As in previous research with young adults (Williams & Donovick, 2008), older adults recalled the same number of words across all groups, regardless of the type of intervening task. These findings suggest that administering verbal intervening tasks during the CVLT-II does not elicit more retroactive interference than administering non-verbal tasks, and thus verbal tasks need not be avoided during the delay interval of the CVLT-II.

  19. Adults with Asperger Syndrome with and without a Cognitive Profile Associated with "Non-Verbal Learning Disability." A Brief Report

    Science.gov (United States)

    Nyden, Agneta; Niklasson, Lena; Stahlberg, Ola; Anckarsater, Henrik; Dahlgren-Sandberg, Annika; Wentz, Elisabet; Rastam, Maria

    2010-01-01

    Asperger syndrome (AS) and non-verbal learning disability (NLD) are both characterized by impairments in motor coordination, visuo-perceptual abilities, pragmatics and comprehension of language and social understanding. NLD is also defined as a learning disorder affecting functions in the right cerebral hemisphere. The present study investigates…

  20. Near Real-Time Comprehension Classification with Artificial Neural Networks: Decoding e-Learner Non-Verbal Behavior

    Science.gov (United States)

    Holmes, Mike; Latham, Annabel; Crockett, Keeley; O'Shea, James D.

    2018-01-01

    Comprehension is an important cognitive state for learning. Human tutors recognize comprehension and non-comprehension states by interpreting learner non-verbal behavior (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffold in the context of perceived learner comprehension. Near real-time…

  1. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    Science.gov (United States)

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  2. Role of Auditory Non-Verbal Working Memory in Sentence Repetition for Bilingual Children with Primary Language Impairment

    Science.gov (United States)

    Ebert, Kerry Danahy

    2014-01-01

    Background: Sentence repetition performance is attracting increasing interest as a valuable clinical marker for primary (or specific) language impairment (LI) in both monolingual and bilingual populations. Multiple aspects of memory appear to contribute to sentence repetition performance, but non-verbal memory has not yet been considered. Aims: To…

  3. The Efficiency of Peer Teaching of Developing Non Verbal Communication to Children with Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    Alshurman, Wael; Alsreaa, Ihsani

    2015-01-01

    This study aimed to identify the efficiency of peer teaching in developing non-verbal communication in children with autism spectrum disorder (ASD). The study was carried out on a sample of 10 children with ASD, diagnosed according to the criteria adopted at the Al-taif qualification center in (2013) in The…

  4. Measuring Verbal and Non-Verbal Communication in Aphasia: Reliability, Validity, and Sensitivity to Change of the Scenario Test

    Science.gov (United States)

    van der Meulen, Ineke; van de Sandt-Koenderman, W. Mieke E.; Duivenvoorden, Hugo J.; Ribbers, Gerard M.

    2010-01-01

    Background: This study explores the psychometric qualities of the Scenario Test, a new test to assess daily-life communication in severe aphasia. The test is innovative in that it: (1) examines the effectiveness of verbal and non-verbal communication; and (2) assesses patients' communication in an interactive setting, with a supportive…

  5. Verbal and non-verbal behaviour and patient perception of communication in primary care: an observational study.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Gashi, Shkelzen; Bikker, Annemieke; Mercer, Stewart

    2015-06-01

    Few studies have assessed the importance of a broad range of verbal and non-verbal consultation behaviours. The aim was to explore the relationship between observer ratings of behaviours in videotaped consultations and patients' perceptions. This was an observational study in general practices close to Southampton, Southern England. Verbal and non-verbal behaviour was rated by independent observers blind to outcome. Patients completed the Medical Interview Satisfaction Scale (MISS; primary outcome) and questionnaires addressing other communication domains. In total, 275/360 consultations from 25 GPs had usable videotapes. Higher MISS scores were associated with slight forward lean (a 0.02 increase for each degree of lean, 95% confidence interval [CI] = 0.002 to 0.03), the number of gestures (0.08, 95% CI = 0.01 to 0.15), 'back-channelling' (for example, saying 'mmm') (0.11, 95% CI = 0.02 to 0.2), and social talk (0.29, 95% CI = 0.4 to 0.54). Starting the consultation with professional coolness ('aloof') was helpful, and optimism unhelpful. Finishing with non-verbal 'cut-offs' (for example, looking away), being professionally cool ('aloof'), or being patronising ('infantilising') resulted in poorer ratings. Physical contact was also important, but traditional verbal communication was not. These exploratory results require confirmation, but suggest that patients may be responding to several non-verbal behaviours and non-specific verbal behaviours, such as social talk and back-channelling, more than to traditional verbal behaviours. A changing consultation dynamic may also help: from professional 'coolness' at the beginning of the consultation to becoming warmer, and avoiding non-verbal cut-offs at the end. © British Journal of General Practice 2015.

  6. School effects on non-verbal intelligence and nutritional status in rural Zambia.

    Science.gov (United States)

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E; Grigorenko, Elena L

    2016-02-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and the teacher and student body) associated with the non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and 75.9% of the between-school differences in NI and BMI, respectively. Grade-specific HLM yielded higher between-school variation of NI (41%) and BMI (14.6%) for students in grade 3 compared to grades 4 to 7. School factors showed a differential pattern of associations with NI and BMI across grades. The distance to a health post and the teacher's teaching experience were the strongest predictors of NI (particularly in grades 4, 6 and 7); the presence of a preschool was linked to lower BMI in grades 4 to 6. Implications for improving access to and quality of education in rural Zambia are discussed.
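
    The "% of variance between schools" figures are intraclass correlations from an intercept-only HLM. A minimal sketch of that ratio, with toy variance components chosen to reproduce the reported 23.5% (the actual component estimates are not given in the abstract):

```python
def icc(between_school_var, within_school_var):
    """Intraclass correlation from a null (intercept-only) multilevel model:
    the share of total variance that lies between schools."""
    return between_school_var / (between_school_var + within_school_var)

# Toy components summing to 100 for readability -> 23.5% between schools.
share = icc(23.5, 76.5)
```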

  7. Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.

    Science.gov (United States)

    Gillis, Randall L; Nilsen, Elizabeth S

    2017-06-01

    Listeners are exposed to inconsistencies in communication, for example when speakers' words (i.e. verbal cues) are discrepant with their demonstrated emotions (i.e. non-verbal cues). Such inconsistencies introduce ambiguity, which may render a speaker a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect) over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, the findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

  8. Relationship of Non-Verbal Intelligence Materials as Catalyst for Academic Achievement and Peaceful Co-Existence among Secondary School Students in Nigeria

    Science.gov (United States)

    Sambo, Aminu

    2015-01-01

    This paper examines students' performance on non-verbal intelligence tests relative to the academic achievement of selected secondary school students. Two hypotheses were formulated to generate data for analysis. Two non-verbal intelligence tests, viz. Raven's Standard Progressive Matrices (SPM) and the AH4 Part II…

  9. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    OpenAIRE

    Anna S. Kobysheva; Viktoria A. Nakaeva

    2017-01-01

    The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  11. Visuospatial working memory for locations, colours, and binding in typically developing children and in children with dyslexia and non-verbal learning disability.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Tripodi, Doriana; Cornoldi, Cesare

    2014-03-01

    This study examined forward and backward recall of locations and colours, and the binding of locations and colours, comparing typically developing children aged between 8 and 10 years with two groups of children of the same age with learning disabilities (dyslexia in one group, non-verbal learning disability [NLD] in the other). Results showed that the groups with learning disabilities had different visuospatial working memory problems, and that children with NLD had particular difficulties in the backward recall of locations. The differences between the groups disappeared, however, when locations and colours were bound together. It was concluded that specific processes may be involved in children's binding and backward recall of different types of information, as these are not simply the result of combining the single processes needed to recall single features. © 2013 The British Psychological Society.

  12. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    Science.gov (United States)

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

    Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they listened to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The participants' task was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after sound onset, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of the participants: musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of the neural dynamics of auditory processing and reveal how it is impacted by stimulus category and participant expertise. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
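
    As a rough illustration of the ERP logic described above (a toy simulation, not the study's pipeline; the sampling rate, window and amplitudes are invented), averaging stimulus-locked epochs suppresses trial-to-trial noise and lets an early-window amplitude difference between conditions emerge:

```python
import numpy as np

fs = 500                                  # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch: -100 ms to +500 ms
rng = np.random.default_rng(0)

def make_epochs(n_trials, peak):
    """Toy single-trial EEG: a Gaussian peak near 100 ms buried in noise."""
    signal = peak * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
    return signal + rng.normal(0.0, 2.0, size=(n_trials, t.size))

# The ERP is the average over stimulus-locked trials per condition.
erp_music = make_epochs(200, peak=5.0).mean(axis=0)
erp_voice = make_epochs(200, peak=3.0).mean(axis=0)

# Compare mean amplitude in an early analysis window (50-150 ms).
win = (t >= 0.05) & (t <= 0.15)
early_diff = erp_music[win].mean() - erp_voice[win].mean()
```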

  13. Letter-sound processing deficits in children with developmental dyslexia: An ERP study.

    Science.gov (United States)

    Moll, Kristina; Hasko, Sandra; Groth, Katharina; Bartling, Jürgen; Schulte-Körne, Gerd

    2016-04-01

    The time course of letter-sound processing was investigated in children with developmental dyslexia (DD) and typically developing (TD) children using electroencephalography. Thirty-eight children with DD and 25 TD children participated in a visual-auditory oddball paradigm. Event-related potentials (ERPs) elicited by standard and deviant stimuli in an early (100-190 ms) and a late (560-750 ms) time window were analysed. In the early time window, ERPs elicited by the deviant stimulus were delayed and less left-lateralized over fronto-temporal electrodes for children with DD compared to TD children. In the late time window, children with DD showed higher amplitudes extending more over right frontal electrodes. Longer latencies in the early time window and stronger right-hemispheric activation in the late time window were associated with slower reading and naming speed. Additionally, stronger right-hemispheric activation in the late time window correlated with poorer phonological awareness skills. Deficits in early stages of letter-sound processing influence later, more explicit cognitive processes during letter-sound processing. Identifying the neurophysiological correlates of letter-sound processing and their relation to reading-related skills provides insight into the degree of automaticity of letter-sound processing beyond behavioural measures of letter-sound knowledge. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Computerized training of non-verbal reasoning and working memory in children with intellectual disability

    Directory of Open Access Journals (Sweden)

    Stina eSöderqvist

    2012-10-01

    Children with intellectual disabilities show deficits in both reasoning ability and working memory (WM) that impact everyday functioning and academic achievement. In this study we investigated the feasibility of cognitive training for improving WM and non-verbal reasoning (NVR) ability in children with intellectual disability. Participants were randomized to a 5-week adaptive training program (intervention group) or a non-adaptive version of the program (active control group). Cognitive assessments were conducted prior to and directly after training, and one year later, to examine the effects of training. Improvements during training varied widely, and the amount of progress during training predicted transfer to WM and comprehension of instructions, with higher training progress being associated with greater transfer effects. The strongest predictors of training progress were gender, co-morbidity and baseline capacity on verbal WM; in particular, females without an additional diagnosis and with higher baseline performance showed greater progress. No significant effects of training were observed at the one-year follow-up, suggesting that training should be more intense or repeated in order for effects to persist in children with intellectual disabilities. A major finding of this study is that cognitive training is feasible in children with intellectual disabilities and can help improve their cognitive capacities. However, a minimum cognitive capacity or training ability seems necessary for the training to be beneficial, with some individuals showing little improvement in performance. Future studies of cognitive training should take into consideration how inter-individual differences in training progress influence transfer effects and further investigate how baseline capacities predict training outcomes.
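
    The adaptive-versus-non-adaptive contrast rests on a simple principle: in the adaptive version, task difficulty tracks performance so it stays near the child's current capacity. A hedged sketch of one common rule (the actual training software's algorithm is not described in the abstract):

```python
def adaptive_difficulty(results, start=2, lo=2, hi=9):
    """1-up / 1-down adaptive rule: raise the difficulty level (e.g. memory
    span) after a correct trial, lower it after an error, clamped to
    [lo, hi]. `results` is a sequence of booleans; returns the level trace."""
    level = start
    trace = [level]
    for correct in results:
        level = min(hi, level + 1) if correct else max(lo, level - 1)
        trace.append(level)
    return trace

trace = adaptive_difficulty([True, True, True, False, True])
# level moves 2 -> 3 -> 4 -> 5 -> 4 -> 5, hovering near capacity
```

A non-adaptive control would simply keep `level` fixed at `start` for every trial.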

  15. Non-verbal communication between Registered Nurses Intellectual Disability and people with an intellectual disability: an exploratory study of the nurse's experiences. Part 1.

    Science.gov (United States)

    Martin, Anne-Marie; Connor-Fenelon, Maureen O'; Lyons, Rosemary

    2012-03-01

This is the first of two articles presenting the findings of a qualitative study which explored the experiences of Registered Nurses Intellectual Disability (RNIDs) of communicating with people with an intellectual disability who communicate non-verbally. The article reports and critically discusses the findings in the context of the policy and service delivery discourses of person-centredness, inclusion, choice and independence. Arguably, RNIDs are the professionals who most frequently encounter people with an intellectual disability and communication impairment. The results suggest that the communication studied is both complicated and multifaceted. An overarching category of 'familiarity/knowing the person' encompasses discrete but related themes and subthemes that explain the process: the RNID knowing the service-user; the RNID/service-user relationship; and the value of experience. People with an intellectual disability, their families and disability services are facing a time of great change, and RNIDs will have a crucial role in supporting this transition.

  16. Speech, Sound and Music Processing: Embracing Research in India

    DEFF Research Database (Denmark)

The Computer Music Modeling and Retrieval (CMMR) 2011 conference was the 8th event of this international series, and the first that took place outside Europe. Since its beginnings in 2003, this conference has been co-organized by the Laboratoire de Mécanique et d'Acoustique in Marseille, France, and the Department of Architecture, Design and Media Technology (ad:mt), University of Aalborg, Esbjerg, Denmark, and has taken place in France, Italy, Spain, and Denmark. Historically, CMMR offers a cross-disciplinary overview of current music information retrieval and sound modeling activities and related topics...... classical music and its impact in cognitive science are the focus of discussion. Eminent scientists from the USA, Japan, Sweden, France, Poland, Taiwan, India and other European and Asian countries have delivered state-of-the-art lectures in these areas every year at different places, providing an opportunity......

  17. Non-verbal communication between nurses and people with an intellectual disability: a review of the literature.

    Science.gov (United States)

    Martin, Anne-Marie; O'Connor-Fenelon, Maureen; Lyons, Rosemary

    2010-12-01

    This article critically synthesizes current literature regarding communication between nurses and people with an intellectual disability who communicate non-verbally. The unique context of communication between the intellectual disability nurse and people with intellectual disability and the review aims and strategies are outlined. Communication as a concept is explored in depth. Communication between the intellectual disability nurse and the person with an intellectual disability is then comprehensively examined in light of existing literature. Issues including knowledge of the person with intellectual disability, mismatch of communication ability, and knowledge of communication arose as predominant themes. A critical review of the importance of communication in nursing practice follows. The paucity of literature relating to intellectual disability nursing and non-verbal communication clearly indicates a need for research.

  18. Randomised controlled trial of a brief intervention targeting predominantly non-verbal communication in general practice consultations.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Mercer, Stewart

    2015-06-01

The impact of changing non-verbal consultation behaviours is unknown. The aim was to assess the effect of brief physician training on improving predominantly non-verbal communication, in a cluster randomised parallel group trial among adults aged ≥16 years attending general practices close to the study coordinating centres in Southampton. Sixteen GPs were randomised to no training, or to training consisting of a brief presentation of behaviours identified in a prior study (acronym KEPe Warm: demonstrating Knowledge of the patient; Encouraging [back-channelling by saying 'hmm', for example]; Physically engaging [touch, gestures, slight lean]; Warm-up: cool/professional initially, warming up, and avoiding distancing or non-verbal cut-offs at the end of the consultation), plus encouragement to reflect on videos of their consultations. Outcomes were the Medical Interview Satisfaction Scale (MISS) mean item score (1-7) and patients' perceptions of other domains of communication. Intervention participants scored higher on the MISS overall (0.23, 95% confidence interval [CI] = 0.06 to 0.41), with the largest changes in the distress-relief and perceived-relationship subscales. Significant improvements occurred in perceived communication/partnership (0.29, 95% CI = 0.09 to 0.49) and health promotion (0.26, 95% CI = 0.05 to 0.46). Non-significant improvements occurred in perceptions of a personal relationship, a positive approach, and understanding of the effects of the illness on life. Brief training of GPs in predominantly non-verbal communication in the consultation, together with reflection on consultation videotapes, improves patients' perceptions of satisfaction, distress relief, a partnership approach, and health promotion. © British Journal of General Practice 2015.

  19. Maternal postpartum depressive symptoms predict delay in non-verbal communication in 14-month-old infants.

    Science.gov (United States)

    Kawai, Emiko; Takagai, Shu; Takei, Nori; Itoh, Hiroaki; Kanayama, Naohiro; Tsuchiya, Kenji J

    2017-02-01

    We investigated the potential relationship between maternal depressive symptoms during the postpartum period and non-verbal communication skills of infants at 14 months of age in a birth cohort study of 951 infants and assessed what factors may influence this association. Maternal depressive symptoms were measured using the Edinburgh Postnatal Depression Scale, and non-verbal communication skills were measured using the MacArthur-Bates Communicative Development Inventories, which include Early Gestures and Later Gestures domains. Infants whose mothers had a high level of depressive symptoms (13+ points) during both the first month postpartum and at 10 weeks were approximately 0.5 standard deviations below normal in Early Gestures scores and 0.5-0.7 standard deviations below normal in Later Gestures scores. These associations were independent of potential explanations, such as maternal depression/anxiety prior to birth, breastfeeding practices, and recent depressive symptoms among mothers. These findings indicate that infants whose mothers have postpartum depressive symptoms may be at increased risk of experiencing delay in non-verbal development. Copyright © 2016 Elsevier Inc. All rights reserved.
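The study's risk criterion and effect sizes can be expressed in a few lines. The sketch below is illustrative only, with fabricated numbers: it applies the reported Edinburgh Postnatal Depression Scale (EPDS) cutoff of 13+ at both postpartum time points, and converts a gesture score to standard deviations from the cohort mean (a z-score); the function names and example values are assumptions, not from the study.

```python
# Hypothetical sketch: flag persistent postpartum depressive symptoms
# (EPDS >= 13 at both the first month and 10 weeks postpartum, per the
# study's criterion) and standardize a gesture score against the cohort.

def persistent_depressive_symptoms(epds_month1: int, epds_week10: int,
                                   cutoff: int = 13) -> bool:
    """True if the EPDS cutoff is met at both time points."""
    return epds_month1 >= cutoff and epds_week10 >= cutoff

def z_score(score: float, cohort_mean: float, cohort_sd: float) -> float:
    """Express a score in standard deviations from the cohort mean."""
    return (score - cohort_mean) / cohort_sd

if __name__ == "__main__":
    # Fabricated example: mother meets the cutoff at both time points;
    # infant scores 0.5 SD below the (invented) cohort mean of 28 (SD 14).
    print(persistent_depressive_symptoms(14, 15))        # True
    print(round(z_score(21.0, 28.0, 14.0), 2))           # -0.5
```

A deficit of "0.5-0.7 standard deviations below normal", as reported for the Later Gestures domain, corresponds to z-scores in the range -0.5 to -0.7 on this scale.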

  20. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. The aim was to study phonological measures and (central) auditory processing in children with speech sound disorder. This was a clinical and experimental study of 21 subjects with speech sound disorder, aged between 7.0 and 9.11 years, divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. Comparison of the tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 showed a strong tendency towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
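The decision rule built on the reported 0.54 cutoff is simple to sketch. The example below is not the authors' code: the definition of the index as processes per segment produced is an assumption for illustration, and only the cutoff value itself comes from the abstract.

```python
# Hypothetical sketch: compute a process density index and apply the
# study's reported cutoff (> 0.54 suggests referral for (central)
# auditory processing evaluation). The index definition here
# (phonological processes per segment produced) is assumed.

def process_density_index(num_phonological_processes: int,
                          num_segments_produced: int) -> float:
    """Density of phonological processes per produced segment."""
    if num_segments_produced == 0:
        raise ValueError("no segments produced")
    return num_phonological_processes / num_segments_produced

def needs_cap_evaluation(index: float, cutoff: float = 0.54) -> bool:
    """True if the index exceeds the cutoff reported in the study."""
    return index > cutoff

if __name__ == "__main__":
    idx = process_density_index(33, 50)  # fabricated counts -> 0.66
    print(round(idx, 2), needs_cap_evaluation(idx))
```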

  1. Fish protection at water intakes using a new signal development process and sound system

    International Nuclear Information System (INIS)

    Loeffelman, P.H.; Klinect, D.A.; Van Hassel, J.H.

    1991-01-01

American Electric Power Company, Inc., is exploring the feasibility of using a patented signal development process and sound system to guide aquatic animals with underwater sound. Sounds from animals such as chinook salmon, steelhead trout, striped bass, freshwater drum, largemouth bass, and gizzard shad can be used to synthesize a new signal that stimulates the animal in the most sensitive portion of its hearing range. AEP's field tests demonstrate that adult chinook salmon, steelhead trout, and warmwater fish, as well as steelhead trout and chinook salmon smolts, can be repelled with a properly tuned system. The signal development process and sound system are designed to be transportable and to use animals at the site, so as to incorporate site-specific factors known to affect underwater sound, e.g., bottom shape and type, water current, and temperature. This paper reports that, because the overall goal of this research was to determine the feasibility of using sound to divert fish, it was essential that the approach use a signal development process that could be customized to the animals and site conditions at any hydropower plant site.

  2. Frontal brain deactivation during a non-verbal cognitive judgement bias test in sheep.

    Science.gov (United States)

    Guldimann, Kathrin; Vögeli, Sabine; Wolf, Martin; Wechsler, Beat; Gygax, Lorenz

    2015-02-01

    Animal welfare concerns have raised an interest in animal affective states. These states also play an important role in the proximate control of behaviour. Due to their potential to modulate short-term emotional reactions, one specific focus is on long-term affective states, that is, mood. These states can be assessed by using non-verbal cognitive judgement bias paradigms. Here, we conducted a spatial variant of such a test on 24 focal animals that were kept under either unpredictable, stimulus-poor or predictable, stimulus-rich housing conditions to induce differential mood states. Based on functional near-infrared spectroscopy, we measured haemodynamic frontal brain reactions during 10 s in which the sheep could observe the configuration of the cognitive judgement bias trial before indicating their assessment based on the go/no-go reaction. We used (generalised) mixed-effects models to evaluate the data. Sheep from the unpredictable, stimulus-poor housing conditions took longer and were less likely to reach the learning criterion and reacted slightly more optimistically in the cognitive judgement bias test than sheep from the predictable, stimulus-rich housing conditions. A frontal cortical increase in deoxy-haemoglobin [HHb] and a decrease in oxy-haemoglobin [O2Hb] were observed during the visual assessment of the test situation by the sheep, indicating a frontal cortical brain deactivation. This deactivation was more pronounced with the negativity of the test situation, which was reflected by the provenance of the sheep from the unpredictable, stimulus-poor housing conditions, the proximity of the cue to the negatively reinforced cue location, or the absence of a go reaction in the trial. It seems that (1) sheep from the unpredictable, stimulus-poor in comparison to sheep from the predictable, stimulus-rich housing conditions dealt less easily with the test conditions rich in stimuli, that (2) long-term housing conditions seemingly did not influence mood

  3. A new signal development process and sound system for diverting fish from water intakes

    International Nuclear Information System (INIS)

    Klinet, D.A.; Loeffelman, P.H.; van Hassel, J.H.

    1992-01-01

This paper reports that American Electric Power Service Corporation has explored the feasibility of using a patented signal development process and underwater sound system to divert fish away from water intake areas. The effect of water intakes on fish is being closely scrutinized as hydropower projects are re-licensed. The overall goal of this four-year research project was to develop an underwater guidance system that is biologically effective, reliable, and cost-effective compared with other proposed methods of diversion, such as physical screens. Because different fish species have different hearing ranges, it was essential to the success of this experiment that the sound system be highly flexible. On the assumption that a fish's sounds are heard by other fish of the same species, it was necessary to develop a procedure and acquire instrumentation to properly analyze the sounds that the target fish species produce to communicate, as well as any artificial signals being generated for diversion.

  4. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
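The LSS/NLSS framing above amounts to decoding a most-likely label sequence over audio frames. The toy example below is only a sketch of that idea: a two-state HMM decoded with the Viterbi algorithm over discretized frame labels. The state names, the three-symbol feature alphabet, and all probabilities are invented for illustration; the paper's actual acoustic features and model are far more elaborate.

```python
# Toy two-state HMM (LSS vs. NLSS) with Viterbi decoding over discretized
# acoustic frames. All parameters below are assumptions for illustration.
import math

STATES = ("LSS", "NLSS")

LOG_START = {"LSS": math.log(0.6), "NLSS": math.log(0.4)}
LOG_TRANS = {
    "LSS":  {"LSS": math.log(0.9), "NLSS": math.log(0.1)},
    "NLSS": {"LSS": math.log(0.2), "NLSS": math.log(0.8)},
}
# Invented emissions: LSS frames tend to be "voiced"; NLSS frames
# (breaths, clicks) tend to be "noisy" or "silent".
LOG_EMIT = {
    "LSS":  {"voiced": math.log(0.7), "noisy": math.log(0.2), "silent": math.log(0.1)},
    "NLSS": {"voiced": math.log(0.1), "noisy": math.log(0.5), "silent": math.log(0.4)},
}

def viterbi(frames):
    """Return the most likely LSS/NLSS label sequence for the frames."""
    trellis = [{s: (LOG_START[s] + LOG_EMIT[s][frames[0]], None) for s in STATES}]
    for obs in frames[1:]:
        row = {}
        for s in STATES:
            prev = max(STATES, key=lambda p: trellis[-1][p][0] + LOG_TRANS[p][s])
            row[s] = (trellis[-1][prev][0] + LOG_TRANS[prev][s] + LOG_EMIT[s][obs], prev)
        trellis.append(row)
    # Backtrack from the best-scoring final state.
    state = max(STATES, key=lambda s: trellis[-1][s][0])
    path = [state]
    for row in reversed(trellis[1:]):
        state = row[state][1]
        path.append(state)
    return list(reversed(path))

if __name__ == "__main__":
    print(viterbi(["voiced", "voiced", "noisy", "silent", "voiced"]))
```

The transition probabilities encode the prior that frames of the same class tend to cluster in time, which is what lets the decoder smooth over isolated ambiguous frames.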

  5. Pedagogical and didactical rationale of phonemic stimulation process in pre-school age children

    Directory of Open Access Journals (Sweden)

    López, Yudenia

    2010-01-01

Full Text Available The paper describes the main results of a regional research project dealing with pre-school education. It examines the effectiveness of a didactic conception of the process of phonemic stimulation in children from 3 to 5 years old. The pedagogical and didactic rationale of the process, viewed from evolutionary, ontogenetic, and systemic perspectives, is explained, and possible scaffolding is illustrated. The suggested procedures focus support on systematic and purposeful practice that involves first the discrimination of non-verbal sounds and later the discrimination of verbal sounds, aiming at the development of phonological awareness.

  6. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia

    Directory of Open Access Journals (Sweden)

    Andreas eWidmann

    2012-03-01

Full Text Available Dyslexic and control first-grade school children were compared in a symbol-to-sound matching test based on a non-linguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude was significantly reduced over the left hemisphere, whereas the P3a was absent. Moreover, N2b amplitudes correlated significantly with reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR), reflecting synchronization of brain activity, in normal-reading children, as previously observed in healthy adults. Dyslexic children, however, showed no GBR, indicating that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in these children. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups; this desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, marked group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that the abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information, contributing to the reading impairment in dyslexia.

  7. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes.

    Science.gov (United States)

    Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M

    2016-01-01

Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language change over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via the frequency-following response, FFR), and cortical (via the P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Based on our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner and thus produce speech divergently, which eventually spreads across the community and contributes to sound change.

  8. Peculiarities of Stereotypes about Non-Verbal Communication and their Role in Cross-Cultural Interaction between Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2012-12-01

    Full Text Available The article is devoted to the analysis of the peculiarities of the stereotypes about non-verbal communication, formed in Russian and Chinese cultures. The results of the experimental research of the role of ethnic auto- and heterostereotypes about non-verbal communication in cross-cultural interaction between Russian and Chinese students of the Peoples’ Friendship University of Russia are presented.

  9. Prevalence of inter-hemispheric asymmetry in children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder.

    Science.gov (United States)

    Wajnsztejn, Alessandra Bernardes Caturani; Bianco, Bianca; Barbosa, Caio Parente

    2016-01-01

To describe the clinical and epidemiological features of children and adolescents with an interdisciplinary diagnosis of non-verbal learning disorder, and to investigate the prevalence of inter-hemispheric asymmetry in this population group. Cross-sectional study including children and adolescents referred for interdisciplinary assessment with learning difficulty complaints, who were given an interdisciplinary diagnosis of non-verbal learning disorder. The following variables were included in the analysis: sex-related prevalence, educational system, initial presumptive diagnoses and their respective prevalence, overall non-verbal learning disorder prevalence, prevalence according to school year, age range at the time of assessment, major family complaints, presence of inter-hemispheric asymmetry, arithmetic deficits, visuoconstruction impairments, and major signs and symptoms of non-verbal learning disorder. Out of 810 medical records analyzed, 14 were from individuals who met the diagnostic criteria for non-verbal learning disorder, including the presence of inter-hemispheric asymmetry; 8 of these 14 patients were male. The high prevalence of inter-hemispheric asymmetry suggests this parameter can be used to predict or support the diagnosis of non-verbal learning disorder.

  10. MODELO DE COMUNICACIÓN NO VERBAL EN DEPORTE Y BALLET NON-VERBAL COMMUNICATION MODELS IN SPORTS AND BALLET

    Directory of Open Access Journals (Sweden)

    Gloria Vallejo

    2010-12-01

Full Text Available This study analyzes the communication model generated among professional soccer trainers, artistic gymnastics trainers, and folkloric ballet instructors, on the basis of the dynamic body language typical of specialized communication among sportspeople and dancers, which includes a high percentage of non-verbal language. Non-verbal language was observed in both psychomotor and sociomotor practices in order to identify and characterize relations between different concepts and their corresponding gestural representation. This made it possible to generate a communication model that takes into account the non-verbal aspects of specialized communicative contexts. The results indicate that the non-verbal language of trainers and instructors occasionally replaces verbal language when the latter proves insufficient or inappropriate for describing a highly precise motor action, owing to distance or acoustic interference. Among the ballet instructors, a generalized way of directing rehearsals was observed, using rhythmic counts with the palms or feet. The paralinguistic components of the various speech acts also stand out, especially with regard to intonation, duration, and intensity.

  11. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization ("where") and sound object recognition ("what") have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts, and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated "what" and "where" task conditions; the use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the "where" and "what" tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the "where" and "what" tasks, with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude, as it displayed a wholly different topographic distribution from that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  12. Processing of spatial sounds in the impaired auditory system

    DEFF Research Database (Denmark)

    Arweiler, Iris

Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influence...... with an intelligibility-weighted “efficiency factor”, which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required...... implications for speech perception models and the development of compensation strategies in future generations of hearing instruments.

  13. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaurally and binaurally hearing-aid-processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (an audiometric test booth, a living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustic Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in the three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions, independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. The effect of stimulus type was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After the binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural; clarity was ranked most important and brightness least important. The key to demonstrating improved binaural hearing aid sound quality may be the use of a paired-comparison format.
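Paired-comparison data of this kind reduce naturally to a win count per processing condition. The sketch below shows that tally under assumed inputs; the trial tuples and condition names are fabricated for illustration, not taken from the study.

```python
# Illustrative sketch: tallying paired-comparison preference judgments
# into win counts per condition (e.g. monaural vs. binaural processing).
from collections import Counter

def preference_scores(judgments):
    """judgments: iterable of (condition_a, condition_b, winner) tuples."""
    wins = Counter()
    for a, b, winner in judgments:
        if winner not in (a, b):
            raise ValueError(f"winner {winner!r} not among the compared pair")
        wins[winner] += 1
    return wins

if __name__ == "__main__":
    # Fabricated trials for one listener, one sound quality dimension.
    trials = [
        ("monaural", "binaural", "binaural"),
        ("monaural", "binaural", "binaural"),
        ("monaural", "binaural", "monaural"),
    ]
    print(preference_scores(trials))
```

Summing such counts across listeners, dimensions, and environments gives the kind of aggregate preference pattern the study reports, without requiring listeners to rate conditions on an absolute scale.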

  14. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

Recognizing melody in music involves detecting both the pitch intervals and the silences between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns, compared with musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training thus facilitates detection of auditory patterns, conferring the ability to automatically recognize sequential sound patterns over longer time periods than non-musician counterparts.

  15. Convection measurement package for space processing sounding rocket flights. [low gravity manufacturing - fluid dynamics

    Science.gov (United States)

    Spradley, L. W.

    1975-01-01

The effects of nonconstant accelerations, rocket vibrations, and spin rates on heated fluids were studied. A system is discussed which can determine the influence of convective effects on fluid experiments, and the general suitability of sounding rockets for performing these experiments is treated. An analytical investigation of convection in an enclosure heated in low gravity is examined. The gravitational body force was taken as a time-varying function using anticipated sounding rocket accelerations, since accelerometer flight data were not available. A computer program was used to calculate the flow rates and heat transfer in fluids with geometries and boundary conditions typical of space processing configurations. Results of the analytical investigation identify the configurations, fluids, and boundary values that are most suitable for measuring the convective environment of sounding rockets. A short description of the fabricated fluid cells and the convection measurement package is given. Photographs are included.

  16. Condom use: exploring verbal and non-verbal communication strategies among Latino and African American men and women.

    Science.gov (United States)

    Zukoski, Ann P; Harvey, S Marie; Branch, Meredith

    2009-08-01

    A growing body of literature provides evidence of a link between communication with sexual partners and safer sexual practices, including condom use. More research is needed that explores the dynamics of condom communication including gender differences in initiation, and types of communication strategies. The overall objective of this study was to explore condom use and the dynamics surrounding condom communication in two distinct community-based samples of African American and Latino heterosexual couples at increased risk for HIV. Based on 122 in-depth interviews, 80% of women and 74% of men reported ever using a condom with their primary partner. Of those who reported ever using a condom with their current partner, the majority indicated that condom use was initiated jointly by men and women. In addition, about one-third of the participants reported that the female partner took the lead and let her male partner know she wanted to use a condom. A sixth of the sample reported that men initiated use. Although over half of the respondents used bilateral verbal strategies (reminding, asking and persuading) to initiate condom use, one-fourth used unilateral verbal strategies (commanding and threatening to withhold sex). A smaller number reported using non-verbal strategies involving condoms themselves (e.g. putting a condom on or getting condoms). The results suggest that interventions designed to improve condom use may need to include both members of a sexual dyad and focus on improving verbal and non-verbal communication skills of individuals and couples.

  17. IRI-2012 MODEL ADAPTABILITY ESTIMATION FOR AUTOMATED PROCESSING OF VERTICAL SOUNDING IONOGRAMS

    Directory of Open Access Journals (Sweden)

    V. D. Nikolaeva

    2014-01-01

    The paper examines the applicability of the IRI-2012 global empirical model to semiautomatic processing of vertical ionospheric sounding data. Main ionospheric characteristics derived from vertical sounding data at the IZMIRAN Voeikovo station in February 2013 were compared with IRI-2012 model calculations: 2688 model values and 1866 measured values of f0F2, f0E, hmF2 and hmE were processed. The critical frequencies of the E and F2 layers (f0E, f0F2) and the peak heights (hmF2, hmE) were determined from the ionograms. Vertical profiles of electron concentration were reconstructed with the IRI-2012 model from the measured frequencies and heights; model calculations were also made without inclusion of the real vertical sounding data. Monthly averages and standard deviations (σ) of f0F2, f0E, hmF2 and hmE for each hour of the day were calculated from both the vertical sounding and the model values. Conditions under which the model can be applied to automated ionogram processing for the subauroral ionosphere were determined. The initial IRI-2012 model can be applied to subauroral ionogram processing in the daytime under undisturbed conditions in the absence of sporadic ionization; in this case model calculations can be adjusted with near-real-time vertical sounding data. IRI-2012 model values for f0E (in the daytime) and hmF2 can be used to reduce computational costs in automatic parameter-search systems and for preliminary determination of the search range for the main parameters. The model can also be used for a more accurate approximation of real data series when real values are absent. Where sporadic ionization is present, high-latitude ionosphere models with a corpuscular ionization unit must be applied.
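
The monthly hour-by-hour statistics described above reduce to a simple aggregation: group each parameter (e.g. f0F2) by hour of day, then compute the mean and standard deviation separately for the model and the observed series. A minimal sketch of that aggregation (the data layout is an assumption for illustration, not the paper's actual format):

```python
import numpy as np
from collections import defaultdict

def hourly_stats(hours, values):
    """Mean and standard deviation of an ionospheric parameter per hour of day.

    hours  : iterable of hour-of-day (0-23) for each measurement
    values : matching parameter values (e.g. f0F2 in MHz)
    Returns {hour: (mean, sigma)}.
    """
    by_hour = defaultdict(list)
    for h, v in zip(hours, values):
        by_hour[h].append(v)
    return {h: (float(np.mean(vs)), float(np.std(vs)))
            for h, vs in by_hour.items()}
```

Running the same function over model output and over ionogram-derived values gives the two hour-by-hour profiles that are compared when judging model applicability.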

  18. Brain regions for sound processing and song release in a small grasshopper.

    Science.gov (United States)

    Bhavsar, Mit Balvantray; Stumpner, Andreas; Heinrich, Ralf

    2017-05-01

    We investigated brain regions - mostly neuropils - that process auditory information relevant for the initiation of response songs of female grasshoppers Chorthippus biguttulus during bidirectional intraspecific acoustic communication. Male-female acoustic duets in the species Ch. biguttulus require the perception of sounds, their recognition as a species- and gender-specific signal, and the initiation of commands that activate thoracic pattern-generating circuits to drive the sound-producing stridulatory movements of the hind legs. To study sensory-to-motor processing during acoustic communication we used multielectrodes that allowed simultaneous recordings of acoustically stimulated electrical activity from several ascending auditory interneurons or local brain neurons, and subsequent electrical stimulation of the recording site. Auditory activity was detected in the lateral protocerebrum (where most of the described ascending auditory interneurons terminate), in the superior medial protocerebrum, and in the central complex, which has previously been implicated in the control of sound production. Neural responses to behaviorally attractive sound stimuli showed no or only poor correlation with behavioral responses. Current injections into the lateral protocerebrum, the central complex and the deuto-/tritocerebrum (close to the cerebro-cervical fascicles), but not into the superior medial protocerebrum, elicited species-typical stridulation with a high success rate. Latencies and numbers of phrases produced by electrical stimulation differed between these brain regions. Our results indicate three brain regions (likely neuropils) in which auditory activity can be detected, two of which are potentially involved in song initiation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Enhanced Excitatory Connectivity and Disturbed Sound Processing in the Auditory Brainstem of Fragile X Mice.

    Science.gov (United States)

    Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula

    2017-08-02

    Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS (Fmr1 KO), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changes in individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice, as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds. This is one reason why individuals with FXS tend to avoid social situations.

  20. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes that determine what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes in response to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change detection and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, as also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by the MMN. Initial processes of auditory event formation are thus fully governed by the context within which the sounds occur: perception of the deviants as two separate sound events (the top-down effect) did not change their initial neural representation as one event (indexed by the MMN) in the absence of a corresponding change in the stimulus-driven sound organization.

  1. Stochastic Signal Processing for Sound Environment System with Decibel Evaluation and Energy Observation

    Directory of Open Access Journals (Sweden)

    Akira Ikuta

    2014-01-01

    In real sound environment systems, a specific signal shows various types of probability distribution, and the observation data are usually contaminated by external noise (e.g., background noise) of a non-Gaussian distribution type. Furthermore, various nonlinear correlations potentially exist in addition to the linear correlation between input and output time series. Consequently, the input-output relationship of the real phenomenon often cannot be represented by a simple model using only the linear correlation and lower-order statistics. In this study, complex sound environment systems that are difficult to analyze by the usual structural methods are considered. By introducing an estimation method for the system parameters that reflects correlation information in the conditional probability distribution under the existence of external noise, a method for predicting the output response probability of sound environment systems is theoretically proposed, in a form suited to the additive property of the energy variable and to evaluation on the decibel scale. The effectiveness of the proposed stochastic signal processing method is experimentally confirmed by applying it to data observed in sound environment systems.
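
The "additive property of the energy variable" invoked above rests on a standard acoustics identity: sound energies add linearly while decibel levels do not, so levels must be converted to energies, summed, and converted back. A minimal sketch of that conversion (a general identity, not code from the paper):

```python
import math

def db_to_energy(level_db):
    """Relative energy corresponding to a sound level in dB."""
    return 10.0 ** (level_db / 10.0)

def energy_to_db(energy):
    """Sound level in dB corresponding to a relative energy."""
    return 10.0 * math.log10(energy)

def combine_levels(levels_db):
    """Total level of several incoherent sources: sum energies, never decibels."""
    return energy_to_db(sum(db_to_energy(L) for L in levels_db))
```

For example, two incoherent 60 dB sources combine to about 63 dB (a doubling of energy adds 10·log₁₀2 ≈ 3 dB), not 120 dB.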

  2. The role of high-level processes for oscillatory phase entrainment to speech sound

    Directory of Open Access Journals (Sweden)

    Benedikt Zoefel

    2015-12-01

    Constantly bombarded with input, the brain needs to filter out relevant information while ignoring the irrelevant rest. A powerful tool may be represented by neural oscillations, which entrain their high-excitability phase to important input while their low-excitability phase attenuates irrelevant information. Indeed, the alignment between brain oscillations and speech improves intelligibility and helps dissociate speakers during a cocktail party. Although well investigated, the contribution of low- and high-level processes to phase entrainment to speech sound has only recently begun to be understood. Here, we review those findings and concentrate on three main results: (1) Phase entrainment to speech sound is modulated by attention or predictions, likely supported by top-down signals, indicating that higher-level processes are involved in the brain's adjustment to speech. (2) As phase entrainment to speech can be observed without systematic fluctuations in sound amplitude or spectral content, it does not merely reflect a passive steady-state ringing of the cochlea, but entails a higher-level process. (3) The role of intelligibility for phase entrainment is debated; recent results suggest that intelligibility modulates the behavioral consequences of entrainment rather than directly affecting the strength of entrainment in auditory regions. We conclude that phase entrainment to speech reflects a sophisticated mechanism: several high-level processes interact to optimally align neural oscillations with predicted events of high relevance, even when they are hidden in a continuous stream of background noise.

  3. Do children with autism have a theory of mind? A non-verbal test of autism vs. specific language impairment.

    Science.gov (United States)

    Colle, Livia; Baron-Cohen, Simon; Hill, Jacqueline

    2007-04-01

    Children with autism have delays in the development of theory of mind. However, the sub-group of children with autism who have little or no language have gone untested since false belief tests (FB) typically involve language. FB understanding has been reported to be intact in children with specific language impairment (SLI). This raises the possibility that a non-verbal FB test would distinguish children with autism vs. children with SLI. The present study tested two predictions: (1) FB understanding is to some extent independent of language ability; and (2) Children with autism with low language levels show specific impairment in theory of mind. Results confirmed both predictions. Results are discussed in terms of the role of language in the development of mindreading.

  4. Deficits in visual short-term memory binding in children at risk of non-verbal learning disabilities.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Pancera, Arianna; Galera, Cesar; Cornoldi, Cesare

    2015-01-01

    It has been hypothesized that children with learning disabilities encounter short-term memory (STM) problems especially when they must bind different types of information; however, this hypothesis has not been systematically tested. This study assessed visual STM for shapes and colors, and for the binding of shapes and colors, comparing a group of children (aged between 8 and 10 years) at risk of non-verbal learning disabilities (NLD) with a control group of children matched for general verbal abilities, age, gender, and socioeconomic level. Results revealed that the groups did not differ in retention of either shapes or colors, but children at risk of NLD were poorer than controls in memory for shape-color bindings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis.

    Science.gov (United States)

    Thompson, Ashley E; Voyer, Daniel

    2014-01-01

    The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d=0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
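
The pooled d = 0.19 above is the kind of estimate produced by inverse-variance weighting of per-sample effect sizes. As a simplified illustration of the idea (a plain DerSimonian-Laird random-effects pool, not the multilevel model actually used in the study):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size from per-study estimates and sampling variances."""
    w = [1.0 / v for v in variances]                               # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))    # heterogeneity statistic Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                   # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))                                # standard error of the pool
    return pooled, se
```

A multilevel model additionally accounts for the nesting of the 551 effect sizes within 215 samples; the inverse-variance weighting principle is the same.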

  6. Long-term exposure to noise impairs cortical sound processing and attention control.

    Science.gov (United States)

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  7. Corticofugal modulation of initial neural processing of sound information from the ipsilateral ear in the mouse.

    Directory of Open Access Journals (Sweden)

    Xiuping Liu

    2010-11-01

    Cortical neurons implement a highly frequency-specific modulation of subcortical nuclei, including the cochlear nucleus. Anatomical studies show that corticofugal fibers terminating in the auditory thalamus and midbrain are mostly ipsilateral. In contrast, corticofugal fibers terminating in the cochlear nucleus are bilateral, which fits the needs of binaural hearing and improves hearing quality. This led to our hypothesis that corticofugal modulation of the initial neural processing of sound information from the contralateral and ipsilateral ears could be equivalent or coordinated at this first level of sound processing. Using focal electrical stimulation of the auditory cortex together with single-unit recording, this study examined corticofugal modulation of the ipsilateral cochlear nucleus; the same methods and procedures as in our previous study of corticofugal modulation of the contralateral cochlear nucleus were employed for comparison. We found that focal electrical stimulation of cortical neurons induced substantial changes in the response magnitude, response latency and receptive field of ipsilateral cochlear nucleus neurons. Cortical stimulation facilitated the auditory response and shortened the response latency of physiologically matched neurons, whereas it inhibited the auditory response and lengthened the response latency of unmatched neurons. Finally, cortical stimulation shifted the best frequencies of cochlear neurons towards those of the stimulated cortical neurons. Our data suggest that cortical neurons enable a highly frequency-specific remodelling of sound information processing in the ipsilateral cochlear nucleus in the same manner as in the contralateral cochlear nucleus.

  8. Electrophysiological evidence for a defect in the processing of temporal sound patterns in multiple sclerosis.

    Science.gov (United States)

    Jones, S J; Sprague, L; Vaz Pato, M

    2002-11-01

    The aim was to assess the processing of spectrotemporal sound patterns in multiple sclerosis using auditory evoked potentials (AEPs) to complex harmonic tones. Twenty-two patients with definite multiple sclerosis but mild disability and no auditory complaints were compared with 15 normal controls. Short-latency AEPs were recorded using standard methods. Long-latency AEPs were recorded to synthesized musical instrument tones, at onset every two seconds, at abrupt frequency changes every two seconds, and at the end of a two-second period of 16/s frequency changes. The subjects were inattentive but awake, reading irrelevant material. Short-latency AEPs were abnormal in only 4 of 22 patients, whereas long-latency AEPs were abnormal to one or more stimuli in 17 of 22. No significant latency prolongation was seen in responses to onset and infrequent frequency changes (P1, N1, P2), but the potentials at the end of the 16/s frequency modulations, particularly the P2 peaking approximately 200 ms after the next expected change, were significantly delayed. The delayed responses appear to reflect a mild disorder in the processing of change in temporal sound patterns. The delay may be conceived of as extra time taken to compare the incoming sound with the contents of a temporally ordered sensory memory store (the long auditory store, or echoic memory), which generates a response when the next expected frequency change fails to occur. The defect cannot be ascribed to lesions of the afferent pathways and so may be due to disseminated brain lesions visible or invisible on magnetic resonance imaging.

  9. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    International Nuclear Information System (INIS)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M

    2006-01-01

    In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds has been developed. The module reveals important information on cardiovascular disorders and can assist general physicians in reaching more accurate and reliable diagnoses at early stages. It can mitigate the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope capable of transferring the signals to a nearby workstation over a wireless link. The signals are then segmented into individual cycles as well as individual components using spectral analysis of the heart sound, without any reference signal such as the ECG. Features are then extracted from the individual components using the spectrogram and are used as input to an MLP (Multi-Layer Perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the smoothed pseudo Wigner-Ville distribution. The module has been tested on real heart sounds from 40 patients and has proved to be efficient and robust in dealing with a large variety of pathological conditions.
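
The feature-extraction and detection steps described above (a spectrogram of each segmented component, with compact features fed to an MLP) can be sketched as follows. The frame size, band count, and network weights are placeholders for illustration; the abstract does not give the module's actual parameters:

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude spectrogram via short-time FFT with a Hann window."""
    win = np.hanning(frame)
    frames = [signal[i:i + frame] * win
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def band_energies(spec, n_bands=8):
    """Average energy in equal-width frequency bands: a compact feature vector."""
    bands = np.array_split(spec ** 2, n_bands, axis=1)
    return np.array([b.mean() for b in bands])

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer perceptron; sigmoid output read as murmur probability."""
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
```

In a real system the MLP weights would be trained on labeled recordings; here the pipeline only illustrates how a segmented heart-sound component becomes a fixed-length feature vector and a detection score.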

  11. Uranium-series radionuclides as tracers of geochemical processes in Long Island Sound

    International Nuclear Information System (INIS)

    Benninger, L.K.

    1976-05-01

    An estuary can be visualized as a membrane between land and the deep ocean, and an understanding of the estuarine processes that determine the permeability of this membrane to terrigenous materials is necessary for estimating the fluxes of these materials to the oceans. Natural radionuclides are useful probes of estuarine geochemistry because of the time-dependent relationships among them and because, as analogs of stable elements, they are much less subject to contamination during sampling and analysis. In this study the flux of heavy metals through Long Island Sound is considered in light of the material balance for excess ²¹⁰Pb, and analyses of concurrent seston and water samples from central Long Island Sound are used to probe the internal workings of the estuary.

  12. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model

    Science.gov (United States)

    Marsh, John E.; Campbell, Tom A.

    2016-01-01

    The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks: wave V decreases in amplitude as the visually presented memory load increases. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: the sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked.

  13. Adverse Life Events and Emotional and Behavioral Problems in Adolescence: The Role of Non-Verbal Cognitive Ability and Negative Cognitive Errors

    Science.gov (United States)

    Flouri, Eirini; Panourgia, Constantina

    2011-01-01

    The aim of this study was to test whether negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the moderator effect of non-verbal cognitive ability on the association between adverse life events (life stress) and emotional and behavioral problems in adolescence. The sample consisted of 430…

  14. Referential Interactions of Turkish-Learning Children with Their Caregivers about Non-Absent Objects: Integration of Non-Verbal Devices and Prior Discourse

    Science.gov (United States)

    Ates, Beyza S.; Küntay, Aylin C.

    2018-01-01

    This paper examines the way children younger than two use non-verbal devices (i.e., deictic gestures and communicative functional acts) and pay attention to discourse status (i.e., prior mention vs. newness) of referents in interactions with caregivers. Data based on semi-naturalistic interactions with caregivers of four children, at ages 1;00,…

  15. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication.

    Science.gov (United States)

    Symons, Ashley E; El-Deredy, Wael; Schwartze, Michael; Kotz, Sonja A

    2016-01-01

    frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations.

  16. Individual Differences in Verbal and Non-Verbal Affective Responses to Smells: Influence of Odor Label Across Cultures.

    Science.gov (United States)

    Ferdenzi, Camille; Joussain, Pauline; Digard, Bérengère; Luneau, Lucie; Djordjevic, Jelena; Bensafi, Moustafa

    2017-01-01

    Olfactory perception is highly variable from one person to another, as a function of individual and contextual factors. Here, we investigated the influence of 2 important factors of variation: culture and semantic information. More specifically, we tested whether culture-specific knowledge and the presence versus absence of odor names modulate odor perception, by measuring these effects in 2 populations differing in cultural background but not in language. Participants from France and Quebec, Canada, smelled 4 culture-specific and 2 non-specific odorants in 2 conditions: first without a label, then with a label. Their ratings of pleasantness, familiarity, edibility, and intensity were collected, as well as their psychophysiological and olfactomotor responses. The results revealed significant effects of culture and semantic information, both at the verbal and non-verbal level. They also provided evidence that the availability of semantic information reduced cultural differences. Semantic information had a unifying action on olfactory perception that overrode the influence of cultural background. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Emotion Recognition as a Real Strength in Williams Syndrome: Evidence From a Dynamic Non-verbal Task

    Directory of Open Access Journals (Sweden)

    Laure Ibernon

    2018-04-01

    The hypersocial profile characterizing individuals with Williams syndrome (WS), and particularly their attraction to human faces and their desire to form relationships with other people, could favor the development of their emotion recognition capacities. This study seeks to better understand the development of emotion recognition capacities in WS. The ability to recognize six emotions was assessed in 15 participants with WS. Their performance was compared to that of 15 participants with Down syndrome (DS) and 15 typically developing (TD) children of the same non-verbal developmental age, as assessed with Raven's Colored Progressive Matrices (RCPM; Raven et al., 1998). The analysis of the three groups' results revealed that the participants with WS performed better than the participants with DS and also better than the TD children. Individuals with WS performed at a similar level to TD participants in terms of recognizing different types of emotions. The study of developmental trajectories confirmed that the participants with WS presented the same developmental profile as the TD participants. These results seem to indicate that the recognition of emotional facial expressions constitutes a real strength in people with WS.

  18. Selection of words for implementation of the Picture Exchange Communication System - PECS in non-verbal autistic children.

    Science.gov (United States)

    Ferreira, Carine; Bevilacqua, Monica; Ishihara, Mariana; Fiori, Aline; Armonia, Aline; Perissinoto, Jacy; Tamanaha, Ana Carina

    2017-03-09

    It is known that some autistic individuals are considered non-verbal, since they are unable to use verbal language and barely use gestures to compensate for the absence of speech. These individuals' ability to communicate may therefore benefit from the use of the Picture Exchange Communication System - PECS. The objective of this study was to identify the most frequently used words in the implementation of PECS in autistic children and, on a complementary basis, to analyze the correlation between the frequency of these words and the rate of maladaptive behaviors. This is a cross-sectional study. The sample was composed of 31 autistic children, 25 boys and 6 girls, aged between 5 and 10 years old. The Vocabulary Selection Worksheet was used to identify the most frequently used words in the initial period of implementation of PECS, and the Autism Behavior Checklist (ABC) was applied to measure the rate of maladaptive behaviors. There was a significant prevalence of items in the category "food", followed by "activities" and "beverages". There was no correlation between the total number of items identified by the families and the rate of maladaptive behaviors. The categories of words most mentioned by the families could be identified, and it was confirmed that the level of maladaptive behaviors did not interfere directly in the preparation of the vocabulary selection worksheet for the children studied.

  19. The influence of non-verbal educational and therapeutic Practices in autism spectrum disorder: the possibilities for physical education professionals

    Directory of Open Access Journals (Sweden)

    Adryelle Fabiane Campelo de Lima

    2017-09-01

    Full Text Available Individuals with autism spectrum disorder (ASD) have symptoms that begin in childhood and affect their ability to function in everyday life. Several types of practices exist to reduce and control the symptoms of ASD. Thus, this study aims to analyze the contributions of the main pedagogical and therapeutic practices of non-verbal communication to the motivation, emotional stability, communication, and socialization of individuals with autism spectrum disorders, which may support the intervention of physical education professionals. The study was conducted as a systematic review of electronic databases. Initially, 390 documents were identified. After reading and analyzing the titles of the documents, 109 were selected. After reading the abstracts, 53 were considered eligible and, finally, 18 that fully satisfied our inclusion criteria were included. The results showed that the intervention programs are distinct and that the majority involve music therapy. This systematic review showed that there is direct intervention by physical education professionals only in psychomotricity.

  20. How physician electronic health record screen sharing affects patient and doctor non-verbal communication in primary care.

    Science.gov (United States)

    Asan, Onur; Young, Henry N; Chewning, Betty; Montague, Enid

    2015-03-01

    Use of electronic health records (EHRs) in primary-care exam rooms changes the dynamics of patient-physician interaction. This study examines and compares doctor-patient non-verbal communication (eye-gaze patterns) during primary care encounters for three different screen/information sharing groups: (1) active information sharing, (2) passive information sharing, and (3) technology withdrawal. Researchers video recorded 100 primary-care visits and coded the direction and duration of doctor and patient gaze. Descriptive statistics compared the length of gaze patterns as a percentage of visit length. Lag sequential analysis determined whether physician eye gaze influenced patient eye gaze, and vice versa, and examined variations across groups. Significant differences were found in duration of gaze across groups. Lag sequential analysis found significant associations between several gaze patterns. Some, such as DGP-PGD ("doctor gaze patient" followed by "patient gaze doctor"), were significant for all groups. Others, such as DGT-PGU ("doctor gaze technology" followed by "patient gaze unknown"), were unique to one group. Some technology use styles (active information sharing) seem to create more patient engagement, while others (passive information sharing) lead to patient disengagement. Doctors can engage patients in communication by using EHRs in the visits. EHR training and design should facilitate this.
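    The lag sequential analysis described above counts how often one coded behavior immediately follows another. A minimal sketch, assuming lag-1 transition counting; the gaze codes DGP, PGD, DGT, PGU follow the abstract's labels, but the example sequence is invented:

```python
from collections import Counter

def lag1_counts(sequence):
    """Count lag-1 transitions (code at time t followed by code at t+1)."""
    return Counter(zip(sequence, sequence[1:]))

# Invented coding of one visit: doctor/patient gaze events in temporal order.
seq = ["DGP", "PGD", "DGT", "PGU", "DGP", "PGD", "DGP", "DGT"]
counts = lag1_counts(seq)
print(counts[("DGP", "PGD")])  # 2: "doctor gaze patient" -> "patient gaze doctor"
```

    The study's significance testing would then compare such observed transition counts against the frequencies expected by chance.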

  1. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures.

    Science.gov (United States)

    Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A

    2008-12-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension during acute left versus right hemisphere deactivation in Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

  2. Transport processes and sound velocity in vibrationally non-equilibrium gas of anharmonic oscillators

    Science.gov (United States)

    Rydalevskaya, Maria A.; Voroshilova, Yulia N.

    2018-05-01

    Vibrationally non-equilibrium flows of chemically homogeneous diatomic gases are considered under the conditions that the distribution of the molecules over vibrational levels differs significantly from the Boltzmann distribution. In such flows, molecular collisions can be divided into two groups: the first group corresponds to "rapid" microscopic processes, whereas the second corresponds to "slow" microscopic processes (their characteristic times are comparable to or larger than the timescale on which the gasdynamic parameters vary). The collisions of the first group form quasi-stationary vibrationally non-equilibrium distribution functions. Model kinetic equations are used to study the transport processes under these conditions. In these equations, the BGK-type approximation is used to model only the collision operators of the first group. This simplifies the derivation of the transport fluxes and the calculation of the kinetic coefficients. Special attention is given to the connection between the formulae for the bulk viscosity coefficient and the square of the sound velocity.
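    The stated connection between the bulk viscosity coefficient and the square of the sound velocity can be illustrated by a classical low-frequency result of relaxation gas dynamics (a textbook relation, not the authors' specific model equations):

```latex
\zeta = \rho\,\tau\left(a_{f}^{2} - a_{eq}^{2}\right)
```

    where $\rho$ is the gas density, $\tau$ the vibrational relaxation time, $a_{f}$ the frozen sound speed (vibrational energy held fixed), and $a_{eq}$ the equilibrium sound speed; the bulk viscosity vanishes when the two sound speeds coincide.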

  3. Early enhanced processing and delayed habituation to deviance sounds in autism spectrum disorder.

    Science.gov (United States)

    Hudac, Caitlin M; DesChamps, Trent D; Arnett, Anne B; Cairney, Brianna E; Ma, Ruqian; Webb, Sara Jane; Bernier, Raphael A

    2018-06-01

    Children with autism spectrum disorder (ASD) exhibit difficulties processing and encoding sensory information in daily life. Cognitive response to environmental change in control individuals is naturally dynamic, meaning it habituates or reduces over time as one becomes accustomed to the deviance. The origin of atypical response to deviance in ASD may relate to differences in this dynamic habituation. The current study of 133 children and young adults with and without ASD examined classic electrophysiological responses (MMN and P3a), as well as temporal patterns of habituation (i.e., N1 and P3a change over time) in response to a passive auditory oddball task. Individuals with ASD showed an overall heightened sensitivity to change as exhibited by greater P3a amplitude to novel sounds. Moreover, youth with ASD showed dynamic ERP differences, including slower attenuation of the N1 response to infrequent tones and the P3a response to novel sounds. Dynamic ERP responses were related to parent ratings of auditory sensory-seeking behaviors, but not general cognition. As the first large-scale study to characterize temporal dynamics of auditory ERPs in ASD, our results provide compelling evidence that heightened response to auditory deviance in ASD is largely driven by early sensitivity and prolonged processing of auditory deviance.

  4. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals), and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...
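    The size-to-wavelength relationship can be sketched numerically. The ka >= 1 criterion below is a common acoustics rule of thumb for efficient radiation, assumed here rather than quoted from the chapter:

```python
import math

def wavelength(freq_hz, c=343.0):
    """Wavelength lambda = c / f; c = 343 m/s is the speed of sound in air at ~20 C."""
    return c / freq_hz

def radiates_efficiently(freq_hz, radius_m, c=343.0):
    """Rule of thumb: an emitter of radius a radiates propagating pressure
    waves efficiently when k * a >= 1, with wavenumber k = 2 * pi / lambda."""
    k = 2.0 * math.pi / wavelength(freq_hz, c)
    return k * radius_m >= 1.0

print(wavelength(343.0))                   # 1.0 (m)
print(radiates_efficiently(100.0, 0.01))   # False: 1-cm emitter at 100 Hz
print(radiates_efficiently(10000.0, 0.1))  # True: 10-cm emitter at 10 kHz
```

    This is why small animals must use high frequencies (short wavelengths) to radiate sound efficiently.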

  5. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different-order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution ... During the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.
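    The auralization step, convolving a dry (anechoic) source signal with each channel of the multichannel room impulse response, can be sketched as follows; this is a generic convolution sketch, not ODEON's or the authors' actual interface:

```python
import numpy as np

def auralize(dry_signal, mrir):
    """Convolve a dry signal (n_samples,) with an mRIR (n_channels, rir_len);
    returns loudspeaker feeds of shape (n_channels, n_samples + rir_len - 1)."""
    return np.stack([np.convolve(dry_signal, rir) for rir in mrir])

# A unit impulse through a toy 2-channel RIR simply reproduces each RIR:
dry = np.array([1.0, 0.0, 0.0])
mrir = np.array([[0.5, 0.25], [0.1, 0.05]])
out = auralize(dry, mrir)
print(out.shape)  # (2, 4)
```

    In practice the per-channel convolutions would be done with FFT-based (partitioned) convolution for real-time playback.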

  6. Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing.

    Science.gov (United States)

    Lyu, Bingjiang; Ge, Jianqiao; Niu, Zhendong; Tan, Li Hai; Gao, Jia-Hong

    2016-10-19

    Spoken language comprehension relies not only on the identification of individual words, but also on the expectations arising from contextual information. A distributed frontotemporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially in terms of individual words, remains unclear. Using fMRI, we addressed this question in the framework of the dual-stream model by scanning native speakers of Mandarin Chinese, a language highly dependent on context. We found that, within the ventral pathway, the violated expectations elicited stronger activations in the left anterior superior temporal gyrus and the ventral inferior frontal gyrus (IFG) for the phonological-semantic prediction of spoken words. Functional connectivity analysis showed that expectations were mediated by both top-down modulation from the left ventral IFG to the anterior temporal regions and enhanced cross-stream integration through strengthened connections between different subregions of the left IFG. By further investigating the dynamic causality within the dual-stream model, we elucidated how the human brain accomplishes sound-to-meaning mapping for words in a predictive manner. In daily communication via spoken language, one of the core processes is understanding the words being used. Effortless and efficient information exchange via speech relies not only on the identification of individual spoken words, but also on the contextual information giving rise to expected meanings. Despite the accumulating evidence for the bottom-up perception of auditory input, it is still not fully understood how the top-down modulation is achieved in the extensive frontotemporal cortical network. Here, we provide a comprehensive description of the neural substrates underlying sound-to-meaning mapping and demonstrate how the dual-stream model functions in the modulation of

  7. Design, development and test of the gearbox condition monitoring system using sound signal processing

    Directory of Open Access Journals (Sweden)

    M Zamani

    2016-09-01

    from a power source to a consumer, meeting the torque and the rotating speed needed by the consumer. In fact, a gearbox is an interface between the power source and the power consumer that provides a flexible connection between them. A gearbox is needed as a harmonizing interface because the torque and rotating speed of the power source do not match those required by the consumer. The necessary calculations were carried out to obtain the technical characteristics of the gearwheels, bearings, shaft dimensions, and other gearbox accessories. This gearbox is of the simple-gearwheel type, with its input and output shafts parallel to each other. The main components of the gearbox are: 1. housing, 2. shaft, 3. gearwheel, 4. key, 5. bearing, 6. cover. All design parameters were calculated and considered in the design of every gearbox component. Electromotor rotation calibration: for this purpose, a Lutron light/contact tachometer was used in contact mode. Acoustic module for the electromotor: a module was constructed to prevent interaction between the sound waves produced by the running electromotor and those produced by the gearbox. Three layers of sound absorbent were used for the module insulation: common felt 1 mm thick, polyethylene foam 15 mm thick, and egg-crate foam 35 mm thick. The module body was made of MDF. Based on field measurements, the acoustic module reduced the electromotor sound level by 20 dB. The malfunctions investigated in this research concern gearwheels with one broken tooth, one worn tooth, and one broken tooth together with one worn tooth. Collection and storage of acoustic data: an HT-157 sound level meter made in Italy was used to acquire the acoustic data, and a Lenovo G550 laptop was used for data storage and processing. Cool Edit Pro 2.0 software was used for data processing. Data were stored in PCM

  8. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
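    The source-sparseness assumption underlying the method means that, in the time-frequency domain, each bin is dominated by at most one source, so spatial parameters can be estimated bin by bin. A toy illustration of this disjointness for two mixed tones (not the paper's tetrahedral-array analysis):

```python
import numpy as np

fs, n = 8000, 2000
t = np.arange(n) / fs
s1 = np.sin(2 * np.pi * 440.0 * t)   # source 1: 440 Hz (110 full cycles)
s2 = np.sin(2 * np.pi * 1000.0 * t)  # source 2: 1 kHz (250 full cycles)
mix = s1 + s2                        # single-channel mixture

spec = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(n, 1.0 / fs)
# Sparseness/disjointness: each frequency bin belongs almost entirely to one
# source, so the two sources appear as separate spectral peaks.
peak_bins = np.argsort(spec)[-2:]
print(sorted(float(f) for f in freqs[peak_bins]))  # [440.0, 1000.0]
```

    In the paper's framework, a direction of arrival would be estimated for each such bin from the tetrahedral array and used to select the binaural rendering.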

  9. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm

    OpenAIRE

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and...

  10. Residual Neural Processing of Musical Sound Features in Adult Cochlear Implant Users

    Science.gov (United States)

    Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias

    2014-01-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants’ attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients’ age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights: -Automatic brain responses to musical feature changes

  11. Alternative Silver Production by Environmental Sound Processing of a Sulfo Salt Silver Mineral Found in Bolivia

    Directory of Open Access Journals (Sweden)

    Alexander Birich

    2018-02-01

    Full Text Available Very often, the production of silver causes devastating environmental issues because of the use of toxic reagents like cyanide and mercury. Due to the severe environmental damage caused by humans in recent decades, social awareness of sustainable production processes is on the rise. Terms like “sustainable” and “green” in product descriptions are becoming more and more popular, and producers are forced to satisfy the rising environmental awareness of their customers. Within this work, an alternative, environmentally sound silver recovery process was developed for a vein-type silver ore from Mina Porka, Bolivia. A foregoing characterization of the input material revealed its mineral composition. In the subsequent mineral processing, around 92.9% of the silver was concentrated by separating 59.5 wt. % of non-silver minerals. Nitric acid leaching of the generated concentrate enabled a silver recovery of up to 98%. The dissolved silver was then separated via copper cementation to generate a metallic silver product of >99% purity. Summarizing all process steps, a silver yield of 87% was achieved at lab scale. A final upscaling trial was conducted to prove the process’ robustness. Within this trial, almost 4 kg of metallic silver with a purity higher than 99.5 wt. % was produced.

  12. Spectro-temporal analysis of complex tones: two cortical processes dependent on retention of sounds in the long auditory store.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L

    2000-09-01

    To examine whether two cortical processes concerned with the spectro-temporal analysis of complex tones - a 'C-process' generating CN1 and CP2 potentials at ca. 100 and 180 ms after a sudden change of pitch or timbre, and an 'M-process' generating MN1 and MP2 potentials of similar latency at the sudden cessation of repeated changes - are dependent on accumulation of a sound image in the long auditory store, the durations of steady (440 Hz) and rapidly oscillating (440-494 Hz, 16 changes/s) pitch of a synthesized 'clarinet' tone were reciprocally varied between 0.5 and 4.5 s within a duty cycle of 5 s. Potentials were recorded at the beginning and end of the period of oscillation in 10 non-attending normal subjects. The CN1 at the beginning of pitch oscillation and the MN1 at the end were both strongly influenced by the duration of the immediately preceding stimulus pattern, mean amplitudes being 3-4 times larger after 4.5 s as compared with 0.5 s. The processes responsible for both CN1 and MN1 are thus influenced by the duration of the preceding sound pattern over a period comparable to that of the 'echoic memory' or long auditory store. The store therefore appears to occupy a key position in spectro-temporal sound analysis. The C-process is concerned with the spectral structure of complex sounds, and may therefore reflect the 'grouping' of frequency components underlying auditory stream segregation. The M-process (mismatch negativity) is concerned with the temporal sound structure, and may play an important role in the extraction of information from sequential sounds.

  13. What a Smile Means: Contextual Beliefs and Facial Emotion Expressions in a Non-verbal Zero-Sum Game.

    Science.gov (United States)

    Pádua Júnior, Fábio P; Prado, Paulo H M; Roeder, Scott S; Andrade, Eduardo B

    2016-01-01

    Research into the authenticity of facial emotion expressions often focuses on the physical properties of the face while paying little attention to the role of beliefs in emotion perception. Further, the literature most often investigates how people express a pre-determined emotion rather than what facial emotion expressions people strategically choose to express. To fill these gaps, this paper proposes a non-verbal zero-sum game - the Face X Game - to assess the role of contextual beliefs and strategic displays of facial emotion expression in interpersonal interactions. This new research paradigm was used in a series of three studies, where two participants are asked to play the role of the sender (individual expressing emotional information on his/her face) or the observer (individual interpreting the meaning of that expression). Study 1 examines the outcome of the game with reference to the sex of the pair, where senders won more frequently when the pair was comprised of at least one female. Study 2 examines the strategic display of facial emotion expressions. The outcome of the game was again contingent upon the sex of the pair. Among female pairs, senders won the game more frequently, replicating the pattern of results from study 1. We also demonstrate that senders who strategically express an emotion incongruent with the valence of the event (e.g., smile after seeing a negative event) are able to mislead observers, who tend to hold a congruent belief about the meaning of the emotion expression. If sending an incongruent signal helps to explain why female senders win more frequently, it logically follows that female observers were more prone to hold a congruent, and therefore inaccurate, belief. This prospect implies that while female senders are willing and/or capable of displaying fake smiles, paired-female observers are not taking this into account. Study 3 investigates the role of contextual factors by manipulating female observers' beliefs. 
When prompted

  14. Signal Processing Implementation and Comparison of Automotive Spatial Sound Rendering Strategies

    Directory of Open Access Journals (Sweden)

    Bai, Mingsian R.

    2009-01-01

    Full Text Available Design and implementation strategies for spatial sound rendering are investigated in this paper for automotive scenarios. Six design methods are implemented for various rendering modes with different numbers of passengers. Specifically, downmixing algorithms aimed at balancing the front and back reproductions are developed for the 5.1-channel input. The other five algorithms, based on inverse filtering, are implemented in two approaches. The first approach utilizes binaural Head-Related Transfer Functions (HRTFs) measured in the car interior, whereas the second approach, named the point-receiver model, targets a point receiver positioned at the center of the passenger's head. The proposed processing algorithms were compared via objective and subjective experiments under various listening conditions. Test data were processed by the multivariate analysis of variance (MANOVA) method and Fisher's least significant difference (LSD) method as a post hoc test to justify the statistical significance of the experimental data. The results indicate that the inverse filtering algorithms are preferred for the single-passenger mode. For the multipassenger mode, however, the downmixing algorithms generally outperformed the other processing techniques.
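    Downmixing algorithms of the kind compared here build on the standard ITU-R BS.775-style 5.1-to-stereo downmix sketched below; the 0.7071 (-3 dB) coefficient is the common default, and the paper's actual front/back balancing weights are not given in the abstract:

```python
def downmix_51_to_stereo(fl, fr, c, lfe, sl, sr, k=0.7071):
    """Per-sample 5.1 -> stereo downmix. The LFE channel is commonly discarded."""
    left = fl + k * c + k * sl
    right = fr + k * c + k * sr
    return left, right

# One sample with signal in the front-left, center, and surround-left channels:
l, r = downmix_51_to_stereo(1.0, 0.0, 1.0, 0.0, 1.0, 0.0)
print(round(l, 4), round(r, 4))  # 2.4142 0.7071
```

    A front/back-balancing variant would simply use different weights on the surround terms.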

  15. Neural Correlates of Phonological Processing in Speech Sound Disorder: A Functional Magnetic Resonance Imaging Study

    Science.gov (United States)

    Tkach, Jean A.; Chen, Xu; Freebairn, Lisa A.; Schmithorst, Vincent J.; Holland, Scott K.; Lewis, Barbara A.

    2011-01-01

    Speech sound disorders (SSD) are the largest group of communication disorders observed in children. One explanation for these disorders is that children with SSD fail to form stable phonological representations when acquiring the speech sound system of their language due to poor phonological memory (PM). The goal of this study was to examine PM in…

  16. Not all sounds sound the same: Parkinson's disease affects differently emotion processing in music and in speech prosody.

    Science.gov (United States)

    Lima, César F; Garrett, Carolina; Castro, São Luís

    2013-01-01

    Does emotion processing in music and speech prosody recruit common neurocognitive mechanisms? To examine this question, we implemented a cross-domain comparative design in Parkinson's disease (PD). Twenty-four patients and 25 controls performed emotion recognition tasks for music and spoken sentences. In music, patients had impaired recognition of happiness and peacefulness, and intact recognition of sadness and fear; this pattern was independent of general cognitive and perceptual abilities. In speech, patients had a small global impairment, which was significantly mediated by executive dysfunction. Hence, PD affected differently musical and prosodic emotions. This dissociation indicates that the mechanisms underlying the two domains are partly independent.

  17. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
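    The core idea of frequency difference MFP is that the quadratic product p(f2) p*(f1) of field samples at two frequencies carries phase at the difference frequency f2 - f1, where model mismatch is less damaging; Bartlett MFP is then run with replicas computed at that difference frequency. A free-space toy sketch (not the KAM11 shallow-water propagation model; geometry and frequencies are illustrative):

```python
import numpy as np

c = 1500.0                        # sound speed, m/s
z = np.linspace(0.0, 100.0, 16)   # vertical array element depths, m
src = (50.0, 3000.0)              # true source (depth, range), m

def field(freq, source):
    """Free-space field exp(i*k*r)/r from the source to each array element."""
    r = np.hypot(z - source[0], source[1])
    k = 2.0 * np.pi * freq / c
    return np.exp(1j * k * r) / r

f1, f2 = 11200.0, 16200.0
d = field(f2, src) * np.conj(field(f1, src))   # difference-frequency data
d /= np.linalg.norm(d)

# Bartlett ambiguity over candidate ranges, with replicas computed at the
# 5 kHz difference frequency instead of the original signal frequencies:
ranges = np.arange(2000.0, 4001.0, 50.0)
amb = []
for rng in ranges:
    w = field(f2 - f1, (50.0, rng))
    w /= np.linalg.norm(w)
    amb.append(abs(np.vdot(w, d)) ** 2)
print(float(ranges[int(np.argmax(amb))]))  # 3000.0: peak at the true range
```

    In the free-space case the difference-frequency phase of the data matches the 5 kHz replica exactly at the true source position, which is the property the paper exploits in the mismatched waveguide.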

  18. Long-Term Impairment of Sound Processing in the Auditory Midbrain by Daily Short-Term Exposure to Moderate Noise

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2017-01-01

    Full Text Available Most people are exposed daily to environmental noise at moderate levels and for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/day for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for the rate-level function. However, these changes were greater in neurons with a best frequency within the noise exposure frequency range than in those outside it. These sound processing properties also remained abnormal after a 12-week recovery period in a quiet laboratory environment following completion of the noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.

  19. Pre-attentive processing of spectrally complex sounds with asynchronous onsets: an event-related potential study with human subjects.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Näätänen, R

    1997-05-23

    Neuronal mechanisms involved in the processing of complex sounds with asynchronous onsets were studied in reading subjects. The sound onset asynchrony (SOA) between the leading partial and the remaining complex tone was varied between 0 and 360 ms. Infrequently occurring deviant sounds (in which one out of 10 harmonics differed in pitch from the frequently occurring standard sound) elicited the mismatch negativity (MMN), a change-specific cortical event-related potential (ERP) component. This indicates that the pitch of the standard stimuli had been pre-attentively encoded by sensory-memory traces. Moreover, when the complex-tone onset fell within the temporal integration window initiated by the leading-partial onset, the deviants also elicited the N2b component, indicating that an involuntary attention switch towards the sound change had occurred. In summary, the present results support the existence of a pre-perceptual integration mechanism of 100-200 ms duration and emphasize its importance in switching attention towards stimulus changes.
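The MMN logic described here, a deviant-minus-standard difference wave, can be sketched with synthetic epochs. Waveform shapes, latencies, trial counts, and noise levels below are invented for illustration; real MMN analysis works on band-pass-filtered, artifact-rejected EEG epochs.

```python
import numpy as np

# Minimal sketch of how an MMN-like difference wave is derived: average
# the event-related responses to standards and to deviants separately,
# then subtract. Epochs here are synthetic 100-sample windows.

rng = np.random.default_rng(1)
t = np.arange(100)                          # samples (e.g., ms post-onset)
erp = np.exp(-0.5 * ((t - 50) / 10.0) ** 2) # template response, peak ~50 ms

# Standards: template plus noise. Deviants: an extra negativity ~60 ms,
# mimicking the change-specific MMN deflection (shape is an assumption).
standards = erp + 0.2 * rng.standard_normal((200, 100))
deviants = (erp - 0.8 * np.exp(-0.5 * ((t - 60) / 12.0) ** 2)
            + 0.2 * rng.standard_normal((40, 100)))

mmn = deviants.mean(axis=0) - standards.mean(axis=0)  # difference wave
peak_latency = int(t[np.argmax(np.abs(mmn))])         # latency of MMN peak
```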

  20. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Full Text Available Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, of which two, covering non-overlapping areas of the auditory cortex, were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or the all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
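A leave-one-out intersubject correlation of the kind behind an ISC map can be sketched as follows. The data are synthetic and the voxel count is tiny, purely for illustration; the shared-signal strengths and subject/timepoint counts are assumptions.

```python
import numpy as np

# Sketch of an intersubject-correlation (ISC) map: for each voxel,
# correlate one subject's time course with the average time course of all
# other subjects, then average these leave-one-out correlations.

rng = np.random.default_rng(2)
n_subj, n_time, n_vox = 13, 300, 5
shared = rng.standard_normal((n_time, n_vox))    # stimulus-driven signal
# First 3 voxels strongly stimulus-driven, last 2 mostly noise (assumed).
drive = np.array([1.0, 1.0, 1.0, 0.05, 0.05])
data = drive * shared[None] + rng.standard_normal((n_subj, n_time, n_vox))

def corr(a, b):
    """Pearson correlation of two 1-D time courses."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

isc = np.zeros(n_vox)
for v in range(n_vox):
    rs = []
    for s in range(n_subj):
        others = np.delete(np.arange(n_subj), s)
        rs.append(corr(data[s, :, v], data[others, :, v].mean(axis=0)))
    isc[v] = np.mean(rs)                 # high only where activity is shared
```

Voxels whose activity is driven by the common stimulus get high ISC; voxels dominated by subject-specific (intrinsic) fluctuations do not, which is how the extrinsic network is isolated.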

  1. Process parameters optimization of needle-punched nonwovens for sound absorption application

    CSIR Research Space (South Africa)

    Mvubu, M

    2015-12-01

    Full Text Available , and stroke frequency on sound absorption properties were studied. These parameters were varied at three levels during experimental trials. From multiple regression analysis, it was observed that the depth of needle penetration alone was the most dominant...

  2. Vibrotactile Identification of Signal-Processed Sounds from Environmental Events Presented by a Portable Vibrator: A Laboratory Study

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Full Text Available Objectives: To evaluate different signal-processing algorithms for tactile identification of environmental sounds in a monitoring aid for the deafblind. Participants were two men and three women, sensorineurally deaf or profoundly hearing impaired and experienced in vibratory experiments, aged 22-36 years. Methods: A closed set of 45 representative environmental sounds was processed using two transposing (TRHA, TR1/3) and three modulating (AM, AMFM, AMMC) algorithms and presented as tactile stimuli using a portable vibrator in three experiments. The algorithms TRHA, TR1/3, AMFM and AMMC each had two variants (with and without adaptation to vibratory thresholds). In Exp. 1, the sounds were preprocessed and fed directly to the vibrator. In Exp. 2 and 3, the sounds were presented in an acoustic test room, without or with background noise (SNR = +5 dB), and processed in real time. Results: In Exp. 1, algorithms AMFM and AMFM(A) consistently had the lowest identification scores and were thus excluded from Exp. 2 and 3. TRHA, AM, AMMC, and AMMC(A) showed comparable identification scores (30%-42%), and the addition of noise did not deteriorate performance. Discussion: Algorithms TRHA, AM, AMMC, and AMMC(A) performed well in all three experiments and were robust in noise; they can therefore be used in further testing in real environments.
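One family of modulating strategies compared here, envelope extraction followed by amplitude modulation of a low-frequency vibratory carrier, can be sketched as below. The carrier frequency, smoothing constant, and test signal are assumptions for illustration, not the study's actual algorithm parameters.

```python
import numpy as np

# Sketch of an AM-style vibrotactile algorithm: extract the audio
# envelope and use it to modulate a low-frequency carrier that a portable
# vibrator can reproduce (skin sensitivity peaks near 200-300 Hz).

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Hypothetical environmental sound: a 1 kHz tone pulsing on/off at 4 Hz.
audio = np.sin(2 * np.pi * 1000 * t) * (np.sin(2 * np.pi * 4 * t) > 0)

# Envelope via rectification and one-pole low-pass smoothing.
env = np.abs(audio)
smoothed = np.empty_like(env)
alpha = 0.005                     # smoothing coefficient (assumption)
acc = 0.0
for i, e in enumerate(env):
    acc += alpha * (e - acc)
    smoothed[i] = acc

carrier = np.sin(2 * np.pi * 250 * t)   # 250 Hz carrier (assumption)
vib = smoothed * carrier                # drive signal for the vibrator
```

The temporal on/off pattern of the sound survives in `vib` even though the 1 kHz content itself is far above what the skin can resolve, which is the point of envelope-based coding.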

  3. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes

    OpenAIRE

    Maggu, Akshay R.; Liu, Fang; Antoniou, Mark; Wong, Patrick C. M.

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, ...

  4. The influence of (central) auditory processing disorder on the severity of speech-sound disorders in children.

    Science.gov (United States)

    Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede

    2016-02-01

    To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders, aged 7 years to 10 years and 11 months, who were divided into two groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. Greater severity of the speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of a (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.

  5. Bi-directional effects of depressed mood in the postnatal period on mother-infant non-verbal engagement with picture books.

    Science.gov (United States)

    Reissland, Nadja; Burt, Mike

    2010-12-01

    The purpose of the present study is to examine the bi-directional effects of maternal depressed mood in the postnatal period on maternal and infant non-verbal behaviors while looking at a picture book. Although it is acknowledged that non-verbal engagement with picture books in infancy plays an important role, the effect of maternal depressed mood on stimulating infants' interest in books is not known. Sixty-one mothers and their infants, 38 boys and 23 girls, were observed twice, approximately 3 months apart (first observation: mean age 6.8 months, range 3-11 months, 32 mothers with depressed mood; second observation: mean age 10.2 months, range 6-16 months, 17 mothers with depressed mood). There was a significant effect of depressed mood on negative behaviors: infants of mothers with depressed mood tended to push away and close books more often. Negative behaviors (pushing the book away or closing it on the part of the infant, and withholding the book or restraining the infant on the part of the mother) that were expressed during the first visit were more likely to be expressed during the second visit. Levels of negative behaviors by mother and infant were strongly related during each visit. Additionally, the pattern between visits suggests that maternal negative behavior may be the cause of infant negative behavior. These results are discussed in terms of the effects of maternal depressed mood on the bi-directional nature of non-verbal engagement between mother and child. Crown Copyright © 2010. Published by Elsevier Inc. All rights reserved.

  6. Physical processes in a coupled bay-estuary coastal system: Whitsand Bay and Plymouth Sound

    Science.gov (United States)

    Uncles, R. J.; Stephens, J. A.; Harris, C.

    2015-09-01

    Whitsand Bay and Plymouth Sound are located in the southwest of England. The Bay and Sound are separated by the ∼2-3 km-wide Rame Peninsula and connected by ∼10-20 m-deep English Channel waters. Results are presented from measurements of waves and currents, drogue tracking, surveys of salinity, temperature and turbidity during stratified and unstratified conditions, and bed sediment surveys. 2D and 3D hydrodynamic models are used to explore the generation of tidally- and wind-driven residual currents, flow separation and the formation of the Rame eddy, and the coupling between the Bay and the Sound. Tidal currents flow around the Rame Peninsula from the Sound to the Bay from approximately 3 h before to 2 h after low water and form a transport path between them that conveys lower salinity, higher turbidity waters from the Sound to the Bay. These waters are then transported into the Bay as part of the Bay-mouth limb of the Rame eddy and subsequently conveyed to the near-shore, east-going limb and re-circulated back towards Rame Head. The Simpson-Hunter stratification parameter indicates that much of the Sound and Bay is likely to stratify thermally during summer months. Temperature stratification in both is pronounced during summer and is largely determined by coastal, deeper-water stratification offshore. The small tidal stresses in the Bay are unable to move bed sediment of the observed sizes. However, the Bay and Sound are subjected to large waves that are capable of driving a substantial bed-load sediment transport. Measurements show relatively low levels of turbidity, but these respond rapidly to, and correlate strongly with, wave height.
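The Simpson-Hunter stratification parameter mentioned above is commonly written as S = log10(h/u^3), with water depth h (m) and a characteristic tidal current speed u (m/s); large values (weak currents, deep water) indicate that tidal stirring is too weak to prevent seasonal thermal stratification. A minimal sketch with illustrative values, not measurements from the Bay or Sound:

```python
import math

# Simpson-Hunter stratification parameter: S = log10(h / u^3).
# Large S -> tidal mixing weak relative to depth -> likely to stratify;
# small S -> strong tidal stirring -> likely to stay vertically mixed.

def simpson_hunter(h_m, u_ms):
    """Stratification parameter for depth h_m (m) and tidal speed u_ms (m/s)."""
    return math.log10(h_m / u_ms ** 3)

s_weak_tides = simpson_hunter(20.0, 0.2)    # weak currents: stratification-prone
s_strong_tides = simpson_hunter(20.0, 1.0)  # strong currents: mixing-prone
```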

  7. Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus.

    Science.gov (United States)

    Bravo, Fernando; Cross, Ian; Hawkins, Sarah; Gonzalez, Nadia; Docampo, Jorge; Bruno, Claudio; Stamatakis, Emmanuel Andreas

    2017-07-28

    We frequently infer others' intentions from non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance levels in terms of positive/negative valence. While consistent with previous studies proposing facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance exerted differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the Autism-Spectrum Quotient (AQ) were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern

    2005-01-01

    and a neural preprocessing system together with a modular neural controller is used to generate a sound tropism in a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network that discerns whether signals come from the left or the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired walking patterns such that the machine walks straight, turns towards a switched-on sound source, and stops near it.

  9. Achieving visibility? Use of non-verbal communication in interactions between patients and pharmacists who do not share a common language.

    Science.gov (United States)

    Stevenson, Fiona

    2014-06-01

    Despite the seemingly insatiable interest in healthcare professional-patient communication, less attention has been paid to the use of non-verbal communication in medical consultations. This article considers pharmacists' and patients' use of non-verbal communication to interact directly in consultations in which they do not share a common language. In total, 12 video-recorded, interpreted pharmacy consultations concerned with a newly prescribed medication or a change in medication were analysed in detail. The analysis focused on instances of direct communication initiated by either the patient or the pharmacist, despite the presence of a multilingual pharmacy assistant acting as an interpreter. Direct communication was shown to occur through (i) the demonstration of a medical device, (ii) the indication of relevant body parts and (iii) the use of limited English. These connections worked to make patients and pharmacists visible to each other and thus to maintain a sense of mutual involvement in consultations within which patients and pharmacists could enact professionally and socially appropriate roles. In a multicultural society this work is important in understanding the dynamics involved in consultations in situations in which language is not shared and thus in considering the development of future research and policy. © 2014 The Author. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for SHIL (SHIL).

  10. Language representation of the emotional state of the personage in non-verbal speech behavior (on the material of Russian and German languages

    Directory of Open Access Journals (Sweden)

    Scherbakova Irina Vladimirovna

    2016-06-01

    Full Text Available The article examines how emotions are actualized in the non-verbal speech behavior of characters in literary texts. Emotions are considered the basic and most actively used mode of a literary character's reaction to an object, an action, or a communicative situation. Non-verbal expressions of emotion give the reader a fuller picture of a character's emotional state. The analysis of non-verbal means of communication in literature focuses on descriptions of kinetic, proxemic and prosodic components. The study material consists of microdialogue fragments extracted by continuous sampling from Russian-language and German-language classical and modern literary texts of the 19th-20th centuries. The analyzed dialogue fragments record the characters' non-verbal behavior across different emotional contents (surprise, joy, fear, anger, rage, excitement, etc.). It was found that emotions in descriptions of characters' non-verbal behavior are verbalized primarily by indirect nomination, expressed through verbal vocabulary, adjectives and adverbs. The lexical level is the most significant in presenting the emotional state of a character.

  11. Specific components of face perception in the human fusiform gyrus studied by tomographic estimates of magnetoencephalographic signals: a tool for the evaluation of non-verbal communication in psychosomatic paradigms

    Directory of Open Access Journals (Sweden)

    Ioannides Andreas A

    2007-12-01

    100 shows that processing of faces is already differentiated from processing of other objects within 100 ms. Standardization of the three face-specific MEG components could have diagnostic value for the integrity of the initial process of non-verbal communication in various psychosomatic paradigms.

  12. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while hearing their vocal pitch feedback unexpectedly perturbed. Compared with the pre-training session, the magnitude of vocal compensation decreased significantly for the control group but remained consistent for the trained group at the post-training session. Moreover, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  13. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basics of sound; the behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and the monitoring of audio signals. Subsequent chapters explore the processing of audio signals, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  14. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  15. Verbal and non-verbal praxic abilities in stutterers

    Directory of Open Access Journals (Sweden)

    Natália Casagrande Brabo

    2009-12-01

    Full Text Available PURPOSE: to characterize verbal and non-verbal praxic abilities in stutterers. METHODS: 40 individuals aged 18 years or older, male and female, took part in the study: 20 adult stutterers and 20 adults without communication complaints. To assess verbal and non-verbal praxis, the participants were administered the Protocol for the Evaluation of Verbal and Non-verbal Apraxia (Martins and Ortiz, 2004). RESULTS: for verbal praxic abilities, there was a statistically significant difference between the groups in the number of typical and atypical disfluencies. Regarding disfluency types, among the typical disfluencies the groups differed significantly only in phrase repetition, whereas among the atypical disfluencies they differed significantly in blocks, syllable repetitions and prolongations. For non-verbal praxic abilities, no statistically significant differences were observed between the groups in performing lip, tongue and jaw movements, either in isolation or in sequence. CONCLUSION: regarding verbal praxic abilities, stutterers showed a higher frequency of speech disruptions, both typical and atypical disfluencies, than the control group. In performing isolated and sequenced praxic movements, that is, non-verbal praxic abilities, stutterers did not differ from fluent speakers, which does not confirm the hypothesis that the early onset of stuttering could compromise non-verbal praxic abilities.

  16. WHAT’S THE “SECRET” OF THE GESTURE LANGUAGE? A FEW CRITICAL REFLECTIONS ON THE PSEUDO-SCIENCES DEALING WITH THE “NON-VERBAL DECODING”

    Directory of Open Access Journals (Sweden)

    PASCAL LARDELLIER

    2015-05-01

    Full Text Available In this article we deal with a situation commonly encountered in contemporary society: representatives of pseudo-sciences invite their readers to learn "to decode non-verbal language". They claim that our body is thereby "readable" and that knowing these "theories" would be enough to read our interlocutors and discover their thoughts and emotions. Clearly, we are faced with a discourse that imitates the rhetorical codes of science but has nothing to do with science. Moreover, these pseudo-sciences have never been presented or discussed within the academic sphere.

  17. Seizure-related factors and non-verbal intelligence in children with epilepsy. A population-based study from Western Norway.

    Science.gov (United States)

    Høie, B; Mykletun, A; Sommerfelt, K; Bjørnaes, H; Skeidsvoll, H; Waaler, P E

    2005-06-01

    To study the relationship between seizure-related factors, non-verbal intelligence, and socio-economic status (SES) in a population-based sample of children with epilepsy. The latest ILAE international classifications of epileptic seizures and syndromes were used to classify seizure types and epileptic syndromes in all 6-12 year old children (N=198) with epilepsy in Hordaland County, Norway. The children had neuropediatric and EEG examinations. Of the 198 patients, demographic characteristics were collected on 183 who participated in psychological studies including the Raven matrices; 126 healthy controls underwent the same testing. Severe non-verbal problems (SNVP) were defined as a Raven score at or below a low percentile cutoff; children with epilepsy were over-represented in the lowest Raven percentile group, whereas controls were highly over-represented in the higher percentile groups. SNVP were present in 43% of children with epilepsy and 3% of controls. These problems were especially common in children with remote symptomatic epilepsy aetiology, undetermined epilepsy syndromes, myoclonic seizures, early seizure debut, high seizure frequency, and polytherapy. Seizure-related characteristics not usually associated with SNVP were idiopathic epilepsies, localization-related (LR) cryptogenic epilepsies, absence and simple partial seizures, and a late debut of epilepsy. Adjusting for socio-economic status factors did not significantly change the results. In childhood epilepsy, various seizure-related factors, but not SES factors, were associated with the presence or absence of SNVP. Such deficits may be especially common in children with remote symptomatic epilepsy aetiology and in complex and therapy-resistant epilepsies. Low frequencies of SNVP may be found in children with idiopathic and LR cryptogenic epilepsy syndromes, simple partial or absence seizures, and a late epilepsy debut. Our study contributes to an overall picture of cognitive function and its relation to central seizure characteristics in a childhood epilepsy population.

  18. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of the synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated asynchronously with the rotation, based on the knowledge that the acoustic components generated in the normal state are a kind of resonance sound and are not precisely synchronized with the rotation rate. Abnormal sounds of a rotating body, on the other hand, are often generated by forced excitation accompanying the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, abnormal-sound detection sensitivity is improved. Further, since the occurrence of abnormal sound is discriminated from the actually detected sounds, other frequency components that are forecast but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
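The rotation-synchronized sampling idea can be sketched as a projection of the measured signal onto harmonics of the shaft rotation frequency; components that are not phase-locked to the rotation average out. All signal parameters below are invented for illustration and assume the rotation frequency is known.

```python
import numpy as np

# Sketch of sampling only rotation-synchronized components: project the
# acoustic signal onto sinusoids at exact multiples of the rotation
# frequency, so broadband resonance energy that is not phase-locked to
# the rotation is rejected.

fs = 8000.0                       # sample rate (Hz)
f_rot = 25.0                      # pump rotation frequency (Hz), assumption
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record -> integer number of cycles

rng = np.random.default_rng(3)
signal = (0.05 * np.sin(2 * np.pi * f_rot * t)          # weak 1x component
          + 0.50 * np.sin(2 * np.pi * 2 * f_rot * t)    # strong 2x "defect"
          + 0.30 * rng.standard_normal(t.size))         # asynchronous noise

def sync_amplitude(x, f):
    """Amplitude of the component phase-locked to frequency f."""
    ref = np.exp(-2j * np.pi * f * t)
    return 2.0 * abs(np.mean(x * ref))

harmonics = {k: sync_amplitude(signal, k * f_rot) for k in (1, 2, 3)}
abnormal = harmonics[2] > 0.2     # threshold on synchronized magnitude
```

Only the harmonics actually present in the data contribute; expected-but-absent harmonics simply yield near-zero amplitudes, consistent with the sensitivity argument in the abstract.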

  19. Transitional Probabilities Are Prioritized over Stimulus/Pattern Probabilities in Auditory Deviance Detection: Memory Basis for Predictive Sound Processing.

    Science.gov (United States)

    Mittag, Maria; Takegata, Rika; Winkler, István

    2016-09-14

    Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our
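The distinction this study tests, pattern probabilities versus transitional probabilities, can be made concrete on a toy high/low tone stream; the triplet proportions below are arbitrary. Note that the rare reversal triplet L-H-L contains only the same tone-to-tone transitions as the standard H-L-H, which is exactly why the two accounts make different predictions for it.

```python
from collections import Counter

# Toy contrast between the two candidate memory representations:
# probabilities of whole tone patterns vs. transitional (first-order)
# probabilities between successive tones.

# Frequent standard triplets H-L-H with rare reversal deviants L-H-L.
triplets = ["HLH"] * 45 + ["LHL"] * 5
stream = "".join(triplets)

# Pattern probabilities: how often each whole triplet occurs.
pattern_p = {p: n / len(triplets) for p, n in Counter(triplets).items()}

# Transitional probabilities: P(next tone | current tone) over the stream.
pairs = Counter(zip(stream, stream[1:]))
totals = Counter(stream[:-1])
trans_p = {(a, b): n / totals[a] for (a, b), n in pairs.items()}
```

Here the reversal triplet is rare as a pattern (`pattern_p["LHL"]` is low) yet contains no low-probability transitions, so a purely transitional-probability detector would find it hard to flag, matching the latency prediction tested in the study.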

  20. Acoustic processing of temporally modulated sounds in infants: evidence from a combined near-infrared spectroscopy and EEG study

    Directory of Open Access Journals (Sweden)

    Silke eTelkemeyer

    2011-04-01

    Full Text Available Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity to temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasingly fine-grained perception of spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for the lateralization of speech perception.
    The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in research on language

  1. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Through several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  2. Creating sound and reversible configurable process models using CoSeNets

    NARCIS (Netherlands)

    Schunselaar, D.M.M.; Verbeek, H.M.W.; Aalst, van der W.M.P.; Reijers, H.A.; Abramowicz, W.; Kriksciuniene, D.; Sakalauskas, V.

    2012-01-01

    All Dutch municipalities offer the same range of services, and the processes delivering these services are quite similar. Therefore, these municipalities can benefit from configurable process models. This requires the merging of existing process variants into configurable models. Unfortunately,

  3. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    Science.gov (United States)

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant
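
    The pitch-extraction algorithm cited above (de Cheveigné and Kawahara, 2002) is YIN. As a rough illustration of its core idea only, the sketch below implements the difference function and cumulative mean normalized difference on a synthetic tone; the fixed threshold and the omission of parabolic interpolation are simplifications, not the published estimator.

    ```python
    import numpy as np

    def yin_f0(x, sr, fmin=80.0, fmax=500.0, threshold=0.1):
        # Difference function d(tau) over candidate lags
        tau_min = int(sr / fmax)
        tau_max = int(sr / fmin)
        d = np.array([np.sum((x[:-tau] - x[tau:]) ** 2)
                      for tau in range(1, tau_max + 1)])
        # Cumulative mean normalized difference d'(tau)
        cmnd = d * np.arange(1, tau_max + 1) / np.cumsum(d)
        # First lag below threshold, then walk down to the dip's minimum
        below = np.where(cmnd[tau_min - 1:] < threshold)[0]
        tau = below[0] + tau_min if below.size else int(np.argmin(cmnd[tau_min - 1:])) + tau_min
        while tau < tau_max and cmnd[tau] < cmnd[tau - 1]:
            tau += 1
        return sr / tau

    sr = 8000
    t = np.arange(0, 0.5, 1.0 / sr)
    tone = np.sin(2 * np.pi * 220.0 * t)   # synthetic 220 Hz tone
    f0_est = yin_f0(tone, sr)
    print(round(f0_est))
    ```

    On this pure tone the estimate lands within a few hertz of 220; real-life sounds such as those used in the study require the algorithm's additional refinement steps.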

  4. Near-infrared-spectroscopic study on processing of sounds in the brain; a comparison between native and non-native speakers of Japanese.

    Science.gov (United States)

    Tsunoda, Koichi; Sekimoto, Sotaro; Itoh, Kenji

    2016-06-01

    Conclusions The results suggested that mother tongue Japanese (MJ) and non-mother tongue Japanese (non-MJ) speakers differ in their pattern of brain dominance when listening to sounds from the natural world, in particular insect sounds. These results provide significant support for Tsunoda's earlier findings (1970). Objectives This study concentrates on listeners who show clear evidence of a 'speech' brain vs a 'music' brain and determines which side is most active in the processing of insect sounds, using near-infrared spectroscopy. Methods The present study uses 2-channel near-infrared spectroscopy (NIRS) to provide a more direct measure of left- and right-brain activity while participants listen to each of three types of sounds: Japanese speech, Western violin music, or insect sounds. Data were obtained from 33 participants who showed laterality on opposite sides for Japanese speech and Western music. Results A majority (80%) of the MJ participants exhibited dominance for insect sounds on the side that was dominant for language, while a majority (62%) of the non-MJ participants exhibited dominance for insect sounds on the side that was dominant for music.

  5. Orofacial Pain during Mastication in People with Dementia: Reliability Testing of the Orofacial Pain Scale for Non-Verbal Individuals

    Directory of Open Access Journals (Sweden)

    Merlijn W. de Vries

    2016-01-01

    Full Text Available Objectives. The aim of this study was to establish the reliability of the “chewing” subscale of the OPS-NVI, a novel tool designed to estimate presence and severity of orofacial pain in nonverbal patients. Methods. The OPS-NVI consists of 16 items for observed behavior, classified into four categories, and a subjective estimate of pain. Two observers used the OPS-NVI for 237 video clips of people with dementia in Dutch nursing homes during their meal to observe their behavior and to estimate the intensity of orofacial pain. Six weeks later, the same observers rated the video clips a second time. Results. Bottom and ceiling effects for some items were found. This resulted in exclusion of these items from the statistical analyses. The categories which included the remaining items (n=6) showed reliability varying between fair-to-good and excellent (interobserver reliability, ICC: 0.40–0.47; intraobserver reliability, ICC: 0.40–0.92). Conclusions. The “chewing” subscale of the OPS-NVI showed a fair-to-good to excellent interobserver and intraobserver reliability in this dementia population. This study contributes to the validation process of the OPS-NVI as a whole and stresses the need for further assessment of the reliability of the OPS-NVI with subjects that might already show signs of orofacial pain.
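
    For readers unfamiliar with the reported coefficients, the sketch below computes a two-way random-effects, single-measure ICC(2,1) in the Shrout and Fleiss formulation on toy ratings; whether the study used this exact ICC model is an assumption, and the scores are invented.

    ```python
    import numpy as np

    def icc_2_1(ratings):
        # ratings: subjects x raters; two-way random effects, single measure
        Y = np.asarray(ratings, dtype=float)
        n, k = Y.shape
        grand = Y.mean()
        row_means = Y.mean(axis=1)   # per subject
        col_means = Y.mean(axis=0)   # per rater
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
        sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # two observers in perfect agreement yield an ICC of exactly 1.0
    scores = np.array([[2, 2], [5, 5], [3, 3], [4, 4], [1, 1]])
    icc = icc_2_1(scores)
    print(round(icc, 3))
    ```

    Values in the 0.40–0.75 range are conventionally read as fair-to-good and values above 0.75 as excellent, which matches the labels used in the abstract.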

  6. Orofacial Pain during Mastication in People with Dementia: Reliability Testing of the Orofacial Pain Scale for Non-Verbal Individuals.

    Science.gov (United States)

    de Vries, Merlijn W; Visscher, Corine; Delwel, Suzanne; van der Steen, Jenny T; Pieper, Marjoleine J C; Scherder, Erik J A; Achterberg, Wilco P; Lobbezoo, Frank

    2016-01-01

    Objectives. The aim of this study was to establish the reliability of the "chewing" subscale of the OPS-NVI, a novel tool designed to estimate presence and severity of orofacial pain in nonverbal patients. Methods. The OPS-NVI consists of 16 items for observed behavior, classified into four categories and a subjective estimate of pain. Two observers used the OPS-NVI for 237 video clips of people with dementia in Dutch nursing homes during their meal to observe their behavior and to estimate the intensity of orofacial pain. Six weeks later, the same observers rated the video clips a second time. Results. Bottom and ceiling effects for some items were found. This resulted in exclusion of these items from the statistical analyses. The categories which included the remaining items (n = 6) showed reliability varying between fair-to-good and excellent (interobserver reliability, ICC: 0.40-0.47; intraobserver reliability, ICC: 0.40-0.92). Conclusions. The "chewing" subscale of the OPS-NVI showed a fair-to-good to excellent interobserver and intraobserver reliability in this dementia population. This study contributes to the validation process of the OPS-NVI as a whole and stresses the need for further assessment of the reliability of the OPS-NVI with subjects that might already show signs of orofacial pain.

  7. A survey of formal business process verification : From soundness to variability

    NARCIS (Netherlands)

    Groefsema, Heerko; Bucur, Doina

    2013-01-01

    Formal verification of business process models is of interest to a number of application areas, including checking for basic process correctness, business compliance, and process variability. A large amount of work on these topics exist, while a comprehensive overview of the field and its directions

  8. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound...... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology....

  9. "You can also save a life!": children's drawings as a non-verbal assessment of the impact of cardiopulmonary resuscitation training.

    Science.gov (United States)

    Petriş, Antoniu Octavian; Tatu-Chiţoiu, Gabriel; Cimpoeşu, Diana; Ionescu, Daniela Florentina; Pop, Călin; Oprea, Nadia; Ţînţ, Diana

    2017-04-01

    Drawings made by children trained in cardiopulmonary resuscitation (CPR) during the special education week called "School otherwise" can be used as non-verbal means of expression and communication to assess the impact of such training. We analyzed the questionnaires and drawings completed by 327 schoolchildren in different stages of education. After a brief overview of the basic life support (BLS) steps and after watching a video presenting the dynamic performance of the BLS sequence, subjects were asked to complete a questionnaire and make a drawing to express the main CPR messages. Questionnaires were completed in full in 97.6% of cases and drawings were made in 90.2%. Half of the subjects had already witnessed some kind of medical emergency and 96.94% knew the correct "112" emergency phone number. The drawings were mostly single images (83.81%) and less often cartoon strips (16.18%). The main themes of the slogans were "Save a life!", "Help!", "Call 112!", and "Do not be indifferent/insensible/apathetic!". Through the interpretation of drawings, CPR trainers can use art as a way to build a better relationship with schoolchildren, to connect with their thoughts and feelings, and to obtain the highest-quality education.

  10. Exploring the Domain Specificity of Creativity in Children: The Relationship between a Non-Verbal Creative Production Test and Creative Problem-Solving Activities

    Directory of Open Access Journals (Sweden)

    Ahmed Mohamed

    2012-12-01

    Full Text Available Abstract In this study, we explored whether creativity was domain specific or domain general. The relationships between students’ scores on three creative problem-solving activities (math, spatial artistic, and oral linguistic) in the DISCOVER assessment (Discovering Intellectual Strengths and Capabilities While Observing Varied Ethnic Responses) and the TCT-DP (Test of Creative Thinking-Drawing Production), a non-verbal general measure of creativity, were examined. The participants were 135 first and second graders from two schools in the Southwestern United States from linguistically and culturally diverse backgrounds. Pearson correlations, canonical correlations, and multiple regression analyses were calculated to describe the relationship between the TCT-DP and the three DISCOVER creative problem-solving activities. We found that creativity has both domain-specific and domain-general aspects, but that the domain-specific component seemed more prominent. One implication of these results is that educators should consider assessing creativity in specific domains to place students in special programs for gifted students rather than relying only on domain-general measures of divergent thinking or creativity.

  11. Heart rate variability during acute psychosocial stress: A randomized cross-over trial of verbal and non-verbal laboratory stressors.

    Science.gov (United States)

    Brugnera, Agostino; Zarbo, Cristina; Tarvainen, Mika P; Marchettini, Paolo; Adorni, Roberta; Compare, Angelo

    2018-05-01

    Acute psychosocial stress is typically investigated in laboratory settings using protocols with distinctive characteristics. For example, some tasks involve the action of speaking, which seems to alter Heart Rate Variability (HRV) through acute changes in respiration patterns. However, it is still unknown which task induces the strongest subjective and autonomic stress response. The present cross-over randomized trial sought to investigate the differences in perceived stress and in linear and non-linear analyses of HRV between three different verbal (Speech and Stroop) and non-verbal (Montreal Imaging Stress Task; MIST) stress tasks, in a sample of 60 healthy adults (51.7% females; mean age = 25.6 ± 3.83 years). Analyses were run controlling for respiration rates. Participants reported similar levels of perceived stress across the three tasks. However, the MIST induced a stronger cardiovascular response than the Speech and Stroop tasks, even after controlling for respiration rates. Finally, women reported higher levels of perceived stress and lower HRV both at rest and in response to acute psychosocial stressors, compared to men. Taken together, our results suggest the presence of gender-related differences during psychophysiological experiments on stress. They also suggest that verbal activity masked the vagal withdrawal through altered respiration patterns imposed by speaking. Therefore, our findings support the use of a highly standardized math task, such as the MIST, as a valid and reliable alternative to verbal protocols during laboratory studies on stress. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Lexical processing and distributional knowledge in sound-spelling mapping in a consistent orthography: A longitudinal study of reading and spelling in dyslexic and typically developing children.

    Science.gov (United States)

    Marinelli, Chiara Valeria; Cellini, Pamela; Zoccolotti, Pierluigi; Angelelli, Paola

    This study examined the ability to master lexical processing and use knowledge of the relative frequency of sound-spelling mappings in both reading and spelling. Twenty-four dyslexic and dysgraphic children and 86 typically developing readers were followed longitudinally in 3rd and 5th grades. Effects of word regularity, word frequency, and probability of sound-spelling mappings were examined in two experimental tasks: (a) spelling to dictation; and (b) orthographic judgment. Dyslexic children showed larger regularity and frequency effects than controls in both tasks. Sensitivity to distributional information of sound-spelling mappings was already detected by third grade, indicating early acquisition even in children with dyslexia. Although with notable differences, knowledge of the relative frequencies of sound-spelling mapping influenced both reading and spelling. Results are discussed in terms of their theoretical and empirical implications.
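
    The "probability of sound-spelling mappings" manipulated in this study can be estimated from corpus counts: for each phoneme, the relative frequency with which it is written with each candidate grapheme. The sketch below illustrates the computation on a hypothetical mini-corpus; the phoneme-grapheme pairs are invented, not the study's Italian stimuli.

    ```python
    from collections import Counter, defaultdict

    # Toy phoneme-grapheme observations (hypothetical counts)
    corpus = [("/k/", "c"), ("/k/", "c"), ("/k/", "c"), ("/k/", "ch"),
              ("/ts/", "z"), ("/ts/", "z"), ("/ts/", "zz")]

    counts = defaultdict(Counter)
    for phoneme, grapheme in corpus:
        counts[phoneme][grapheme] += 1

    # conditional probability of each spelling given the sound
    mapping_prob = {p: {g: n / sum(c.values()) for g, n in c.items()}
                    for p, c in counts.items()}
    print(mapping_prob["/k/"])   # {'c': 0.75, 'ch': 0.25}
    ```

    Items whose correct spelling is the dominant mapping are "regular"; items requiring a lower-probability mapping probe sensitivity to this distributional knowledge.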

  13. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  14. Sound asleep: Processing and retention of slow oscillation phase-targeted stimuli

    NARCIS (Netherlands)

    Cox, R.; Korjoukov, I.; de Boer, M.; Talamini, L.M.

    2014-01-01

    The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow

  15. Processing of complex distracting sounds in school-aged children and adults: Evidence from EEG and MEG data

    Directory of Open Access Journals (Sweden)

    Philipp Ruhnau

    2013-10-01

    Full Text Available When a perceiver performs a task, rarely occurring sounds often have a distracting effect on task performance. The neural mismatch responses in event-related potentials to such distracting stimuli depend on age. Adults commonly show a negative response, whereas in children a positive as well as a negative mismatch response has been reported. Using electro- and magnetoencephalography (EEG/MEG), here we investigated the developmental changes of distraction processing in school-aged children (9–10 years) and adults. Participants took part in an auditory-visual distraction paradigm comprising a visuo-spatial primary task and task-irrelevant environmental sounds distracting from this task. Behaviorally, distractors delayed reaction times in the primary task in both age groups, and this delay was of similar magnitude in both groups. The neurophysiological data revealed an early as well as a late mismatch response elicited by distracting stimuli in both age groups. Together with previous research, this indicates that deviance detection is accomplished in a hierarchical manner in the auditory system. Both mismatch responses were localized to auditory cortex areas. All mismatch responses were generally delayed in children, suggesting that not all neurophysiological aspects of deviance processing are mature in school-aged children. Furthermore, the P3a, reflecting involuntary attention capture, was present in both age groups in the EEG with comparable amplitudes and at similar latencies, but with a different topographical distribution. This suggests that involuntary attention shifts towards complex distractors operate comparably in school-aged children and adults, albeit with still-maturing neural generators.

  16. Affective priming effects of musical sounds on the processing of word meaning.

    Science.gov (United States)

    Steinbeis, Nikolaus; Koelsch, Stefan

    2011-03-01

    Recent studies have shown that music is capable of conveying semantically meaningful concepts. Several questions have subsequently arisen particularly with regard to the precise mechanisms underlying the communication of musical meaning as well as the role of specific musical features. The present article reports three studies investigating the role of affect expressed by various musical features in priming subsequent word processing at the semantic level. By means of an affective priming paradigm, it was shown that both musically trained and untrained participants evaluated emotional words congruous to the affect expressed by a preceding chord faster than words incongruous to the preceding chord. This behavioral effect was accompanied by an N400, an ERP typically linked with semantic processing, which was specifically modulated by the (mis)match between the prime and the target. This finding was shown for the musical parameter of consonance/dissonance (Experiment 1) and then extended to mode (major/minor) (Experiment 2) and timbre (Experiment 3). Seeing that the N400 is taken to reflect the processing of meaning, the present findings suggest that the emotional expression of single musical features is understood by listeners as such and is probably processed on a level akin to other affective communications (i.e., prosody or vocalizations) because it interferes with subsequent semantic processing. There were no group differences, suggesting that musical expertise does not have an influence on the processing of emotional expression in music and its semantic connotations.

  17. Sound asleep: processing and retention of slow oscillation phase-targeted stimuli.

    Science.gov (United States)

    Cox, Roy; Korjoukov, Ilia; de Boer, Marieke; Talamini, Lucia M

    2014-01-01

    The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow oscillations in real-time. Using this approach to present stimuli directed at both oscillatory up and down states, we show neural stimulus processing depends importantly on the slow oscillation phase. During ensuing wakefulness, however, we did not observe differential brain or behavioral responses to these stimulus categories, suggesting no enduring memories were formed. We speculate that while simpler forms of learning may occur during sleep, neocortically based memories are not readily established during deep sleep.
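
    The abstract does not detail the authors' real-time prediction algorithm. Purely as an offline illustration of the idea of phase targeting, the toy sketch below detects down-state troughs in an idealized slow oscillation and schedules stimuli half a period later, at the predicted up state; the frequency, threshold, and scheduling rule are all assumptions.

    ```python
    import numpy as np

    def predict_up_states(signal, sr, so_freq=0.8):
        period = sr / so_freq
        thresh = -0.5 * np.std(signal)
        troughs = []
        for i in range(1, len(signal) - 1):
            is_min = signal[i] <= signal[i - 1] and signal[i] < signal[i + 1]
            if signal[i] < thresh and is_min:
                # refractory guard: at most one trough per half period
                if not troughs or i - troughs[-1] > period / 2:
                    troughs.append(i)
        # schedule stimulation half a period after each detected trough
        return [int(tr + period / 2) for tr in troughs]

    sr = 100                                   # Hz, toy sampling rate
    t = np.arange(0, 10, 1.0 / sr)
    eeg = np.sin(2 * np.pi * 0.8 * t)          # idealized 0.8 Hz slow oscillation
    targets = predict_up_states(eeg, sr)
    on_peak = all(eeg[i] > 0.9 for i in targets if i < len(eeg))
    print(on_peak)
    ```

    On this noise-free signal every scheduled stimulus lands near an up-state peak; real EEG would additionally require band-pass filtering and ongoing period re-estimation.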

  18. Sound asleep: processing and retention of slow oscillation phase-targeted stimuli.

    Directory of Open Access Journals (Sweden)

    Roy Cox

    Full Text Available The sleeping brain retains some residual information processing capacity. Although direct evidence is scarce, a substantial literature suggests the phase of slow oscillations during deep sleep to be an important determinant for stimulus processing. Here, we introduce an algorithm for predicting slow oscillations in real-time. Using this approach to present stimuli directed at both oscillatory up and down states, we show neural stimulus processing depends importantly on the slow oscillation phase. During ensuing wakefulness, however, we did not observe differential brain or behavioral responses to these stimulus categories, suggesting no enduring memories were formed. We speculate that while simpler forms of learning may occur during sleep, neocortically based memories are not readily established during deep sleep.

  19. Taking a call is facilitated by the multisensory processing of smartphone vibrations, sounds, and flashes.

    Directory of Open Access Journals (Sweden)

    Ulrich Pomper

    Full Text Available Many electronic devices that we use in our daily lives provide inputs that need to be processed and integrated by our senses. For instance, ringing, vibrating, and flashing indicate incoming calls and messages in smartphones. Whether the presentation of multiple smartphone stimuli simultaneously provides an advantage over the processing of the same stimuli presented in isolation has not yet been investigated. In this behavioral study we examined multisensory processing between visual (V), tactile (T), and auditory (A) stimuli produced by a smartphone. Unisensory V, T, and A stimuli as well as VA, AT, VT, and trisensory VAT stimuli were presented in random order. Participants responded to any stimulus appearance by touching the smartphone screen using the stimulated hand (Experiment 1) or the non-stimulated hand (Experiment 2). We examined violations of the race model to test whether shorter response times to multisensory stimuli exceed probability summations of unisensory stimuli. Significant violations of the race model, indicative of multisensory processing, were found for VA stimuli in both experiments and for VT stimuli in Experiment 1. Across participants, the strength of this effect was not associated with prior learning experience and daily use of smartphones. This indicates that this integration effect, similar to what has been previously reported for the integration of semantically meaningless stimuli, could involve bottom-up driven multisensory processes. Our study demonstrates for the first time that multisensory processing of smartphone stimuli facilitates taking a call. Thus, research on multisensory integration should be taken into consideration when designing electronic devices such as smartphones.
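
    The race-model test referred to here is Miller's (1982) inequality: under probability summation, the multisensory reaction-time distribution must satisfy F_VA(t) ≤ F_V(t) + F_A(t) for all t, so any positive excess is evidence of integration. A minimal sketch on simulated reaction times (the distributions are invented, not the study's data):

    ```python
    import numpy as np

    def ecdf(sample, t):
        # empirical CDF of reaction times evaluated on grid t
        return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

    def max_race_violation(rt_v, rt_a, rt_va, grid):
        # positive maximum -> the race-model bound is violated somewhere
        bound = np.minimum(1.0, ecdf(rt_v, grid) + ecdf(rt_a, grid))
        return float(np.max(ecdf(rt_va, grid) - bound))

    rng = np.random.default_rng(0)
    rt_v = rng.normal(400, 40, 1000)    # hypothetical visual-only RTs (ms)
    rt_a = rng.normal(420, 40, 1000)    # hypothetical auditory-only RTs (ms)
    rt_va = rng.normal(330, 30, 1000)   # hypothetical bimodal (VA) RTs (ms)
    grid = np.linspace(250, 500, 101)
    violation = max_race_violation(rt_v, rt_a, rt_va, grid)
    print(violation > 0)
    ```

    Because the simulated bimodal responses are faster than probability summation of the unisensory distributions allows, the bound is clearly violated.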

  20. Brief Report: Biological Sound Processing in Children with Autistic Spectrum Disorder

    Science.gov (United States)

    Lortie, Melissa; Proulx-Bégin, Léa; Saint-Amour, Dave; Cousineau, Dominique; Théoret, Hugo; Lepage, Jean-François

    2017-01-01

    There is debate whether social impairments in autism spectrum disorder (ASD) are truly domain-specific, or if they reflect generalized deficits in lower-level cognitive processes. To solve this issue, we used auditory-evoked EEG responses to assess novelty detection (MMN component) and involuntary attentional orientation (P3 component) induced by…

  1. When a hit sounds like a kiss : An electrophysiological exploration of semantic processing in visual narrative

    NARCIS (Netherlands)

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-01-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual

  2. Musical and linguistic expertise influence pre-attentive and attentive processing of non-speech sounds.

    Science.gov (United States)

    Marie, Céline; Kujala, Teija; Besson, Mireille

    2012-04-01

    The aim of this experiment was two-fold. Our first goal was to determine whether linguistic expertise influences the pre-attentive [as reflected by the Mismatch Negativity (MMN)] and the attentive processing (as reflected by behavioural discrimination accuracy) of non-speech, harmonic sounds. The second was to directly compare the effects of linguistic and musical expertise. To this end, we compared non-musician native speakers of a quantity language, Finnish, in which duration is a phonemically contrastive cue, with French musicians and French non-musicians. Results revealed that pre-attentive and attentive processing of duration deviants was enhanced in Finnish non-musicians and French musicians compared to French non-musicians. By contrast, the MMN in French musicians was larger than in both Finnish participants and French non-musicians for frequency deviants, whereas no between-group differences were found for intensity deviants. By showing similar effects of linguistic and musical expertise, these results argue in favor of common processing of duration in music and speech. Copyright © 2010 Elsevier Srl. All rights reserved.

  3. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability.

    Science.gov (United States)

    Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P

    2008-08-01

    Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form. The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  4. Cortical processing of speech and non-speech sounds in autism and Asperger syndrome

    OpenAIRE

    Lepistö, Tuulia

    2008-01-01

    Autism and Asperger syndrome (AS) are neurodevelopmental disorders characterised by deficient social and communication skills, as well as restricted, repetitive patterns of behaviour. The language development in individuals with autism is significantly delayed and deficient, whereas in individuals with AS, the structural aspects of language develop quite normally. Both groups, however, have semantic-pragmatic language deficits. The present thesis investigated auditory processing in individual...

  5. Breaking the sound barrier: exploring parents' decision-making process of cochlear implants for their children.

    Science.gov (United States)

    Chang, Pamara F

    2017-08-01

    To understand the dynamic experiences of parents undergoing the decision-making process regarding cochlear implants for their child(ren). Thirty-three parents of d/Deaf children participated in semi-structured interviews. Interviews were digitally recorded, transcribed, and coded using iterative and thematic coding. The results from this study reveal four salient topics related to parents' decision-making process regarding cochlear implantation: 1) factors parents considered when making the decision to get the cochlear implant for their child (e.g., desire to acculturate child into one community), 2) the extent to which parents' communities influence their decision-making (e.g., norms), 3) information sources parents seek and value when decision-making (e.g., parents value other parents' experiences the most, compared to medical or online sources), and 4) personal experiences with stigma affecting their decision to not get the cochlear implant for their child. This study provides insights into values and perspectives that can be utilized to improve informed decision-making when making risky medical decisions with long-term implications. With thorough information provision, careful attention to parents' concerns, and coverage of all aspects of the decision (i.e., medical, social, and cultural), health professional teams could reduce parents' uncertainty and anxiety in the decision-making process for cochlear implantation. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  7. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed in the acoustic design of rooms: the noise emitted by machinery and plants must be reduced before it reaches a workplace, and auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to prevent sound immission from noise-intensive environments into the neighbourhood.

  8. The impact of traditional coffee processing on river water quality in Ethiopia and the urgency of adopting sound environmental practices.

    Science.gov (United States)

    Beyene, Abebe; Kassahun, Yared; Addis, Taffere; Assefa, Fassil; Amsalu, Aklilu; Legesse, Worku; Kloos, Helmut; Triest, Ludwig

    2012-11-01

    Although waste from coffee processing is a valuable resource for making biogas, compost, and nutrient-rich animal feed, it is usually dumped into nearby water courses. We carried out a water quality assessment at 44 sampling sites along 18 rivers that receive untreated waste from 23 coffee pulping and processing plants in Jimma Zone, Ethiopia. Twenty upstream sampling sites free from coffee waste impact served as controls, and 24 downstream sampling sites affected by coffee waste were selected for comparison. Physicochemical and biological results revealed significant river water quality deterioration as a result of disposing of untreated coffee waste into running water courses. During the coffee-processing (wet) season, the highest organic load (1,900 mg/l), measured as biochemical oxygen demand, depleted dissolved oxygen (DO) to less than 0.01 mg/l and thus curtailed nitrification. During the off season, oxygen began to recuperate, which augmented nitrification. The shift from significantly elevated organic load and reduced DO in the wet season to increased nitrate in the off season was found to be the determining factor for the difference in macroinvertebrate community structure, as verified by ordination analysis. Macroinvertebrate diversity was significantly reduced at impacted sites during the wet season, in contrast to the off season. However, there was a significant difference in the ratio of sensitive to pollution-tolerant taxa in the off season, which remained depressed over the longer term. This study highlights the urgency of research exploring the feasibility of adopting appropriate pollution abatement technologies to implement ecologically sound coffee-processing systems in the coffee-growing regions of Ethiopia.

  9. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires detailed knowledge of neuronal connectivity between functional cortical regions. In humans, it is difficult to track neuronal connectivity in vivo. We investigated inter-area connectivity in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEP recordings from the insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidally modulated white noise in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allows estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one, whatever the modulation frequency, 2) a unidirectional functional connection from the primary to the secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus in dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be associated with a unidirectional traveling wave but with a constant interaction between these areas, which could reflect the large adaptive and plastic capacities of the auditory cortex. The role of the IG is discussed.
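    Directed coherence belongs to the family of Granger-causal measures: a signal x "drives" y if x's past improves the prediction of y beyond what y's own past provides. The time-domain core of that idea can be sketched as follows; this is a minimal illustration, not the frequency-resolved DCOH estimator used in the paper, and the AR order and simulated coupling are arbitrary choices:

```python
import numpy as np

def granger_index(src, dst, order=4):
    """Log-ratio of residual variances: > 0 means src's past helps predict dst."""
    n = len(dst)
    # Lagged design matrix: dst history (first `order` columns), then src history.
    X_full = np.column_stack(
        [dst[order - k - 1:n - k - 1] for k in range(order)] +
        [src[order - k - 1:n - k - 1] for k in range(order)]
    )
    y = dst[order:]
    X_red = X_full[:, :order]   # reduced model: dst history only

    def mse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.mean((y - X @ beta) ** 2)

    return float(np.log(mse(X_red) / mse(X_full)))

# Simulate a unidirectional coupling x -> y.
rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()

print(granger_index(x, y))  # clearly positive: x drives y
print(granger_index(y, x))  # near zero: no feedback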

  10. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  11. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  12. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  13. Smart Sound Processing for Defect Sizing in Pipelines Using EMAT Actuator Based Multi-Frequency Lamb Waves

    Directory of Open Access Journals (Sweden)

    Joaquín García-Gómez

    2018-03-01

    Full Text Available Pipeline inspection is a topic of particular interest to companies: accurate defect sizing allows them to avoid subsequent costly repairs to their equipment. One solution is to use ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is that no direct contact with the surface of the material under investigation is needed, although the material must be conductive. Meander-line-coil based Lamb wave generation is especially interesting, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal as it passes through the pipeline. Because of that, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimates of the depth of the defect, and selecting the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE).

  14. Smart Sound Processing for Defect Sizing in Pipelines Using EMAT Actuator Based Multi-Frequency Lamb Waves.

    Science.gov (United States)

    García-Gómez, Joaquín; Gil-Pita, Roberto; Rosa-Zurera, Manuel; Romero-Camacho, Antonio; Jiménez-Garrido, Jesús Antonio; García-Benavides, Víctor

    2018-03-07

    Pipeline inspection is a topic of particular interest to companies: accurate defect sizing allows them to avoid subsequent costly repairs to their equipment. One solution is to use ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is that no direct contact with the surface of the material under investigation is needed, although the material must be conductive. Meander-line-coil based Lamb wave generation is especially interesting, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal as it passes through the pipeline. Because of that, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimates of the depth of the defect, and selecting the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE).
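    The SSP pipeline described above — multi-frequency features in, a nonlinear depth estimate out, scored by RMSE — can be sketched on synthetic data. Everything here (the exponential feature model, decay constants, noise level and quadratic regressor) is an illustrative assumption, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth: defect depths in mm.
depth = rng.uniform(0.5, 5.0, 400)

# Hypothetical wrap-around signal features at three excitation frequencies:
# amplitude decays nonlinearly with defect depth, plus measurement noise.
decay = np.array([0.3, 0.6, 0.9])
features = np.exp(-np.outer(depth, decay)) + 0.01 * rng.standard_normal((400, 3))

def expand(F):
    """Quadratic feature expansion: a simple nonlinear estimator via least squares."""
    return np.column_stack([np.ones(len(F)), F, F ** 2])

train, test = slice(0, 300), slice(300, 400)
beta, *_ = np.linalg.lstsq(expand(features[train]), depth[train], rcond=None)
pred = expand(features[test]) @ beta

rmse = np.sqrt(np.mean((pred - depth[test]) ** 2))
print(f"RMSE on held-out samples: {rmse:.3f} mm")
```

The same scoring (RMSE of predicted vs. true depth on held-out measurements) applies unchanged when the quadratic regressor is replaced by a richer nonlinear model.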

  15. Psychometric evaluation of the Orofacial Pain Scale for Non-Verbal Individuals as a screening tool for orofacial pain in people with dementia.

    Science.gov (United States)

    Delwel, Suzanne; Perez, Roberto S G M; Maier, Andrea B; Hertogh, Cees M P M; de Vet, Henrica C W; Lobbezoo, Frank; Scherder, Erik J A

    2018-04-29

    The aim of this study was to describe the psychometric evaluation of the Orofacial Pain Scale for Non-Verbal Individuals (OPS-NVI) as a screening tool for orofacial pain in people with dementia. The OPS-NVI has recently been developed and needs psychometric evaluation for clinical use in people with dementia. The pain self-report is imperative as a reference standard and can be provided by people with mild-to-moderate cognitive impairment. The presence of orofacial pain during rest, drinking, chewing and oral hygiene care was observed in people with mild cognitive impairment (MCI) and dementia using the OPS-NVI. Participants who were considered to present a reliable self-report were asked about pain presence, and in all participants, the oral health was examined by a dentist for the presence of potential painful conditions. After item-reduction, inter-rater reliability and criterion validity were determined. The presence of orofacial pain in this population was low (0%-10%), resulting in an average Positive Agreement of 0%-100%, an average Negative Agreement of 77%-100%, a sensitivity of 0%-100% and a specificity of 66%-100% for the individual items of the OPS-NVI. At the same time, the presence of oral problems, such as ulcers, tooth root remnants and caries was high (64.5%). The orofacial pain presence in this MCI and dementia population was low, resulting in low scores for average Positive Agreement and sensitivity and high scores for average Negative Agreement and specificity. Therefore, the OPS-NVI in its current form cannot be recommended as a screening tool for orofacial pain in people with MCI and dementia. However, the inter-rater reliability and criterion validity of the individual items in this study provide more insight for the further adjustment of the OPS-NVI for diagnostic use. Notably, oral health problems were frequently present, although no pain was reported or observed, indicating that oral health problems cannot be used as a new reference

  16. A influência da comunicação não verbal no cuidado de enfermagem La influencia de la comunicación no verbal en la atención de enfermería The influence of non-verbal communication in nursing care

    Directory of Open Access Journals (Sweden)

    Carla Cristina Viana Santos

    2005-08-01

    Nursing School Alfredo Pinto UNIRIO, and it started during the development of a monograph. The object of the study is the meaning of non-verbal communication from the perspective of nursing undergraduates. The study has the following objectives: to determine how non-verbal communication is understood by undergraduate nursing students, and to analyze how that understanding influences nursing care. The methodological approach was qualitative, and dynamics of sensitivity were applied as the strategy for data collection. It was observed that undergraduate students recognize the relevance and influence of non-verbal communication in nursing care; however, there is a need to broaden knowledge of the non-verbal communication process before nursing care is implemented.

  17. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  18. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: first, the filtered responses should generate an acoustic separation between the control regions; secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...
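    The filter-design problem behind sound zones can be illustrated in its simplest narrowband form: regularized least squares ("pressure matching") that drives a target pressure in the bright zone and silence in the dark zone. The geometry, frequency and regularization weight below are arbitrary assumptions for illustration; the paper's actual broadband, ringing-penalized formulation is more involved:

```python
import numpy as np

c, f = 343.0, 500.0          # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c        # wavenumber

def green(src, pts):
    """Free-field monopole transfer functions from sources to field points."""
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Line array of 8 loudspeakers and two listening zones 2 m away.
speakers = np.column_stack([np.linspace(-0.7, 0.7, 8), np.zeros(8)])
bright = np.column_stack([np.linspace(-0.6, -0.4, 5), np.full(5, 2.0)])
dark = np.column_stack([np.linspace(0.4, 0.6, 5), np.full(5, 2.0)])

Gb, Gd = green(speakers, bright), green(speakers, dark)

# Pressure matching: target pressure 1 in the bright zone, 0 in the dark zone,
# with an L2-norm (Tikhonov) penalty on the loudspeaker weights.
G = np.vstack([Gb, Gd])
p = np.concatenate([np.ones(5), np.zeros(5)])
lam = 1e-6
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(8), G.conj().T @ p)

# Acoustic contrast: mean energy in the bright zone vs. the dark zone.
contrast_db = 10 * np.log10(np.mean(np.abs(Gb @ w) ** 2)
                            / np.mean(np.abs(Gd @ w) ** 2))
print(f"acoustic contrast: {contrast_db:.1f} dB")
```

Increasing `lam` trades acoustic contrast for smaller loudspeaker effort, which is the narrowband analogue of the separation-vs.-ringing tradeoff discussed in the abstract.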

  19. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  20. Auditory processing and phonological awareness skills of five-year-old children with and without musical experience.

    Science.gov (United States)

    Escalda, Júlia; Lemos, Stela Maris Aguiar; França, Cecília Cavalieri

    2011-09-01

    To investigate the relations between musical experience, auditory processing and phonological awareness in groups of 5-year-old children with and without musical experience. Participants were 56 5-year-old subjects of both genders, 26 in the Study Group, consisting of children with musical experience, and 30 in the Control Group, consisting of children without musical experience. All participants were assessed with the Simplified Auditory Processing Assessment and the Phonological Awareness Test, and the data were statistically analyzed. There was a statistically significant difference between the groups in the sequential memory test for verbal and non-verbal sounds with four stimuli, and in the phonological awareness tasks of rhyme recognition, phonemic synthesis and phonemic deletion. Multiple binary logistic regression analysis showed that, with the exception of sequential verbal memory with four syllables, the observed difference in the subjects' performance was associated with their musical experience. Musical experience improves the auditory and metalinguistic abilities of 5-year-old children.

  1. Introducing the Oxford Vocal (OxVoc Sounds Database: A validated set of non-acted affective sounds from human infants, adults and domestic animals

    Directory of Open Access Journals (Sweden)

    Christine eParsons

    2014-06-01

    Full Text Available Sound moves us. Nowhere is this more apparent than in our responses to genuine emotional vocalisations, be they heartfelt distress cries or raucous laughter. Here, we present perceptual ratings and a description of a freely available, large database of natural affective vocal sounds from human infants, adults and domestic animals, the Oxford Vocal (OxVoc) Sounds database. This database consists of 173 non-verbal sounds expressing a range of happy, sad and neutral emotional states. Ratings are presented for the sounds on a range of dimensions from a number of independent participant samples. Perceptions related to valence, including distress, vocaliser mood, and listener mood, are presented in Study 1. Perceptions of the arousal of the sound, listener motivation to respond, and valence (positive, negative) are presented in Study 2. Perceptions of the emotional content of the stimuli in both Study 1 and Study 2 were consistent with the predefined categories (e.g., laugh stimuli perceived as positive). While the adult vocalisations received more extreme valence ratings, rated motivation to respond to the sounds was highest for the infant sounds. The major advantages of this database are the inclusion of vocalisations from naturalistic situations, which represent genuine expressions of emotion, and the inclusion of vocalisations from animals and infants, providing comparison stimuli for use in cross-species and developmental studies. The associated website provides a detailed description of the physical properties of each sound stimulus along with cross-category descriptions.

  2. Snap your fingers! An ERP/sLORETA study investigating implicit processing of self- vs. other-related movement sounds using the passive oddball paradigm

    Directory of Open Access Journals (Sweden)

    Christoph Justen

    2016-10-01

    Full Text Available So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatiotemporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as an additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones.
    Taken together, the current results indicate (1) a specific role of motor regions such as the SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3

  3. Snap Your Fingers! An ERP/sLORETA Study Investigating Implicit Processing of Self- vs. Other-Related Movement Sounds Using the Passive Oddball Paradigm

    Science.gov (United States)

    Justen, Christoph; Herbert, Cornelia

    2016-01-01

    So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as an additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones. Taken together, the current results indicate (1) a specific role of motor regions such as the SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3

  4. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  5. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.
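    The standard measurement technique such chapters cover is the two-microphone (p–p) probe: particle velocity is estimated from the finite-difference pressure gradient via Euler's equation, and active intensity is the time average of pressure times velocity. A minimal sketch for a plane wave, where the sample rate, microphone spacing and tone frequency are illustrative choices:

```python
import numpy as np

rho, c = 1.21, 343.0                 # air density [kg/m^3], speed of sound [m/s]
f, fs, dr = 200.0, 48000.0, 0.012    # tone [Hz], sample rate [Hz], mic spacing [m]

t = np.arange(int(fs)) / fs          # 1 s, an integer number of periods
k = 2 * np.pi * f / c                # wavenumber

# Pressures at the two microphones for a plane wave travelling in +x.
p1 = np.sin(2 * np.pi * f * t)
p2 = np.sin(2 * np.pi * f * t - k * dr)

# Euler's equation: rho * dv/dt = -dp/dx. Integrate the finite-difference
# pressure gradient over time to estimate particle velocity at the midpoint.
grad = (p2 - p1) / dr
v = -np.cumsum(grad) / (rho * fs)

# Active intensity = time average of p * v at the midpoint.
p_mid = 0.5 * (p1 + p2)
I = np.mean(p_mid * v)

# For a plane wave of unit amplitude, theory gives I = 1 / (2 * rho * c).
I_theory = 0.5 / (rho * c)
print(I, I_theory)
```

The finite-difference bias grows with `k * dr`, which is why real p–p probes specify a valid frequency range for a given spacer length.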

  6. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  7. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  8. Sound of mind : electrophysiological and behavioural evidence for the role of context, variation and informativity in human speech processing

    NARCIS (Netherlands)

    Nixon, Jessie Sophia

    2014-01-01

    Spoken communication involves transmission of a message which takes physical form in acoustic waves. Within any given language, acoustic cues pattern in language-specific ways along language-specific acoustic dimensions to create speech sound contrasts. These cues are utilized by listeners to

  9. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970s, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field, embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  10. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice.

  11. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  12. The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences

    Science.gov (United States)

    Faronii-Butler, Kishasha O.

    2013-01-01

    This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…

  13. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in Copenhagen, Malmö, Helsingborg and Lund, together with European examples of best practice...

  14. Second Sound

    Indian Academy of Sciences (India)

    Second Sound – The Role of Elastic Waves. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 6, June 1999, pp 15-19. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/06/0015-0019

  15. PREFACE: Aerodynamic sound Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates, as is well known, from Lighthill's two papers of 1952 and 1954. I have heard that Lighthill was motivated to write the papers by the noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is thus bound up with environmental problems, and the theory should always be applied to newly emerging public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized it; most of the Japanese authors in this issue are members of that symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium, and I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute them. We have a review paper by T Suzuki on the study of jet noise, which continues to be important today and is expected to reform the theoretical model of the generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable approach in today's fluid dynamics research; they apply hydrodynamics to the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on a major, longstanding sound problem. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the…

  16. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system...... time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which...... auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream....

  17. Sound Visualisation

    OpenAIRE

    Dolenc, Peter

    2013-01-01

    This thesis describes the construction of a subwoofer case with the added capability of producing special visual effects and displaying visualizations that match the currently playing sound. For this purpose, multiple lighting elements made of LEDs (Light Emitting Diodes) were installed on the subwoofer case. The lighting elements are controlled by dedicated software that was also developed. The software runs on an STM32F4-Discovery evaluation board inside a ...

  18. Sound knowledge

    DEFF Research Database (Denmark)

    Kauffmann, Lene Teglhus

    as knowledge based on reflexive practices. I chose ‘health promotion’ as the field for my research as it utilises knowledge produced in several research disciplines, among these both quantitative and qualitative. I mapped out the institutions, actors, events, and documents that constituted the field of health...... of the research is to investigate what is considered to ‘work as evidence’ in health promotion and how the ‘evidence discourse’ influences social practices in policymaking and in research. From investigating knowledge practices in the field of health promotion, I develop the concept of sound knowledge...... result of a rigorous and standardized research method. However, this anthropological analysis shows that evidence and evidence-based is a hegemonic ‘way of knowing’ that sometimes transposes everyday reasoning into an epistemological form. However, the empirical material shows a variety of understandings...

  19. From sounds to words: a neurocomputational model of adaptation, inhibition and memory processes in auditory change detection.

    Science.gov (United States)

    Garagnani, Max; Pulvermüller, Friedemann

    2011-01-01

    Most animals detect sudden changes in trains of repeated stimuli but only some can learn a wide range of sensory patterns and recognise them later, a skill crucial for the evolutionary success of higher mammals. Here we use a neural model mimicking the cortical anatomy of sensory and motor areas and their connections to explain brain activity indexing auditory change and memory access. Our simulations indicate that while neuronal adaptation and local inhibition of cortical activity can explain aspects of change detection as observed when a repeated unfamiliar sound changes in frequency, the brain dynamics elicited by auditory stimulation with well-known patterns (such as meaningful words) cannot be accounted for on the basis of adaptation and inhibition alone. Specifically, we show that the stronger brain responses observed to familiar stimuli in passive oddball tasks are best explained in terms of activation of memory circuits that emerged in the cortex during the learning of these stimuli. Such memory circuits, and the activation enhancement they entail, are absent for unfamiliar stimuli. The model illustrates how basic neurobiological mechanisms, including neuronal adaptation, lateral inhibition, and Hebbian learning, underlie neuronal assembly formation and dynamics, and differentially contribute to the brain's major change detection response, the mismatch negativity. Copyright © 2010 Elsevier Inc. All rights reserved.
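The contrast the model draws between adaptation-driven response decrement and memory-circuit enhancement can be caricatured in a few lines of Python. This is a toy sketch, not the authors' model: the weight values, adaptation rate, and recovery constant are invented for illustration only.

```python
# Toy sketch: adaptation alone shrinks responses to a repeated stimulus,
# while a Hebbian-strengthened "memory circuit" (modelled as a larger
# synaptic weight) yields a stronger response to a familiar stimulus.

def response(weight, adaptation):
    """Response of a unit: synaptic weight scaled by its adaptation state."""
    return weight * adaptation

def simulate(n_repetitions, weight, adaptation_rate=0.3, recovery=0.1):
    """Present the same stimulus repeatedly; adaptation accumulates each time."""
    adaptation = 1.0
    responses = []
    for _ in range(n_repetitions):
        responses.append(response(weight, adaptation))
        # decay toward an adapted state, with partial recovery between trials
        adaptation = adaptation * (1 - adaptation_rate) + recovery * (1 - adaptation)
    return responses

unfamiliar = simulate(5, weight=1.0)   # no learned circuit: baseline weight
familiar = simulate(5, weight=1.8)     # Hebbian learning has strengthened weights
```

Plotting the two response trains shows both decreasing with repetition (adaptation), with the familiar stimulus starting and staying higher (memory-circuit enhancement), qualitatively matching the larger mismatch responses to known words.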

  20. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  1. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  2. Statistical Signal Processing by Using the Higher-Order Correlation between Sound and Vibration and Its Application to Fault Detection of Rotational Machine

    Directory of Open Access Journals (Sweden)

    Hisako Masuike

    2008-01-01

    Full Text Available In this study, a stochastic diagnosis method based on changes in not only the linear correlation but also higher-order nonlinear correlations is proposed, in a form suitable for online signal processing in the time domain on a personal computer, in order to uncover in detail the mutual relationship between the sound and vibration emitted by rotational machines. More specifically, a conditional probability hierarchically reflecting various types of correlation information is theoretically derived by introducing an expression of the multidimensional probability distribution in orthogonal expansion series form. The effectiveness of the proposed theory is experimentally confirmed by applying it to observed data emitted from a rotational machine driven by an electric motor.
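The core quantity, a correlation between sound and vibration taken beyond the linear term, can be sketched as a normalized joint moment. This is a minimal illustration of the general idea, not the paper's orthogonal-expansion formulation; the function name and the two example series are invented.

```python
import math

def moment_correlation(x, y, p, q):
    """Normalized higher-order cross-correlation
    E[(x - mx)^p (y - my)^q] / (sx^p * sy^q).
    p = q = 1 gives the ordinary linear correlation coefficient;
    larger p, q probe nonlinear coupling between the two signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    num = sum((a - mx) ** p * (b - my) ** q for a, b in zip(x, y))
    return num / (n * sx ** p * sy ** q)

# illustrative series: vibration tracking sound with a constant offset
sound = [0.1, 0.9, -0.4, 0.5, -0.8, 0.3, -0.2, 0.6]
vibration = [v + 0.05 for v in sound]

r11 = moment_correlation(sound, vibration, 1, 1)  # linear correlation
r22 = moment_correlation(sound, vibration, 2, 2)  # a higher-order correlation
```

Tracking how the higher-order coefficients drift over time, relative to the linear one, is the kind of changing information the diagnosis method exploits for fault detection.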

  3. Can We Use Creativity to Improve Generic Skills in Our Higher Education Students? A Proposal Based on Non-Verbal Communication and Creative Movement

    Science.gov (United States)

    Rodriquez, Rosa Maria; Castilla, Guillermo

    2013-01-01

    Traditionally, general skills and personal growth have been developed through cognitive processes within academic contexts. Development based on experience may be an alternative route to achieve cognitive knowledge. Enact-learning is based on the biunivocal relationship between knowledge and action. Action is movement. Participants interact with…

  4. Sound pressure level tools design used in occupational health by means of Labview software

    Directory of Open Access Journals (Sweden)

    Farhad Forouharmajd

    2015-01-01

    Conclusion: LabVIEW's sound-related programming capabilities include the measurement of sound, frequency analysis, and sound control; in effect, the software acts as a sound level meter and sound analyzer. Given these features, we can use this software to analyze and process sound and vibration as a monitoring system.
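The central computation of a software sound level meter, whether built in LabVIEW or elsewhere, is the RMS pressure expressed in decibels relative to 20 µPa. A minimal Python sketch (sample values are synthetic; no weighting filter is applied):

```python
import math

REF_PRESSURE = 20e-6  # 20 micropascals, the standard SPL reference in air

def spl_db(pressure_samples):
    """Unweighted sound pressure level in dB re 20 uPa, from calibrated
    pressure samples: the core of what a sound level meter computes."""
    n = len(pressure_samples)
    rms = math.sqrt(sum(p * p for p in pressure_samples) / n)
    return 20 * math.log10(rms / REF_PRESSURE)

# a sine wave with 1 Pa RMS corresponds to ~94 dB SPL, a common calibrator level
fs = 48000
samples = [math.sqrt(2) * math.sin(2 * math.pi * 1000 * t / fs) for t in range(fs)]
level = spl_db(samples)  # ~94 dB
```

A real meter adds frequency weighting (A or C) and time weighting (fast/slow) in front of this RMS stage.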

  5. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper, a method for imaging wideband audio sources is proposed, based on 2D microphone array measurements of the sound field taken simultaneously at all microphones. The designed microphone array consists of 160 microphones and digitizes signals at a frequency of 7200 Hz. The measured signals are processed with a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...
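The abstract does not specify the imaging algorithm, so as a hedged illustration of the general approach, here is a delay-and-sum beamforming sketch for a small linear array: steering the array at a candidate position aligns the channels when that position matches the true source, so the summed power peaks there. All geometry and signal values below are invented.

```python
import math

def delay_and_sum_power(signals, mic_x, focus_x, focus_dist, c=343.0, fs=7200):
    """Delay each channel by its propagation time from a candidate focus point,
    sum the channels, and return the output power. Scanning the focus over a
    grid of candidate positions yields an image of the sound sources."""
    delays = [math.hypot(focus_x - x, focus_dist) / c for x in mic_x]
    ref = min(delays)
    out = [0.0] * len(signals[0])
    for sig, d in zip(signals, delays):
        shift = round((d - ref) * fs)          # integer-sample approximation
        for i in range(len(out) - shift):
            out[i] += sig[i + shift]
    return sum(v * v for v in out)

# Simulate an impulsive source at (0, 1) m recorded by a five-microphone line array.
mic_x = [-0.6, -0.3, 0.0, 0.3, 0.6]
fs, c, n = 7200, 343.0, 64
signals = []
for x in mic_x:
    arrival = 10 + round(math.hypot(0.0 - x, 1.0) / c * fs)
    sig = [0.0] * n
    for k in (-1, 0, 1):                       # three-sample pulse absorbs rounding
        sig[arrival + k] = 1.0
    signals.append(sig)

p_on_source = delay_and_sum_power(signals, mic_x, 0.0, 1.0)   # steered at the source
p_off_source = delay_and_sum_power(signals, mic_x, 0.6, 1.0)  # mis-steered
```

Because the method works on broadband time-domain delays rather than on a single frequency, it is naturally suited to wideband sources like those imaged in the paper.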

  6. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    From astronomy to biomedical sciences: music and sound as tools for scientific investigation Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we all can become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to modern data sonification applications in medicine, sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  7. The effect of the position of atypical character-to-sound correspondences on reading kanji words aloud: Evidence for a sublexical serially operating kanji reading process.

    Science.gov (United States)

    Sambai, Ami; Coltheart, Max; Uno, Akira

    2018-04-01

    In English, the size of the regularity effect on word reading-aloud latency decreases across position of irregularity. This has been explained by a sublexical serially operating reading mechanism. It is unclear whether sublexical serial processing occurs in reading two-character kanji words aloud. To investigate this issue, we studied how the position of atypical character-to-sound correspondences influenced reading performance. When participants read inconsistent-atypical words aloud mixed randomly with nonwords, reading latencies of words with an inconsistent-atypical correspondence in the initial position were significantly longer than words with an inconsistent-atypical correspondence in the second position. The significant difference of reading latencies for inconsistent-atypical words disappeared when inconsistent-atypical words were presented without nonwords. Moreover, reading latencies for words with an inconsistent-atypical correspondence in the first position were shorter than for words with a typical correspondence in the first position. This typicality effect was absent when the atypicality was in the second position. These position-of-atypicality effects suggest that sublexical processing of kanji occurs serially and that the phonology of two-character kanji words is generated from both a lexical parallel process and a sublexical serial process.

  8. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  9. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  10. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  11. Saudi normative data for the Wisconsin Card Sorting test, Stroop test, Test of Non-verbal Intelligence-3, Picture Completion and Vocabulary (subtest of the Wechsler Adult Intelligence Scale-Revised).

    Science.gov (United States)

    Al-Ghatani, Ali M; Obonsawin, Marc C; Binshaig, Basmah A; Al-Moutaery, Khalaf R

    2011-01-01

    There are 2 aims for this study: first, to collect normative data for the Wisconsin Card Sorting Test (WCST), Stroop test, Test of Non-verbal Intelligence (TONI-3), Picture Completion (PC) and Vocabulary (VOC) sub-test of the Wechsler Adult Intelligence Scale-Revised for use in a Saudi Arabian culture, and second, to use the normative data provided to generate the regression equations. To collect the normative data and generate the regression equations, 198 healthy individuals were selected to provide a representative distribution for age, gender, years of education, and socioeconomic class. The WCST, Stroop test, TONI-3, PC, and VOC were administered to the healthy individuals. This study was carried out at the Department of Clinical Neurosciences, Riyadh Military Hospital, Riyadh, Kingdom of Saudi Arabia from January 2000 to July 2002. Normative data were obtained for all tests, and tables were constructed to interpret scores for different age groups. Regression equations to predict performance on the 3 tests of frontal function from scores on tests of fluid (TONI-3) and premorbid intelligence were generated from the data from the healthy individuals. The data collected in this study provide normative tables for 3 tests of frontal lobe function and for tests of general intellectual ability for use in Saudi Arabia. The data also provide a method to estimate pre-injury ability without the use of verbally based tests.
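The normative regression equations the study generates have the familiar least-squares form: a frontal-function score predicted from an intelligence score. A minimal sketch of fitting and applying one such equation; the score pairs below are hypothetical, not the study's data.

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x: the general form of a
    normative regression equation predicting one test score from another."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# hypothetical (fluid-intelligence score, frontal-test score) pairs
toni = [85, 95, 100, 105, 115]
wcst = [31, 35, 37, 39, 43]

a, b = fit_simple_regression(toni, wcst)
predicted = a + b * 100     # predicted frontal-test score for a TONI-3 score of 100
```

In clinical use, comparing a patient's observed frontal-test score against such a prediction flags a discrepancy larger than expected from general ability alone.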

  12. Little Sounds

    Directory of Open Access Journals (Sweden)

    Baker M. Bani-Khair

    2017-10-01

    Full Text Available The Spider and the Fly   You little spider, To death you aspire... Or seeking a web wider, To death all walking, No escape you all fighters… Weak and fragile in shape and might, Whatever you see in the horizon, That is destiny whatever sight. And tomorrow the spring comes, And the flowers bloom, And the grasshopper leaps high, And the frogs happily cry, And the flies smile nearby, To that end, The spider has a plot, To catch the flies by his net, A mosquito has fallen down in his net, Begging him to set her free, Out of that prison, To her freedom she aspires, Begging...Imploring...crying,  That is all what she requires, But the spider vows never let her free, His power he admires, Turning blind to light, And with his teeth he shall bite, Leaving her in desperate might, Unable to move from site to site, Tied up with strings in white, Wrapped up like a dead man, Waiting for his grave at night,   The mosquito says, Oh little spider, A stronger you are than me in power, But listen to my words before death hour, Today is mine and tomorrow is yours, No escape from death... Whatever the color of your flower…     Little sounds The Ant The ant is a little creature with a ferocious soul, Looking and looking for more and more, You can simply crush it like dead mold, Or you can simply leave it alone, I wonder how strong and strong they are! Working day and night in a small hole, Their motto is work or whatever you call… A big boon they have and joy in fall, Because they found what they store, A lesson to learn and memorize all in all, Work is something that you should not ignore!   The butterfly: I’m the butterfly Beautiful like a blue clear sky, Or sometimes look like snow, Different in colors, shapes and might, But something to know that we always die, So fragile, weak and thin, Lighter than a glimpse and delicate as light, Something to know for sure… Whatever you have in life and all these fields, You are not happier than a butterfly

  13. Evaluation of Speech Recognition of Cochlear Implant Recipients Using Adaptive, Digital Remote Microphone Technology and a Speech Enhancement Sound Processing Algorithm.

    Science.gov (United States)

    Wolfe, Jace; Morais, Mila; Schafer, Erin; Agrawal, Smita; Koch, Dawn

    2015-05-01

    Cochlear implant recipients often experience difficulty with understanding speech in the presence of noise. Cochlear implant manufacturers have developed sound processing algorithms designed to improve speech recognition in noise, and research has shown these technologies to be effective. Remote microphone technology utilizing adaptive, digital wireless radio transmission has also been shown to provide significant improvement in speech recognition in noise. There are no studies examining the potential improvement in speech recognition in noise when these two technologies are used simultaneously. The goal of this study was to evaluate the potential benefits and limitations associated with the simultaneous use of a sound processing algorithm designed to improve performance in noise (Advanced Bionics ClearVoice) and a remote microphone system that incorporates adaptive, digital wireless radio transmission (Phonak Roger). A two-by-two repeated-measures design was used to examine performance differences obtained without these technologies compared to the use of each technology separately as well as the simultaneous use of both technologies. Eleven Advanced Bionics (AB) cochlear implant recipients, aged 11 to 68 years. AzBio sentence recognition was measured in quiet and in the presence of classroom noise ranging in level from 50 to 80 dBA in 5-dB steps. Performance was evaluated in four conditions: (1) No ClearVoice and no Roger, (2) ClearVoice enabled without the use of Roger, (3) ClearVoice disabled with Roger enabled, and (4) simultaneous use of ClearVoice and Roger. Speech recognition in quiet was better than speech recognition in noise for all conditions. Use of ClearVoice and Roger each provided significant improvement in speech recognition in noise. The best performance in noise was obtained with the simultaneous use of ClearVoice and Roger. ClearVoice and Roger technology each improves speech recognition in noise, particularly when used at the same time.

  14. JINGLE: THE SOUNDING SYMBOL

    Directory of Open Access Journals (Sweden)

    Bysko Maxim V.

    2013-12-01

    Full Text Available The article considers the role of jingles in the industrial era, from the beginnings of regular radio broadcasting, sound film and television up to modern video games, audio and video podcasts, online broadcasts, and mobile communications. Jingles are examined from the point of view of the theory of symbols: a forward motion is traced in the development of jingles from social symbols (radio callsigns) to individual signs-images (ringtones). The role of technical progress in shaping jingles as important cultural audio elements of modern digital civilization is highlighted.

  15. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
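The spectral degradation used here discards fine structure and keeps only coarse band envelopes. As a hedged illustration of the analysis half of such processing, here is a crude DFT-based four-band energy split; the frame size, equal-width band edges, and test tone are illustrative choices, not the study's exact vocoder.

```python
import math

def band_energies(frame, n_channels=4):
    """Split one frame's spectrum into n_channels equal-width bands and return
    the summed DFT magnitude in each. A channel vocoder replaces the fine
    structure inside each band with a noise carrier scaled by this energy."""
    n = len(frame)
    mags = []
    for k in range(n // 2):                           # positive-frequency bins
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(frame))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(frame))
        mags.append(math.hypot(re, im))
    width = (n // 2) // n_channels
    return [sum(mags[c * width:(c + 1) * width]) for c in range(n_channels)]

# a pure tone at DFT bin 10 of a 128-sample frame lands entirely in channel 0
frame = [math.sin(2 * math.pi * 10 * i / 128) for i in range(128)]
energies = band_energies(frame)
```

With only four such envelopes per frame, very different sounds can produce similar band-energy patterns, which is why vocoded environmental sounds are hard to identify without training.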

  16. Categorizing Sounds

    Science.gov (United States)

    1989-12-01


  17. Imitation Therapy for Non-Verbal Toddlers

    Science.gov (United States)

    Gill, Cindy; Mehta, Jyutika; Fredenburg, Karen; Bartlett, Karen

    2011-01-01

    When imitation skills are not present in young children, speech and language skills typically fail to emerge. There is little information on practices that foster the emergence of imitation skills in general and verbal imitation skills in particular. The present study attempted to add to our limited evidence base regarding accelerating the…

  18. Spontaneous Non-verbal Counting in Toddlers

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2016-01-01

    A wealth of studies have investigated numerical abilities in infants and in children aged 3 or above, but research on pre-counting toddlers is sparse. Here we devised a novel version of an imitation task that was previously used to assess spontaneous focusing on numerosity (i.e. the predisposition to grasp numerical properties of the environment)…

  19. Non-verbal communication through sensor fusion

    Science.gov (United States)

    Tairych, Andreas; Xu, Daniel; O'Brien, Benjamin M.; Anderson, Iain A.

    2016-04-01

    When we communicate face to face, we subconsciously engage our whole body to convey our message. In telecommunication, e.g. during phone calls, this powerful information channel cannot be used. Capturing nonverbal information from body motion and transmitting it to the receiver parallel to speech would make these conversations feel much more natural. This requires a sensing device that is capable of capturing different types of movements, such as the flexion and extension of joints, and the rotation of limbs. In a first embodiment, we developed a sensing glove that is used to control a computer game. Capacitive dielectric elastomer (DE) sensors measure finger positions, and an inertial measurement unit (IMU) detects hand roll. These two sensor technologies complement each other, with the IMU allowing the player to move an avatar through a three-dimensional maze, and the DE sensors detecting finger flexion to fire weapons or open doors. After demonstrating the potential of sensor fusion in human-computer interaction, we take this concept to the next level and apply it in nonverbal communication between humans. The current fingerspelling glove prototype uses capacitive DE sensors to detect finger gestures performed by the sending person. These gestures are mapped to corresponding messages and transmitted wirelessly to another person. A concept for integrating an IMU into this system is presented. The fusion of the DE sensor and the IMU combines the strengths of both sensor types, and therefore enables very comprehensive body motion sensing, which makes a large repertoire of gestures available to nonverbal communication over distances.
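The complementary roles of the two sensor types can be sketched as a simple fusion rule: the capacitive DE sensors resolve finger shape, the IMU resolves hand orientation, and only together do they separate gestures that share a finger shape. The gesture table, thresholds, and function name below are hypothetical, not the paper's mapping.

```python
def classify_gesture(finger_flexion, roll_deg):
    """Toy sensor-fusion rule: DE sensors give per-finger flexion in [0, 1]
    (thumb first); the IMU gives hand roll in degrees. Flexed fingers select
    the base gesture, roll disambiguates orientation-dependent variants."""
    flexed = tuple(f > 0.5 for f in finger_flexion)
    gestures = {
        (True, True, True, True, True): "fist",
        (False, False, False, False, False): "open hand",
        (False, True, True, True, True): "thumbs-up",
    }
    base = gestures.get(flexed, "unknown")
    if base == "thumbs-up" and abs(roll_deg) > 90:
        return "thumbs-down"       # same finger shape, hand rotated over
    return base
```

The thumbs-up/thumbs-down pair makes the point of the fusion: flexion sensing alone cannot tell them apart, and orientation sensing alone cannot tell either from a fist.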

  20. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Full Text Available Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  1. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials; afterwards, they are analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphic user interface. The analyses of the recorded data reveal no significant differences, either with respect to the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not change significantly.
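Rendering a sound at a virtual position over headphones, as done here, amounts to convolving the mono source with the left- and right-ear head-related impulse responses (HRIRs) for that direction. A minimal sketch with a Delta sound (unit impulse) and invented toy HRIR values; real HRIRs are measured, as in the study.

```python
def convolve(x, h):
    """Direct-form discrete convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            y[i + j] += xv * hv
    return y

# toy HRIR pair for a source to the listener's right (hypothetical values):
# the left ear receives the sound later and quieter than the right ear
hrir_right = [0.0, 1.0, 0.3]
hrir_left = [0.0, 0.0, 0.0, 0.6, 0.2]   # extra leading zeros = interaural delay

click = [1.0]                           # Delta sound: a unit impulse
left = convolve(click, hrir_left)
right = convolve(click, hrir_right)
```

The interaural time and level differences visible between the two outputs are the cues listeners used to localize the wood, bongo, and Delta sounds.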

  2. Stimulus Characteristics Affect Humor Processing in Individuals with Asperger Syndrome

    Science.gov (United States)

    Samson, Andrea C.; Hegenloh, Michael

    2010-01-01

    The present paper aims to investigate whether individuals with Asperger syndrome (AS) show global humor processing deficits or whether humor comprehension and appreciation depends on stimulus characteristics. Non-verbal visual puns, semantic and Theory of Mind cartoons were rated on comprehension, funniness and the punchlines were explained. AS…

  3. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear can ...

  4. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related...... to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy...... of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences....

  5. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  6. An Antropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about the anthropology of sound, sound studies, musical canons and ideology.

  7. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

properties can be modified by sound absorption, refraction, and interference from multiple paths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms to influence communication sounds for airborne acoustics, and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  8. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  9. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

Electromagnetic Sounding of the Earth's Interior, 2nd edition, provides a comprehensive, up-to-date collection of contributions covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments, such as the concept of self-consistent tasks of geophysics and 3-D interpretation of TEM sounding, which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand-new chapters on pulse and frequency electromagnetic sounding for offshore hydrocarbon exploration. Additionally, all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  10. Neuroanatomic organization of sound memory in humans.

    Science.gov (United States)

    Kraut, Michael A; Pitcock, Jeffery A; Calhoun, Vince; Li, Juan; Freeman, Thomas; Hart, John

    2006-11-01

The neural interface between sensory perception and memory is a central issue in neuroscience, particularly initial memory organization following perceptual analyses. We used functional magnetic resonance imaging to identify anatomic regions extracting initial auditory semantic memory information related to environmental sounds. Two distinct anatomic foci were detected in the right superior temporal gyrus when subjects identified sounds representing either animals or threatening items. Threatening animal stimuli elicited signal changes in both foci, suggesting a distributed neural representation. Our results demonstrate both category- and feature-specific responses to nonverbal sounds in early stages of extracting semantic memory information from these sounds. This organization allows for these category-feature detection nodes to extract early, semantic memory information for efficient processing of transient sound stimuli. Neural regions selective for threatening sounds are similar to those of nonhuman primates, demonstrating that semantic memory organization for basic biological/survival primitives is present across species.

  11. Vibrotactile Detection, Identification and Directional Perception of signal-Processed Sounds from Environmental Events: A Pilot Field Evaluation in Five Cases

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

Objectives: To conduct field tests of a vibrotactile aid for deaf/deafblind persons for the detection, identification and directional perception of environmental sounds. Methods: Five deaf individuals (3F/2M, 22–36 years) tested the aid separately in a home environment (kitchen) and in a traffic environment. Their eyes were blindfolded; they wore a headband and held a vibrator for sound identification. Three microphones were mounted in the headband, along with two vibrators for signalling the direction of the sound source. The sounds originated from events typical of the home environment and traffic. The subjects were tested both inexperienced (events unknown) and experienced (events known). They identified the events in the home and traffic environments, but perceived sound source direction only in traffic. Results: Detection scores were higher than 98% in both the home and the traffic environment. In the home environment, identification scores varied between 25% and 58% when the subjects were inexperienced and between 33% and 83% when they were experienced. In traffic, identification scores varied between 20% and 40% when inexperienced and between 22% and 56% when experienced. Directional perception scores varied between 30% and 60% when inexperienced and between 61% and 83% when experienced. Discussion: The vibrotactile aid consistently improved all participants' detection, identification and directional perception ability.

  12. Sound Stories for General Music

    Science.gov (United States)

    Cardany, Audrey Berger

    2013-01-01

    Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…

  13. Visualization of Broadband Sound Sources

    Directory of Open Access Journals (Sweden)

    Sukhanov Dmitry

    2016-01-01

In this paper a method for imaging wideband audio sources is proposed, based on 2D microphone-array measurements of the sound field taken simultaneously at all microphones. The designed microphone array consists of 160 microphones and digitizes signals at a frequency of 7200 Hz. The measured signals are processed with a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization does not depend on the waveform but is determined by the bandwidth. The developed system can visualize sources with a resolution of up to 10 cm.
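The record above describes the array-processing idea only at a high level. As a hedged illustration of the general principle behind acoustic imaging (not the authors' algorithm, whose details are not given), a minimal far-field delay-and-sum beamformer shows how compensating per-microphone delays concentrates power from the steered direction; the signals and delays below are invented toy data.

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Steer an array toward one candidate direction by undoing each
    channel's integer-sample delay, then averaging the channels.
    signals: (n_mics, n_samples); delays_samples: one delay per mic."""
    n_mics, _ = signals.shape
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -int(d))  # align this channel with the others
    return out / n_mics

# Toy scene: one white-noise source arriving with known per-channel delays
rng = np.random.default_rng(0)
src = rng.standard_normal(1000)
true_delays = [0, 3, 7]
mics = np.stack([np.roll(src, d) for d in true_delays])

aligned = delay_and_sum(mics, true_delays)   # steered at the source
misaligned = delay_and_sum(mics, [0, 0, 0])  # steered elsewhere
print(np.mean(aligned**2) > np.mean(misaligned**2))  # → True
```

A real imaging system would scan many candidate directions, use fractional delays, and map the output power over a 2D grid to form the source image.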

  14. Social construction as process: some new possibilities for research and development

    NARCIS (Netherlands)

    Hosking, D.M.

    1999-01-01

Here we outline one variant of social constructionism - one that emphasises social ontologies as constructed in ongoing co-ordinations of act and supplement. We stress that such processes may be constructed both in written and spoken words, in non-verbal actions and artefacts. Relational processes

  15. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  16. Sounds like Team Spirit

    Science.gov (United States)

    Hoffman, Edward

    2002-01-01

    trying to improve on what they've done before. Second, success in any endeavor stems from people who know how to interpret a composition to sound beautiful when played in a different style. For Knowledge Sharing to work, it must be adapted, reinterpreted, shaped and played with at the centers. In this regard, we've been blessed with another crazy, passionate, inspired artist named Claire Smith. Claire has turned Ames Research Center in California into APPL-west. She is so good and committed to what she does that I just refer people to her whenever they have questions about implementing project management development at the field level. Finally, any great effort requires talented people working behind the scenes, the people who formulate a business approach and know how to manage the money so that the music gets heard. I have known many brilliant and creative people with a ton of ideas that never take off due to an inability to work the business. Again, the Knowledge Sharing team has been fortunate to have competent and passionate people, specifically Tony Maturo and his procurement team at Goddard Space Flight Center, to make sure the process is in place to support the effort. This kind of support is every bit as crucial as the activity itself, and the efforts and creativity that go into successful procurement and contracting is a vital ingredient of this successful team.

  17. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  18. Detecting change in stochastic sound sequences.

    Directory of Open Access Journals (Sweden)

    Benjamin Skerritt-Davis

    2018-05-01

Our ability to parse our acoustic environment relies on the brain's capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain's sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has previously been shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain's sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects' behavior indicates an important role of perceptual constraints in listeners' ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain's ability to process stochastic sound sequences.
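As a toy illustration of the behavioral task described above (and emphatically not the paper's Bayesian inference model), the sketch below flags a change in a tone sequence when a sample becomes surprising under running estimates of the recent mean and variance; the sequence, window size, and threshold are all invented:

```python
import numpy as np

def detect_change(seq, window=20, z_thresh=5.0):
    """Return the first index whose value is surprising under a running
    Gaussian estimate of the last `window` samples (illustrative stand-in
    for a proper Bayesian change-point model)."""
    for t in range(window, len(seq)):
        mu = seq[t - window:t].mean()
        sd = seq[t - window:t].std() + 1e-9  # guard against zero variance
        if abs(seq[t] - mu) / sd > z_thresh:
            return t
    return None

# Toy pitch track: a stable regime with small wobble, then an abrupt jump
pitch = np.concatenate([440 + np.sin(np.arange(100)),
                        880 + np.sin(np.arange(100))])
print(detect_change(pitch))  # → 100
```

The deterministic wobble keeps pre-change z-scores small, so the first post-change sample (index 100) is the one flagged.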

  19. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity ... and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24 ...

  20. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  1. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  2. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  3. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

Our world is largely defined by what we see and hear - but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  4. Transformation of second sound into surface waves in superfluid helium

    International Nuclear Information System (INIS)

    Khalatnikov, I.M.; Kolmakov, G.V.; Pokrovsky, V.L.

    1995-01-01

    The Hamiltonian theory of superfluid liquid with a free boundary is developed. Nonlinear amplitudes of parametric Cherenkov radiation of a surface wave by second sound and the inner decay of second sound waves are found. Threshold amplitudes of second sound waves for these two processes are determined. 4 refs

  5. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  6. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

An effective strategy and framework that adequately integrate the automated and manual processes for fast cartographic sounding selection are presented. The important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the soundings distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection has been developed and applied in the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and future work are given.
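The abstract does not spell out the "influence circle" algorithm, but its common textbook form can be sketched: soundings are processed shallowest-first (shallow depths are safety-critical on nautical charts), and a sounding is kept only if it lies outside the influence circles of those already selected. Everything below, including the radius and the sample points, is an illustrative assumption, not the paper's improved algorithm.

```python
import math

def select_soundings(soundings, radius):
    """Greedy 'influence circle' selection sketch.
    soundings: list of (x, y, depth) tuples; shallower depths get priority.
    A candidate is kept only if no already-kept sounding is within `radius`."""
    kept = []
    for x, y, d in sorted(soundings, key=lambda s: s[2]):  # shallowest first
        if all(math.hypot(x - kx, y - ky) >= radius for kx, ky, _ in kept):
            kept.append((x, y, d))
    return kept

pts = [(0, 0, 5.0), (1, 0, 9.0), (10, 0, 7.0), (10.5, 0, 6.5)]
print(select_soundings(pts, radius=2.0))
# → [(0, 0, 5.0), (10.5, 0, 6.5)]: the deeper soundings at (1,0) and (10,0)
#   fall inside the influence circles of shallower kept soundings
```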

  7. Physiological phenotyping of dementias using emotional sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-06-01

    Emotional behavioral disturbances are hallmarks of many dementias but their pathophysiology is poorly understood. Here we addressed this issue using the paradigm of emotionally salient sounds. Pupil responses and affective valence ratings for nonverbal sounds of varying emotional salience were assessed in patients with behavioral variant frontotemporal dementia (bvFTD) (n = 14), semantic dementia (SD) (n = 10), progressive nonfluent aphasia (PNFA) (n = 12), and AD (n = 10) versus healthy age-matched individuals (n = 26). Referenced to healthy individuals, overall autonomic reactivity to sound was normal in Alzheimer's disease (AD) but reduced in other syndromes. Patients with bvFTD, SD, and AD showed altered coupling between pupillary and affective behavioral responses to emotionally salient sounds. Emotional sounds are a useful model system for analyzing how dementias affect the processing of salient environmental signals, with implications for defining pathophysiological mechanisms and novel biomarker development.

  8. New contexts, new processes, new strategies: the co-construction of meaning in plurilingual interactions

    Directory of Open Access Journals (Sweden)

    Filomena Capucho

    2016-11-01

    In this paper, we will present the analysis of an extract from the Bucharest-Cinco corpus that will allow us to identify the strategies developed in the process of co-construction of meaning in multilingual contexts through a close examination of verbal and non-verbal features.

  9. Attuning: A Communication Process between People with Severe and Profound Intellectual Disability and Their Interaction Partners

    Science.gov (United States)

    Griffiths, Colin; Smith, Martine

    2016-01-01

    Background: People with severe and profound intellectual disability typically demonstrate a limited ability to communicate effectively. Most of their communications are non-verbal, often idiosyncratic and ambiguous. This article aims to identify the process that regulates communications of this group of people with others and to describe the…

  10. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  11. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

Heart sound is one of the most important physiological signals, but the process of acquiring it can be disturbed by many external factors. The heart sound is a weak electric signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. Removing the noise mixed with the heart sound is therefore essential. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The noisy heart sound signal is first transformed into the wavelet domain by wavelet transform and decomposed over multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, so that denoising is significantly improved. The reconstructed signal is obtained by stepwise coefficient reconstruction from the processed detail coefficients. Finally, 50 Hz power-frequency and 35 Hz electromechanical interference signals are eliminated using a notch filter.
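The pipeline described above (wavelet decomposition, soft thresholding of detail coefficients, reconstruction) can be sketched outside MATLAB. This minimal Python/NumPy version uses a hand-rolled Haar wavelet rather than the paper's unspecified wavelet, a toy signal in place of a real heart sound, and omits the notch-filter step:

```python
import numpy as np

def haar_decompose(x, levels):
    """Multi-level Haar DWT; len(x) must be divisible by 2**levels."""
    coeffs, approx = [], x.astype(float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # approximation
    return approx, coeffs

def haar_reconstruct(approx, coeffs):
    for detail in reversed(coeffs):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + detail) / np.sqrt(2)
        out[1::2] = (approx - detail) / np.sqrt(2)
        approx = out
    return approx

def soft_threshold(c, t):
    """Shrink coefficients toward zero; kills small (noise-like) details."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Toy "heart sound": a low-frequency pulse buried in white noise
rng = np.random.default_rng(0)
time = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 4 * time)
noisy = clean + 0.3 * rng.standard_normal(time.size)

approx, details = haar_decompose(noisy, levels=4)
details = [soft_threshold(d, 0.25) for d in details]
denoised = haar_reconstruct(approx, details)

print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # → True
```

Without thresholding the two transforms invert each other exactly; the denoising gain comes entirely from shrinking the detail coefficients, where the white noise lives.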

  12. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
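A dynamic Bayesian network is beyond a short sketch, but the underlying idea of segmenting audio where frame-level acoustic features change abruptly can be illustrated with a much cruder stand-in; the frame size, threshold, and toy signal below are invented for illustration.

```python
import numpy as np

def segment_by_energy(x, frame=256, jump=4.0):
    """Toy segmenter: mark a boundary where frame RMS energy changes by
    more than a factor of `jump` relative to the previous frame (a crude
    stand-in for probabilistic feature-change tracking with a DBN)."""
    n_frames = len(x) // frame
    rms = np.array([np.sqrt(np.mean(x[i*frame:(i+1)*frame]**2) + 1e-12)
                    for i in range(n_frames)])
    ratios = rms[1:] / rms[:-1]
    return [(i + 1) * frame for i in range(len(ratios))
            if ratios[i] > jump or ratios[i] < 1.0 / jump]

# Toy recording: silence, a loud event, silence again
rng = np.random.default_rng(2)
quiet = 0.01 * rng.standard_normal(1024)
loud = rng.standard_normal(1024)
sig = np.concatenate([quiet, loud, quiet])
print(segment_by_energy(sig))  # → [1024, 2048]
```

The boundaries land at the onset and offset of the loud event, i.e., where an event would begin and end in the segmentation stage described above.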

  13. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  14. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  15. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  16. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  17. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  18. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  19. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  20. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity ... is needed, and a European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality ...

  1. Operator performance and annunciation sounds

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.T.; Artiss, W.G.

    1997-01-01

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated and the psychological elements involved in the human processing of alarm sounds is explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author)

  2. Operator performance and annunciation sounds

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B K; Bradley, M T; Artiss, W G [Human Factors Practical, Dipper Harbour, NB (Canada)

    1998-12-31

    This paper discusses the audible component of annunciation found in typical operating power stations. The purpose of the audible alarm is stated and the psychological elements involved in the human processing of alarm sounds is explored. Psychological problems with audible annunciation are noted. Simple and more complex improvements to existing systems are described. A modern alarm system is suggested for retrofits or new plant designs. (author) 3 refs.

  3. Consort 1 sounding rocket flight

    Science.gov (United States)

    Wessling, Francis C.; Maybee, George W.

    1989-01-01

This paper describes a payload of six experiments developed for a 7-min microgravity flight aboard the sounding rocket Consort 1, in order to investigate the effects of low gravity on certain material processes. The experiments in question were designed to test the effect of microgravity on the demixing of aqueous polymer two-phase systems, the electrodeposition process, the production of elastomer-modified epoxy resins, the foam formation process and the characteristics of foam, material dispersion, and metal sintering. The apparatuses designed for these experiments are examined, and the rocket-payload integration and operations are discussed.

  4. Sound Performance – Experience and Event

    DEFF Research Database (Denmark)

    Holmboe, Rasmus

The present paper draws on examples from my ongoing PhD project, which is connected to the Museum of Contemporary Art in Roskilde, Denmark, where I curate a sub-programme at ACTS 2014 – a festival for performative arts. The aim is to investigate how sound performance can be presented and represented - in real... In itself – and as an artistic material – sound is always already process. It involves the listener in a situation that is both filled with elusive presence and one that evokes rooted memory. At the same time sound is bodily, social and historical. It propagates between individuals and objects, it creates

  5. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and more varied forms of modulation: the effects of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting that the modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  6. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4-track cassettes, VHS …

  7. Game Sound from Behind the Sofa

    DEFF Research Database (Denmark)

    Garner, Tom Alexander

    2013-01-01

    The central concern of this thesis is the processes by which human beings perceive sound and experience emotions within a computer video gameplay context. The potential of quantitative sound parameters to evoke and modulate emotional experience is explored, working towards the development … that provide additional support for the hypothetical frameworks: an ecological process of fear, a fear-related model of virtual and real acoustic ecologies, and an embodied virtual acoustic ecology framework. It is intended that this thesis will clearly support more effective and efficient sound design practices and also improve awareness of the capacity of sound to generate significant emotional experiences during computer video gameplay. It is further hoped that this thesis will elucidate the potential of biometrics/psychophysiology to allow game designers to better understand the player and to move …

  8. Inverse problem of radiofrequency sounding of ionosphere

    Science.gov (United States)

    Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  9. Sonic drifting: sound, city and psychogeography

    OpenAIRE

    Budhaditya Chattopadhyay

    2013-01-01

    Studying and perceiving an emerging city by listening to its sounds might be phenomenologically reductive in approach, but it can lead to a framework for understanding the fabric of the urban environment through artistic practice. This paper describes a sound work, Elegy for Bangalore, and examines its artistic processes in order to shed light on the methodologies for listening to an expanding city by engaging with multilayered urban contexts and, subsequently, evoking the psychogeography of ...

  10. Sounding rockets explore the ionosphere

    International Nuclear Information System (INIS)

    Mendillo, M.

    1990-01-01

    It is suggested that small, expendable, solid-fuel rockets used to explore ionospheric plasma can offer insight into all the processes and complexities common to space plasma. NASA's sounding rocket program for ionospheric research focuses on the flight of instruments to measure parameters governing the natural state of the ionosphere. Parameters include input functions, such as photons, particles, and composition of the neutral atmosphere; resultant structures, such as electron and ion densities, temperatures and drifts; and emerging signals such as photons and electric and magnetic fields. Systematic study of the aurora is also conducted by these rockets, allowing sampling at relatively high spatial and temporal rates as well as investigation of parameters, such as energetic particle fluxes, not accessible to ground based systems. Recent active experiments in the ionosphere are discussed, and future sounding rocket missions are cited

  11. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as a visual phenomenon – commercial messages, such as banner ads, that we watch, read, and eventually click on – but only rarely as something that we listen to. The present chapter presents an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content.

  12. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: visual, auditory, performative, social, spatial and durational dimensions become …

  13. Sound Visualization and Holography

    Science.gov (United States)

    Kock, Winston E.

    1975-01-01

    Describes liquid surface holograms including their application to medicine. Discusses interference and diffraction phenomena using sound wave scanning techniques. Compares focussing by zone plate to holographic image development. (GH)

  14. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  15. Assessment and improvement of sound quality in cochlear implant users.

    Science.gov (United States)

    Caldwell, Meredith T; Jiam, Nicole T; Limb, Charles J

    2017-06-01

    Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, with the purpose of summarizing novel findings and crucial information about how CI users experience complex sounds. Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users.

  16. Estudo longitudinal da atenção compartilhada em crianças autistas não-verbais Longitudinal study of joint attention in non-verbal autistic children

    Directory of Open Access Journals (Sweden)

    Leila Sandra Damião Farah

    2009-12-01

    PURPOSE: to identify and characterize joint attention abilities of non-verbal autistic children through the observation of communicative behaviors. METHODS: the research involved 5 boys, between 5,9 and 8,6 years old, diagnosed with Autistic Disorder (DSM-IV, 2002), recorded on two occasions with a four-month interval. Meanwhile, the children underwent language therapy mediation based on joint attention stimulation. Each recording was 15 minutes long and involved one child or a group of 2-3 children with the therapist, in non-directed and semi-directed interaction situations, at the school where they studied. We observed and registered behaviors regarding joint attention abilities. The material used involved percussion instruments. Data were analyzed in relation to time, interaction and interlocutor. RESULTS: the gaze behavior showed the greatest growth in each subject. Data analysis revealed that the subjects showed qualitative trends of evolution in joint attention ability, revealing important clinical meaning despite the lack of statistical significance. Each subject showed characteristics and evolution of communicative behaviors regarding joint attention in an individualized manner. After the period of language therapy intervention, we observed quantitative behavioral growth in the 5 subjects, specifically in child-therapist interaction. CONCLUSIONS: the gaze behavior is an important step for the development of other behaviors toward joint attention. The adult-child interaction situation facilitates the appearance of communication and sharing behaviors. Language therapy with a focus on joint attention abilities seems to contribute positively to the communication development of autistic children.

  17. Temperature dependence of sound velocity in yttrium ferrite

    International Nuclear Information System (INIS)

    L'vov, V.A.

    1979-01-01

    The effect of the phonon-magnon and phonon-phonon interactions on the temperature dependence of the longitudinal sound velocity in yttrium ferrite is considered. It has been shown that at low temperatures four-particle phonon-magnon processes produce the basic contribution to the renormalization of the sound velocity. At higher temperatures the temperature dependence of the sound velocity is mainly defined by phonon-phonon processes.

  18. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.
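    The zero-frequency filtering idea this record describes can be sketched in a few lines: pass the differenced signal through a cascade of two zero-frequency resonators (ideal double integrators) and remove the resulting polynomial trend with repeated local-mean subtraction; negative-to-positive zero crossings of the output then approximate glottal epochs. This is an illustrative reimplementation of the general technique, not the authors' code, and the window length and number of trend-removal passes are assumptions:

    ```python
    import numpy as np

    def zero_frequency_filter(x, fs, win_ms=10.0):
        """Illustrative zero-frequency filtering: two cascaded
        zero-frequency resonators plus local-mean trend removal."""
        s = np.diff(x, prepend=x[0]).astype(float)  # remove DC bias
        y = s
        for _ in range(2):                          # two zero-frequency resonators
            out = np.zeros_like(y)
            for n in range(len(y)):
                out[n] = y[n]
                if n >= 1:
                    out[n] += 2.0 * out[n - 1]
                if n >= 2:
                    out[n] -= out[n - 2]
            y = out
        half = int(fs * win_ms / 1000.0)            # half-window ~ pitch period
        kernel = np.ones(2 * half + 1) / (2 * half + 1)
        for _ in range(3):                          # repeated local-mean subtraction
            y = y - np.convolve(y, kernel, mode="same")
        return y

    def glottal_epochs(zff_signal):
        """Epochs = negative-to-positive zero crossings of the ZFF output."""
        z = zff_signal
        return np.where((z[:-1] < 0) & (z[1:] >= 0))[0] + 1
    ```

    For a clean synthetic excitation, e.g. a 100 Hz impulse train sampled at 8 kHz, the detected epochs fall roughly once per pitch period, and `fs / np.diff(epochs)` gives the instantaneous fundamental frequency mentioned in the abstract.
    
    
    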

  19. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form … geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  20. Perception of environmental sounds by experienced cochlear implant patients

    Science.gov (United States)

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Objectives Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli, may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Design Seventeen experienced postlingually-deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception, and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern and temporal order for tones tests) and a backward digit recall test. Results The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants and r = 0.48 for vowels. 

  1. Análise da comunicação verbal e não-verbal de crianças com deficiencia visual durante interação com a mãe Analysis of the verbal and non-verbal communication of children with visual impairment during interaction with their mothers

    Directory of Open Access Journals (Sweden)

    Jáima Pinheiro de Oliveira

    2005-12-01

    blind children, with low vision capacity and children with normal vision and, therefore, to analyze the particularities of the maternal communication during the interaction within free and planned contexts. Six children participated in the study: two blind; two with low vision capacity and; two with normal vision, who were selected from specific criteria. Two recordings of each were carried out in the familiar environment: free and planned situations. The analysis was performed by means of functional characterization of the verbal and non-verbal communication of the children with their mothers. The data showed that the verbal communicative resources were predominant in both free and planned situations. Overall, the results of this study indicate that although there were particularities during its use, the language of the visual impairment children does not present deficit in relation to the one of those with normal vision. Moreover, the mothers of the blind children and with low vision capacity used strategies such as descriptions of the environment, indications and localization of objects during their interactions that favored their performance.

  2. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  3. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  4. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. Three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  5. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  6. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction over time? And how is listening and sounding a deeply social activity – constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  7. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topic

  8. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction …

  9. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert house Harpa, the chapter considers how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required …

  10. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing on neuroscience knowledge of how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural …

  11. Eliciting Sound Memories.

    Science.gov (United States)

    Harris, Anna

    2015-11-01

    Sensory experiences are often considered triggers of memory, most famously a little French cake dipped in lime blossom tea. Sense memory can also be evoked in public history research through techniques of elicitation. In this article I reflect on different social science methods for eliciting sound memories such as the use of sonic prompts, emplaced interviewing, and sound walks. I include examples from my research on medical listening. The article considers the relevance of this work for the conduct of oral histories, arguing that such methods "break the frame," allowing room for collaborative research connections and insights into the otherwise unarticulatable.

  12. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable …

  13. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

    The relationship between the meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects that violate arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages, we find commonalities among sound shapes for words referring to the same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.
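    The kind of cross-concept comparison the record alludes to can be illustrated with a toy computation: for each concept, measure how often a given sound segment occurs in the words for that concept, and compare across concepts. The word lists below are invented for illustration only, not real linguistic samples, and the segment tallying is a deliberately simplified stand-in for the paper's method:

    ```python
    def segment_rate(words, segment):
        """Proportion of words that contain a given sound segment."""
        return sum(segment in w for w in words) / len(words)

    # Invented mini word lists (hypothetical forms, for illustration only):
    nose_words = ["nena", "anu", "nosu", "mur", "nin", "hana"]
    stone_words = ["kava", "tok", "piedra", "lapi", "kivi", "dash"]

    rate_nose = segment_rate(nose_words, "n")    # nasal segment in 'nose' words
    rate_stone = segment_rate(stone_words, "n")  # same segment in a control concept
    ```

    An excess of a segment in one concept's words relative to a control concept, replicated across many unrelated languages, is the sort of commonality that would be interpreted as sound symbolism.
    
    
    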

  14. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that consider video game scoring as a contemporary creative practice.

  15. Exploring Sound with Insects

    Science.gov (United States)

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  16. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common that a physical system resonates at a particular frequency that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, we probed turbulent decay in He ii with a second sound signal. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
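
    At the heart of such a system is lock-in demodulation: the received signal is mixed with in-phase and quadrature references at the tracked frequency and low-pass filtered to recover the amplitude of that component. A minimal numerical sketch of the demodulation step (illustrative only; the system described in the record is analog hardware with automatic gain control):

```python
import numpy as np

def lock_in_amplitude(signal, fs, f_ref):
    """Lock-in-style demodulation: mix `signal` with in-phase and
    quadrature references at f_ref, low-pass by averaging, and return
    the amplitude of the component at that frequency."""
    t = np.arange(len(signal)) / fs
    x = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase
    y = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature
    return 2 * np.hypot(x, y)

# A resonance-like tone buried in noise (hypothetical numbers):
fs, f0 = 10_000, 440.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
sig = 0.5 * np.sin(2 * np.pi * f0 * t + 0.3) + rng.normal(0, 1.0, t.size)
print(lock_in_amplitude(sig, fs, f0))  # ≈ 0.5 despite noise std of 1.0
```

    Averaging over many cycles is what buys the signal-to-noise improvement; a real lock-in replaces the mean with a low-pass filter whose time constant sets the tracking bandwidth.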

  17. See This Sound

    DEFF Research Database (Denmark)

    Kristensen, Thomas Bjørnsten

    2009-01-01

    Review of the exhibition See This Sound at the Lentos Kunstmuseum Linz, Austria, which marks the provisional culmination of a collaboration between the Lentos Kunstmuseum and the Ludwig Boltzmann Institute Media.Art.Research. Beyond the exhibition itself, the collaboration is conceived as an ambitious, interdisciplinary…

  18. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  19. Sound of Stockholm

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2013-01-01

    With only four years behind it, Sound of Stockholm is a relative newcomer to the international festival landscape. The festival reportedly grew out of a greater or lesser frustration that the various associations and organizations of the Swedish experimental music scene were treading on each other's turf, and…

  20. Making Sense of Sound

    Science.gov (United States)

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  1. The Sounds of Metal

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    2015-01-01

    Two, I propose that this framework allows for at least a theoretical distinction between the way in which extreme metal – e.g. black metal, doom metal, funeral doom metal, death metal – relates to its sound as music and the way in which much other music may be conceived of as being constituted...

  2. The Universe of Sound

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Sound sculptor Bill Fontana, the second winner of the Prix Ars Electronica Collide@CERN residency award, and his science inspiration partner, CERN cosmologist Subodh Patil, present their work in art and science at the CERN Globe of Science and Innovation on 4 July 2013 at 19:00.

  3. Urban Sound Ecologies

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh; Samson, Kristine

    2013-01-01

    The article concludes that the ways in which recent sound installations work with urban ecologies vary. While two of the examples blend into the urban environment, the other transfers the concert format and its mode of listening to urban space. Last, and in accordance with recent soundscape research, we point...

  4. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques, we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  5. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment.

  6. Remembering that big things sound big: Sound symbolism and associative memory.

    Science.gov (United States)

    Preziosi, Melissa A; Coane, Jennifer H

    2017-01-01

    According to sound symbolism theory, individual sounds or clusters of sounds can convey meaning. To examine the role of sound symbolic effects on processing and memory for nonwords, we developed a novel set of 100 nonwords to convey largeness (nonwords containing plosive consonants and back vowels) and smallness (nonwords containing fricative consonants and front vowels). In Experiments 1A and 1B, participants rated the size of the 100 nonwords and provided definitions to them as if they were products. Nonwords composed of fricative/front vowels were rated as smaller than those composed of plosive/back vowels. In Experiment 2, participants studied sound symbolic congruent and incongruent nonword and participant-generated definition pairings. Definitions paired with nonwords that matched the size and participant-generated meanings were recalled better than those that did not match. When the participant-generated definitions were re-paired with other nonwords, this mnemonic advantage was reduced, although still reliable. In a final free association study, the possibility that plosive/back vowel and fricative/front vowel nonwords elicit sound symbolic size effects due to mediation from word neighbors was ruled out. Together, these results suggest that definitions that are sound symbolically congruent with a nonword are more memorable than incongruent definition-nonword pairings. This work has implications for the creation of brand names and how to create brand names that not only convey desired product characteristics, but also are memorable for consumers.
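
    The size manipulation in the stimuli can be illustrated with a toy scoring rule. The letter classes below are hypothetical stand-ins for the phoneme categories described in the abstract (plosive consonants and back vowels suggest largeness; fricative consonants and front vowels suggest smallness); the actual nonword set was hand-constructed by the authors, not generated by a rule like this:

```python
# Hypothetical letter classes approximating the study's phoneme categories.
PLOSIVES = set("pbtdkg")
FRICATIVES = set("fvsz")
BACK_VOWELS = set("ou")
FRONT_VOWELS = set("ie")

def size_score(nonword):
    """Positive score -> 'large'-sounding, negative -> 'small'-sounding."""
    w = nonword.lower()
    large = sum(c in PLOSIVES or c in BACK_VOWELS for c in w)
    small = sum(c in FRICATIVES or c in FRONT_VOWELS for c in w)
    return large - small

# Two made-up nonwords in the spirit of the stimuli:
for w in ("bodogo", "fisefi"):
    print(w, "large" if size_score(w) > 0 else "small")
```

    On this rule, a plosive/back-vowel nonword like "bodogo" scores as large and a fricative/front-vowel nonword like "fisefi" as small, mirroring the size ratings reported in Experiments 1A and 1B.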

  7. Cleft palate children: performance in auditory processing tests

    Directory of Open Access Journals (Sweden)

    Mirela Boscariol

    2009-04-01

    Full Text Available Many children with auditory processing disorders have a high prevalence of otitis media, a middle ear alteration highly prevalent in the population with cleft lip and palate. AIM: To assess the performance of children with isolated cleft palate (CP) in auditory processing tests. Prospective study. MATERIALS AND METHODS: Twenty children (7 to 11 years) with CP were submitted to tests of sound localization (SL), sequential memory for verbal sounds (MSSV) and non-verbal sounds (MSSNV), the Auditory Fusion Test-Revised (AFT-R), the Pediatric Speech Intelligibility/Synthetic Sentence Identification test (PSI/SSI), Staggered Spondaic Words (SSW) and Dichotic Digits (DD). Performance on the tests was classified as poor or good. RESULTS: There was no statistical difference between genders or between ears. The mean values obtained were 2.16, 2.42, 4.37, 60.50 ms, 40.71 to 67.33%, 96.25 to 99.38%, 73.55 to 73.88% and 58.38 to 65.47%, respectively, for the MSSNV, MSSV, SL and AFT-R tests, the PSI/SSI with ipsilateral (PSI/SSI-ICM) and contralateral competing message (PSI/SSI-CCM), DD and SSW. CONCLUSION: A high percentage of children showed their worst performances in the AFT-R, DD, SSW and PSI/SSI-ICM tests. The best performances occurred in sound localization, sequential memory for non-verbal and verbal sounds, and PSI/SSI-CCM.

  8. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment

    Directory of Open Access Journals (Sweden)

    Simone de Sousa Paiva

    2010-02-01

    Full Text Available Non-verbal communication is predominant in the mother-child relation. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months of age. Recordings of the care were analyzed by experts, who observed aspects of non-verbal communication such as paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted the inference that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection can be a determining factor in the formation of mothers' strong attachment to their children after birth.

  9. Temporally Regular Musical Primes Facilitate Subsequent Syntax Processing in Children with Specific Language Impairment.

    Science.gov (United States)

    Bedoin, Nathalie; Brisseau, Lucie; Molinier, Pauline; Roch, Didier; Tillmann, Barbara

    2016-01-01

    Children with developmental language disorders have been shown to be also impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory. An external stimulus can stimulate internal oscillators, which orient attention over time and drive speech signal segmentation to provide benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of an external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost subsequent to the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; i.e., referred to as "baseline condition") followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Results revealed that performance for the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted in the theoretical framework of the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in non-verbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation.

  10. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  11. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  12. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  13. Human-assisted sound event recognition for home service robots.

    Science.gov (United States)

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot serving the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.

  14. Plastic modes of listening: affordance in constructed sound environments

    Science.gov (United States)

    Sjolin, Anders

    This thesis is concerned with how the ecological approach to perception, together with listening modes, informs the creation of sound art installations, referred to in this thesis as constructed sound environments. The thesis is based on practice-based research in which the aim of the written part has been to critically investigate the area of sound art in order to map various approaches to participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach to understanding sound art developed by Brandon LaBelle (2006). The findings of the written part, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy and process behind the organisation and construction of sound environments. The research points towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach is governed by the idea that perceiving a sound environment is a top-down process in which the autonomic quality of a constructed sound environment is based upon the perception of structures of the sound material and their relationship with speaker placement and surrounding space. This enables a researcher to sidestep the conflicting poles of the musical/abstract and non-musical/realistic classification of sound elements and regard these poles as included, not separated, elements in the analysis of a constructed sound environment.

  15. Wood for sound.

    Science.gov (United States)

    Wegst, Ulrike G K

    2006-10-01

    The unique mechanical and acoustical properties of wood and its aesthetic appeal still make it the material of choice for musical instruments and the interior of concert halls. Worldwide, several hundred wood species are available for making wind, string, or percussion instruments. Over generations, first by trial and error and more recently by scientific approach, the most appropriate species were found for each instrument and application. Using material property charts on which acoustic properties such as the speed of sound, the characteristic impedance, the sound radiation coefficient, and the loss coefficient are plotted against one another for woods, we analyze and explain why spruce is the preferred choice for soundboards, why tropical species are favored for xylophone bars and woodwind instruments, why violinists still prefer pernambuco over other species as a bow material, and why hornbeam and birch are used in piano actions.
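
    The acoustic quantities named in the abstract follow from two material properties, Young's modulus E and density ρ: speed of sound c = √(E/ρ), characteristic impedance z = √(Eρ), and sound radiation coefficient R = √(E/ρ³). A sketch with rough, assumed literature values for spruce along the grain (not figures taken from the article):

```python
import math

def acoustic_properties(E, rho):
    """E: Young's modulus [Pa], rho: density [kg/m^3].
    Returns speed of sound c [m/s], characteristic impedance z [Pa*s/m],
    and sound radiation coefficient R [m^4/(kg*s)]."""
    c = math.sqrt(E / rho)        # how fast vibrations travel
    z = math.sqrt(E * rho)        # resistance to being set in motion
    R = math.sqrt(E / rho**3)     # how strongly vibration radiates as sound
    return c, z, R

# Assumed, order-of-magnitude values for spruce along the grain:
c, z, R = acoustic_properties(E=10e9, rho=400)
print(round(c), round(R, 1))  # 5000 12.5
```

    The high radiation coefficient of a light, stiff wood like spruce is what the property-chart argument for soundboards rests on: for a given input of vibrational energy, more of it is radiated as sound.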

  16. Sound in Ergonomics

    Directory of Open Access Journals (Sweden)

    Jebreil Seraji

    1999-03-01

    Full Text Available The word "ergonomics" is composed of two parts, "ergo" and "nomos", and denotes human factors engineering. Indeed, ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. It draws on different sciences, such as anatomy and physiology, anthropometry, engineering, psychology, biophysics and biochemistry, for different ergonomic purposes. Sound, when it takes the form of noise pollution, can upset this balance in human life: industrial noise from factories, traffic jams, media, and modern human activity can affect the health of society. Here we discuss sound from an ergonomic point of view.

  17. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a softmax output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
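
    The harmonic product spectrum mentioned in the abstract multiplies progressively downsampled copies of the magnitude spectrum, so that a peak emerges at the fundamental where the harmonics line up. A minimal sketch (illustrative; not the authors' implementation):

```python
import numpy as np

def hps_pitch(signal, fs, n_harmonics=4):
    """Estimate the fundamental frequency via the harmonic product
    spectrum: decimate the magnitude spectrum by 1..n_harmonics and
    multiply, which reinforces the bin where harmonics coincide."""
    spectrum = np.abs(np.fft.rfft(signal))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        dec = spectrum[::h]            # spectrum decimated by factor h
        hps[:len(dec)] *= dec
    k = np.argmax(hps[1:]) + 1         # skip the DC bin
    return k * fs / len(signal)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Harmonic-rich test tone at 220 Hz (fundamental plus three harmonics).
sig = sum(np.sin(2 * np.pi * 220 * h * t) / h for h in range(1, 5))
print(hps_pitch(sig, fs))  # ≈ 220.0
```

    A pitch error measure of the kind the paper uses could then compare the signal's spectrum against an ideal harmonic series at the estimated pitch; speech and music tend to fit such a series far better than noise.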

  18. Pectoral sound generation in the blue catfish Ictalurus furcatus.

    Science.gov (United States)

    Mohajer, Yasha; Ghahramani, Zachary; Fine, Michael L

    2015-03-01

    Catfishes produce pectoral stridulatory sounds by "jerk" movements that rub ridges on the dorsal process against the cleithrum. We recorded sound synchronized with high-speed video to investigate the hypothesis that blue catfish Ictalurus furcatus produce sounds by a slip-stick mechanism, previously described only in invertebrates. Blue catfish produce a variably paced series of sound pulses during abduction sweeps (pulsers) although some individuals (sliders) form longer duration sound units (slides) interspersed with pulses. Typical pulser sounds are evoked by short 1-2 ms movements with a rotation of 2°-3°. Jerks excite sounds that increase in amplitude after motion stops, suggesting constructive interference, which decays before the next jerk. Longer contact of the ridges produces a more steady-state sound in slides. Pulse pattern during stridulation is determined by pauses without movement: the spine moves during about 14 % of the abduction sweep in pulsers (~45 % in sliders) although movement appears continuous to the human eye. Spine rotation parameters do not predict pulse amplitude, but amplitude correlates with pause duration suggesting that force between the dorsal process and cleithrum increases with longer pauses. Sound production, stimulated by a series of rapid movements that set the pectoral girdle into resonance, is caused by a slip-stick mechanism.

  19. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and son...

  20. Integrating Sound Scattering Measurements in the Design of Complex Architectural Surfaces

    DEFF Research Database (Denmark)

    Peters, Brady

    2010-01-01

    Digital tools present the opportunity for incorporating performance analysis into the architectural design process. Acoustic performance is an important criterion for architectural design. There is much known about sound absorption but little about sound scattering, even though scattering is reco...

  1. FeelSound: interactive acoustic music making

    NARCIS (Netherlands)

    Fikkert, F.W.; Hakvoort, Michiel; Hakvoort, M.C.; van der Vet, P.E.; Nijholt, Antinus

    2009-01-01

    FeelSound is a multi-user, multi-touch application that aims to collaboratively compose, in an entertaining way, acoustic music. Simultaneous input by each of up to four users enables collaborative composing. This process as well as the resulting music are entertaining. Sensor-packed intelligent

  2. Redesigning Space for Interdisciplinary Connections: The Puget Sound Science Center

    Science.gov (United States)

    DeMarais, Alyce; Narum, Jeanne L.; Wolfson, Adele J.

    2013-01-01

    Mindful design of learning spaces can provide an avenue for supporting student engagement in STEM subjects. Thoughtful planning and wide participation in the design process were key in shaping new and renovated spaces for the STEM community at the University of Puget Sound. The finished project incorporated Puget Sound's mission and goals as well…

  3. Underwater sound produced by individual drop impacts and rainfall

    DEFF Research Database (Denmark)

    Pumphrey, Hugh C.; Crum, L. A.; Jensen, Leif Bjørnø

    1989-01-01

    An experimental study of the underwater sound produced by water drop impacts on the surface is described. It is found that sound may be produced in two ways: first when the drop strikes the surface and, second, when a bubble is created in the water. The first process occurs for every drop...

  4. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    46 Shipping 1, 2010-10-01 edition. Atlantic Coast, § 7.20: Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY.

  5. Sound Symbolism in the Languages of Australia

    Science.gov (United States)

    Haynie, Hannah; Bowern, Claire; LaPalombara, Hannah

    2014-01-01

    The notion that linguistic forms and meanings are related only by convention and not by any direct relationship between sounds and semantic concepts is a foundational principle of modern linguistics. Though the principle generally holds across the lexicon, systematic exceptions have been identified. These “sound symbolic” forms have been identified in lexical items and linguistic processes in many individual languages. This paper examines sound symbolism in the languages of Australia. We conduct a statistical investigation of the evidence for several common patterns of sound symbolism, using data from a sample of 120 languages. The patterns examined here include the association of meanings denoting “smallness” or “nearness” with front vowels or palatal consonants, and the association of meanings denoting “largeness” or “distance” with back vowels or velar consonants. Our results provide evidence for the expected associations of vowels and consonants with meanings of “smallness” and “proximity” in Australian languages. However, the patterns uncovered in this region are more complicated than predicted. Several sound-meaning relationships are only significant for segments in prominent positions in the word, and the prevailing mapping between vowel quality and magnitude meaning cannot be characterized by a simple link between gradients of magnitude and vowel F2, contrary to the claims of previous studies. PMID:24752356

  6. Testing Cosmology with Cosmic Sound Waves

    CERN Document Server

    Corasaniti, Pier Stefano

    2008-01-01

    WMAP observations have accurately determined the position of the first two peaks and dips in the CMB temperature power spectrum. These encode information on the ratio of the distance to the last scattering surface to the sound horizon at decoupling. However, pre-recombination processes can contaminate this distance information. In order to assess the amplitude of these effects we use the WMAP data and evaluate the relative differences of the CMB peak and dip multipoles. We find that the position of the first peak is largely displaced with respect to the expected position of the sound horizon scale at decoupling. In contrast, the relative spacings of the higher extrema are statistically consistent with those expected from perfect harmonic oscillations. This provides evidence for a scale-dependent phase shift of the CMB oscillations which is caused by gravitational driving forces affecting the propagation of sound waves before recombination. By accounting for these effects we have performed a MCMC likelihoo...

  7. Ultrasonic sound speed of hydrating calcium sulphate hemihydrate; part 2, the correlation of sound velocity to hydration degree

    NARCIS (Netherlands)

    de Korte, A.C.J.; Brouwers, Jos; Fischer, H.B; Matthes, C.; Beuthan, C.

    2011-01-01

    In this article the sound velocity through a mix is correlated to the hydration degree of the mix. Models are presented predicting the sound velocity through fresh slurries and hardened products. These two states correspond to the starting and finishing point of the hydration process. The present

  8. Ultrasonic sound speed of hydrating calcium sulphate hemihydrate; Part 2, The correlation of sound velocity to hydration degree

    NARCIS (Netherlands)

    Korte, de A.C.J.; Brouwers, H.J.H.; Fischer, H.B.; Mattes, Chr.; Beutha, C.

    2011-01-01

    In this article the sound velocity through a mix is correlated to the hydration degree of the mix. Models are presented predicting the sound velocity through fresh slurries and hardened products. These two states correspond to the starting and finishing point of the hydration process. The present

  9. Sound to language: different cortical processing for first and second languages in elementary school children as revealed by a large-scale study using fNIRS.

    Science.gov (United States)

    Sugiura, Lisa; Ojima, Shiro; Matsuba-Kurita, Hiroko; Dan, Ippeita; Tsuzuki, Daisuke; Katura, Takusige; Hagiwara, Hiroko

    2011-10-01

    A large-scale study of 484 elementary school children (6-10 years) performing word repetition tasks in their native language (L1-Japanese) and a second language (L2-English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation, language (L1/L2), word frequency (high/low), and hemisphere (left/right), were investigated. L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant), and in the inferior parietal region with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children's brains depending more on right-hemispheric processing while acquiring unfamiliar/low-frequency words. A right-to-left shift in laterality should occur in the inferior parietal region, as lexical knowledge increases irrespective of language.

  10. Complex sound processing during human REM sleep by recovering information from long-term memory as revealed by the mismatch negativity (MMN).

    Science.gov (United States)

    Atienza, M; Cantero, J L

    2001-05-18

    Perceptual learning is thought to be the result of neural changes that take place over a period of several hours or days, allowing information to be transferred to long-term memory. Evidence suggests that contents of long-term memory may improve attentive and pre-attentive sensory processing. Therefore, it is plausible to hypothesize that learning-induced neural changes that develop during wakefulness could improve automatic information processing during human REM sleep. The MMN, an objective measure of the automatic change detection in auditory cortex, was used to evaluate long-term learning effects on pre-attentive processing during wakefulness and REM sleep. When subjects learned to discriminate two complex auditory patterns in wakefulness, an increase in the MMN was obtained in both wake and REM states. The automatic detection of the infrequent complex auditory pattern may therefore be improved in both brain states by reactivating information from long-term memory. These findings suggest that long-term learning-related neural changes are accessible during REM sleep as well.

  11. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source, at least one speaker as a secondary sound

  12. Magnetospheric radio sounding

    International Nuclear Information System (INIS)

    Ondoh, Tadanori; Nakamura, Yoshikatsu; Koseki, Teruo; Watanabe, Sigeaki; Murakami, Toshimitsu

    1977-01-01

Radio sounding of the plasmapause from a geostationary satellite has been investigated to observe time variations of the plasmapause structure and effects of plasma convection. In the equatorial plane, the plasmapause is located, on average, at 4 R_E (R_E: Earth radius), and the plasma density drops outward from 10²-10³ cm⁻³ to 1-10 cm⁻³ across a plasmapause width of about 600 km. Plasmagrams, showing the relation between virtual range and sounding frequency, are computed by ray tracing of LF-VLF waves transmitted from a geostationary satellite, using model distributions of the electron density in the vicinity of the plasmapause. The general features of the plasmagrams are similar to topside ionograms. A plasmagram has no penetration frequency such as foF2, but its virtual range increases rapidly with frequency above 100 kHz, since the distance between the satellite and the wave reflection point grows rapidly with increasing electron density inside the plasmapause. The plasmapause sounder on a geostationary satellite has been designed taking into account an average propagation distance of 2 × 2.6 R_E between the satellite (6.6 R_E) and the plasmapause (4.0 R_E), background noise, range resolution, power consumption, and a receiver S/N of 10 dB. The 13-bit Barker-coded pulses with a baud length of 0.5 msec should be transmitted parallel to the orbital plane at frequencies from 10 kHz to 2 MHz with a pulse interval of 0.5 sec. Transmitter peak powers of 70 watts and 700 watts are required, respectively, under geomagnetically quiet and disturbed (strong nonthermal continuum emissions) conditions for a 400 meter cylindrical dipole of 1.2 cm diameter on the geostationary satellite. This technique will open a new area of radio sounding in the magnetosphere. (auth.)
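
    As a rough check on the timing numbers quoted above (my own back-of-envelope sketch, assuming free-space propagation and ignoring plasma dispersion, which lowers real LF-VLF group velocities), the round-trip delay over the 2 x 2.6 R_E path and the range resolution implied by the 0.5 msec baud can be computed directly:

```python
# Back-of-envelope sounder timing check. Free-space propagation assumed;
# the stated path is satellite (6.6 R_E) to plasmapause (4.0 R_E) and back.
C = 299_792_458.0        # speed of light, m/s
R_E = 6_371e3            # Earth radius, m

one_way = 2.6 * R_E                  # one-way distance, m
delay = 2 * one_way / C              # round-trip echo delay, s
range_res = C * 0.5e-3 / 2           # range resolution of a 0.5 ms baud, m

print(round(delay, 3), round(range_res / 1e3, 1))   # 0.111 74.9
```

    The ~0.11 s echo delay sits comfortably inside the 0.5 s pulse interval quoted in the abstract, so consecutive soundings would not overlap under this idealized assumption.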

  13. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2015-01-01

Handbook for Sound Engineers is the most comprehensive reference available for audio engineers, and is a must read for all who work in audio. With contributions from many of the top professionals in the field, including Glen Ballou on interpretation systems, intercoms, assistive listening, and fundamentals and units of measurement, David Miles Huber on MIDI, Bill Whitlock on audio transformers and preamplifiers, Steve Dove on consoles, DAWs, and computers, Pat Brown on fundamentals, gain structures, and test and measurement, Ray Rayburn on virtual systems, digital interfacing, and preamplifiers

  14. Facing Sound - Voicing Art

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2013-01-01

This article is based on examples of contemporary audiovisual art, with a special focus on the Tony Oursler exhibition Face to Face at Aarhus Art Museum ARoS in Denmark in March-July 2012. My investigation involves a combination of qualitative interviews with visitors, observations of the audience's interactions with the exhibition and the artwork in the museum space, and short analyses of individual works of art based on reception aesthetics and phenomenology and inspired by newer writings on sound, voice and listening.

  15. Sounds of a Star

    Science.gov (United States)

    2001-06-01

Acoustic Oscillations in Solar-Twin "Alpha Cen A" Observed from La Silla by Swiss Team Summary Sound waves running through a star can help astronomers reveal its inner properties. This particular branch of modern astrophysics is known as "asteroseismology". In the case of our Sun, the brightest star in the sky, such waves have been observed for some time, and have greatly improved our knowledge about what is going on inside. However, because they are much fainter, it has turned out to be very difficult to detect similar waves in other stars. Nevertheless, tiny oscillations in a solar-twin star have now been unambiguously detected by Swiss astronomers François Bouchy and Fabien Carrier from the Geneva Observatory, using the CORALIE spectrometer on the Swiss 1.2-m Leonard Euler telescope at the ESO La Silla Observatory. This telescope is mostly used for discovering exoplanets (see ESO PR 07/01). The star Alpha Centauri A is the nearest star visible to the naked eye, at a distance of a little more than 4 light-years. The new measurements show that it pulsates with a 7-minute cycle, very similar to what is observed in the Sun. Asteroseismology for Sun-like stars is likely to become an important probe of stellar theory in the near future. The state-of-the-art HARPS spectrograph, to be mounted on the ESO 3.6-m telescope at La Silla, will be able to search for oscillations in stars that are 100 times fainter than those for which such demanding observations are possible with CORALIE. PR Photo 23a/01: Oscillations in a solar-like star (schematic picture). PR Photo 23b/01: Acoustic spectrum of Alpha Centauri A, as observed with CORALIE. Asteroseismology: listening to the stars. Caption: PR Photo 23a/01 is a graphical representation of resonating acoustic waves in the interior of a solar-like star. Red and blue

  16. Mississippi Sound Remote Sensing Study

    Science.gov (United States)

    Atwell, B. H.

    1973-01-01

    The Mississippi Sound Remote Sensing Study was initiated as part of the research program of the NASA Earth Resources Laboratory. The objective of this study is development of remote sensing techniques to study near-shore marine waters. Included within this general objective are the following: (1) evaluate existing techniques and instruments used for remote measurement of parameters of interest within these waters; (2) develop methods for interpretation of state-of-the-art remote sensing data which are most meaningful to an understanding of processes taking place within near-shore waters; (3) define hardware development requirements and/or system specifications; (4) develop a system combining data from remote and surface measurements which will most efficiently assess conditions in near-shore waters; (5) conduct projects in coordination with appropriate operating agencies to demonstrate applicability of this research to environmental and economic problems.

  17. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal sampled at . Work on bringing the algorithms into the real-time processing domain is ongoing.

  18. Fundamental plasma emission involving ion sound waves

    International Nuclear Information System (INIS)

    Cairns, I.H.

    1987-01-01

    The theory for fundamental plasma emission by the three-wave processes L ± S → T (where L, S and T denote Langmuir, ion sound and transverse waves, respectively) is developed. Kinematic constraints on the characteristics and growth lengths of waves participating in the wave processes are identified. In addition the rates, path-integrated wave temperatures, and limits on the brightness temperature of the radiation are derived. (author)

  19. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time difference method. It is found that the sound velocity increases with increasing bubble diameter, and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering off the bubble walls is equivalently described as the effect of an additional length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly reduces the sound velocity, whereas the latter does not display a strong dependence on the solution concentration
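
    The time-difference principle behind such measurements can be illustrated in miniature (my own sketch, not the authors' apparatus; every number below is invented): the speed of sound follows from the arrival-time lag between two receivers a known distance apart, which can be read off a cross-correlation peak.

```python
import numpy as np

# Time-difference method, toy version: estimate sound speed from the lag
# between two receivers separated by a known distance d.
fs = 100_000            # sample rate, Hz (hypothetical)
d = 0.25                # receiver separation, m (hypothetical)
v_true = 50.0           # assumed speed of sound in the foam, m/s

# Synthetic pulse at receiver 1; the same pulse arrives later at receiver 2
t = np.arange(0, 0.02, 1 / fs)
pulse = np.exp(-((t - 0.002) / 0.0005) ** 2) * np.sin(2 * np.pi * 2000 * t)
delay_samples = int(round(d / v_true * fs))   # 500 samples for these numbers
sig1 = pulse
sig2 = np.roll(pulse, delay_samples)

# Cross-correlate and convert the peak lag back to a velocity estimate
corr = np.correlate(sig2, sig1, mode="full")
lag = np.argmax(corr) - (len(sig1) - 1)
v_est = d / (lag / fs)
print(round(v_est, 1))   # → 50.0
```

    In a real foam measurement the pulse is distorted and attenuated, so the correlation peak is broader, but the lag-to-velocity conversion is the same.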

  20. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

Environmental sound archives - casual recordings of people's daily life - are easily collected by MP3 players or camcorders at low cost and with high reliability, and shared on websites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, these archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts in a collection of such real-world recordings. The first system segments and labels personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system identifies regions of speech or music in the kinds of energetic and highly variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech- or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added to suppress these noises in the autocorrelogram domain. In addition, the third system automatically detects a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of
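
    The Bayesian Information Criterion segmentation mentioned above can be sketched in miniature. The toy below is not the thesis' implementation (which operates on multivariate audio features with spectral clustering); it compares a one-Gaussian model of a 1-D feature sequence against a two-Gaussian split to locate a change point.

```python
import numpy as np

def delta_bic(x, i, lam=1.0):
    """ΔBIC for splitting sequence x at index i (one Gaussian vs. two)."""
    n = len(x)
    x1, x2 = x[:i], x[i:]
    # per-segment variances, with a floor to avoid log(0)
    v, v1, v2 = (np.var(s) + 1e-12 for s in (x, x1, x2))
    penalty = lam * np.log(n)   # for d=1 features: 0.5*(d + d*(d+1)/2)*log n
    return 0.5 * (n * np.log(v) - len(x1) * np.log(v1)
                  - len(x2) * np.log(v2)) - penalty

rng = np.random.default_rng(0)
# Two synthetic acoustic "episodes": quiet segment, then a loud one
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 5, 300)])
scores = [delta_bic(x, i) for i in range(20, len(x) - 20)]
change = 20 + int(np.argmax(scores))
print(change)   # near the true boundary at 300
```

    A positive ΔBIC at the maximum says the two-segment model is preferred despite the extra parameters, which is the criterion the episode segmenter applies along the recording.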

  1. Comunicação não-verbal: uma contribuição para o aconselhamento em amamentação Comunicación no verbal: una contribución para la consejería en lactancia materna Non verbal communication: a contribution to breastfeeding counseling

    Directory of Open Access Journals (Sweden)

    Adriana Moraes Leite

    2004-04-01

The "Course on Breastfeeding Counseling", elaborated and implemented by the United Nations Children's Fund (UNICEF) in partnership with the World Health Organization (WHO), represents one of the most important initiatives towards the valorization of women as breastfeeding agents. With a view to understanding and facilitating the application of the nonverbal communication skills this course intends to develop among professionals, this study aims to organize the theoretical frameworks that support the teaching of Listening and Learning Skills - 1 - "Use of non verbal communication", considering the concepts of human communication found in different authors. We found that the skills taught in the course are centered on techniques directed only at the professionals' attitudes. However, it is necessary to pay attention to women's nonverbal signs, as they reflect their emotions. These signs can indicate the difficulties women are facing and their interpretations of the interaction elements in their context and, often, they indicate how women will direct the breastfeeding process.

  2. Non-verbal communication: aspects observed during nursing consultations with blind patients Comunicación no-verbal: aspectos observados durante la consulta de Enfermería con el paciente ciego Comunicação não-verbal: aspectos observados durante a consulta de Enfermagem com o paciente cego

    Directory of Open Access Journals (Sweden)

    Cristiana Brasil de Almeida Rebouças

    2007-03-01

Full Text Available Exploratory-descriptive study on non-verbal communication between nurses and blind patients during nursing consultations with diabetes patients, based on Hall's theoretical framework. Data were collected by recording the consultations. The recordings were analyzed every fifteen seconds, totaling 1,131 non-verbal communication moments. The analysis shows intimate distance (91.0%) and seated position (98.3%); no contact occurred in 83.3% of the interactions. Emblematic gestures were present, including hand movements (67.4%); gaze was directed away from the interlocutor in 52.8% of moments and centered on the interlocutor in 44.4%. In all recordings, considerable interference occurred at the moment of nurse-patient interaction. Nurses need to know about and deepen non-verbal communication studies and adapt its use to the type of patients attended during the consultations.

  3. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustic surroundings: it is more intrusive in silence and less pronounced in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocols and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter reviews therapies for tinnitus using sound stimulation.

  4. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade an increased attention has been paid to, for instance, a category such as ‘sound art’, together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we traditionally would term musical sound – a recurring example being ‘noise’.

  5. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  6. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may possibly affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  7. Effect of Sound Waves on Decarburization Rate of Fe-C Melt

    Science.gov (United States)

    Komarov, Sergey V.; Sano, Masamichi

    2018-02-01

Sound waves have the ability to propagate through a gas phase and, thus, to supply acoustic energy from a sound generator to materials being processed. This offers an attractive tool, for example, for controlling the rates of interfacial reactions in steelmaking processes. This study investigates the kinetics of decarburization in molten Fe-C alloys, the surface of which was exposed to sound waves and to Ar-O2 gas blown onto the melt surface. The main emphasis is placed on clarifying the effects of sound frequency, sound pressure, and gas flow rate. A series of water-model experiments and numerical simulations is also performed to explain the results of the high-temperature experiments and to elucidate the mechanism of sound wave application. The mechanism is explained by two phenomena that occur simultaneously: (1) turbulization of the Ar-O2 gas flow by the sound wave above the melt surface and (2) motion and agitation of the melt surface when exposed to the sound wave. It is found that sound waves can both accelerate and inhibit the decarburization rate, depending on the Ar-O2 gas flow rate and the presence of an oxide film on the melt surface. The effect of sound waves is clearly observed only at higher sound pressures at resonance frequencies, which are defined by the geometrical features of the experimental setup. The resonance phenomenon makes it difficult to separate the effect of sound frequency from that of sound pressure under the present experimental conditions.
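
    How geometry fixes the resonance frequencies can be illustrated with the simplest possible stand-in (my assumption, not the paper's setup): an idealized gas column open at both ends resonates at f_n = n·c/(2L). The real apparatus has a more complicated geometry that this toy does not capture.

```python
# Resonances of an idealized air column open at both ends: f_n = n*c/(2L).
# Both numbers below are made up for illustration.
c = 343.0   # speed of sound in the gas phase, m/s (room-temperature value)
L = 0.5     # column length, m (hypothetical)
modes = [n * c / (2 * L) for n in (1, 2, 3)]
print(modes)   # [343.0, 686.0, 1029.0]
```

    The point of the sketch is qualitative: changing L shifts every resonance, which is why the observed resonances are tied to the experimental setup rather than to the melt itself.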

  8. Effects of capacity limits, memory loss, and sound type in change deafness.

    Science.gov (United States)

    Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S

    2017-11-01

    Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.

  9. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    Full Text Available In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of ˝ecological sound art˝. This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  10. Sound preference test in animal models of addicts and phobias.

    Science.gov (United States)

    Soga, Ryo; Shiramatsu, Tomoyo I; Kanzaki, Ryohei; Takahashi, Hirokazu

    2016-08-01

Biased or too strong preference for a particular object is often problematic, resulting in addiction or phobia. In animal models, alternative forced-choice tasks have routinely been used, but such preference tests are far from the everyday situations that addicts or phobics face. In the present study, we developed a behavioral assay to evaluate sound preference in rodents. In the assay, several sounds were presented according to the position of freely moving rats, and sound preference was quantified from the resulting behavior. A particular tone was paired with microstimulation of the ventral tegmental area (VTA), which plays a central role in reward processing, to increase sound preference. The behavior of the rats was logged during six days of classical conditioning. Consequently, some behavioral indices suggest that rats search for the conditioned sound. Thus, our data demonstrate that quantitative evaluation of preference with this behavioral assay is feasible.

  11. Analysis of acoustic sound signal for ONB measurement

    International Nuclear Information System (INIS)

    Park, S. J.; Kim, H. I.; Han, K. Y.; Chai, H. T.; Park, C.

    2003-01-01

The onset of nucleate boiling (ONB) was measured in a test fuel bundle composed of several fuel element simulators (FES) by analysing the acoustic sound signals. In order to measure ONB, a hydrophone, a pre-amplifier, and a data acquisition system to acquire and process the acoustic signal were prepared. The acoustic signal generated in the coolant is converted to a current signal through the hydrophone. When the signal is analyzed in the frequency domain, each sound signal can be identified according to the origin of its sound source. As the power is increased beyond a certain level, nucleate boiling starts. The frequent formation and collapse of void bubbles produce a sound signal, and by measuring this sound signal one can pinpoint the ONB. Since the signal characteristics are identical for different mass flow rates, this method is applicable for ascertaining ONB
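
    A minimal sketch of the frequency-domain idea (my own illustration, not the paper's actual pipeline; the 8 kHz "boiling" component and every amplitude below are invented) shows how a spectral band rising above the flow-noise floor could flag ONB:

```python
import numpy as np

# Compare hydrophone spectra before and after a hypothetical boiling
# component appears, using the band-to-floor magnitude ratio as the flag.
fs = 50_000
t = np.arange(int(fs * 0.1)) / fs                    # 0.1 s of signal
rng = np.random.default_rng(1)
background = 0.1 * rng.standard_normal(len(t))       # pump/flow noise
boiling = 0.5 * np.sin(2 * np.pi * 8_000 * t)        # invented bubble noise

ratios = {}
for label, signal in (("pre-ONB", background), ("post-ONB", background + boiling)):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = spectrum[(freqs > 7_900) & (freqs < 8_100)].mean()   # watched band
    floor = spectrum[(freqs > 1_000) & (freqs < 2_000)].mean()  # noise floor
    ratios[label] = band / floor

print(ratios["pre-ONB"] < 5 < ratios["post-ONB"])    # True
```

    Because the ratio is computed against the broadband floor, the same threshold can serve across operating conditions where the overall noise level changes, consistent with the flow-rate independence noted in the abstract.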

  12. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly calculate the sound absorption and insulation properties of laminated structures and handy

  13. Sound, memory and interruption

    DEFF Research Database (Denmark)

    Pinder, David

    2016-01-01

This chapter considers how art can interrupt the times and spaces of urban development so they might be imagined, experienced and understood differently. It focuses on the construction of the M11 Link Road through north-east London during the 1990s that demolished hundreds of homes and displaced around a thousand people. The highway was strongly resisted and it became the site of one of the country’s longest and largest anti-road struggles. The chapter addresses specifically Graeme Miller’s sound walk LINKED (2003), which for more than a decade has been broadcasting memories and stories of people who were violently displaced by the road as well as those who actively sought to halt it. Attention is given to the walk’s interruption of senses of the given and inevitable in two main ways. The first is in relation to the pace of the work and its deployment of slowness and arrest in a context...

  14. Recycling Sounds in Commercials

    DEFF Research Database (Denmark)

    Larsen, Charlotte Rørdam

    2012-01-01

Commercials offer the opportunity for intergenerational memory and impinge on cultural memory. TV commercials for foodstuffs often make reference to past times as a way of authenticating products. This is frequently achieved using visual cues, but in this paper I would like to demonstrate how such references to the past and ‘the good old days’ can be achieved through sounds. In particular, I will look at commercials for Danish non-dairy spreads, especially for OMA margarine. These commercials are notable in that they contain a melody and a slogan – ‘Say the name: OMA margarine’ – that have basically remained the same for 70 years. Together these identifiers make OMA an interesting Danish case to study. With reference to Ann Rigney’s memorial practices or mechanisms, the study aims to demonstrate how the auditory aspects of Danish margarine commercials for frying tend to be limited in variety.

  15. The sounds of science

    Science.gov (United States)

    Carlowicz, Michael

    As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.

  16. Sound of proteins

    DEFF Research Database (Denmark)

    2007-01-01

In my group we work with Molecular Dynamics to model several different proteins and protein systems. We submit our modelled molecules to changes in temperature, changes in solvent composition and even external pulling forces. To analyze our simulation results we have so far used visual inspection and statistical analysis of the resulting molecular trajectories (as everybody else!). However, recently I started assigning a particular sound frequency to each amino acid in the protein, and by setting the amplitude of each frequency according to the movement amplitude we can "hear" whenever two amino acids ... An example sound file was obtained using Steered Molecular Dynamics for stretching the neck region of the scallop myosin molecule (in rigor, PDB-id: 1SR6), in such a way as to cause a rotation of the myosin head. Myosin is the molecule responsible for producing the force during muscle contraction...
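
    A toy sonification in the spirit described above (my own sketch, not the group's actual tool; the residue-to-frequency mapping and all numbers are invented) assigns each residue type a fixed frequency and sets its amplitude from the residue's per-frame movement amplitude:

```python
import numpy as np

# Hypothetical residue-to-frequency mapping, Hz
FREQS = {"ALA": 220.0, "GLY": 330.0, "LYS": 440.0}

def sonify(residues, movements, fs=8000, dur=0.5):
    """Mix one sine per residue; residues that move more sound louder."""
    t = np.arange(int(fs * dur)) / fs
    wave = np.zeros_like(t)
    for res, amp in zip(residues, movements):
        wave += amp * np.sin(2 * np.pi * FREQS[res] * t)
    peak = np.max(np.abs(wave))
    return wave / peak if peak > 0 else wave      # normalize to [-1, 1]

# Frame in which LYS moves the most, so its 440 Hz tone dominates the mix
w = sonify(["ALA", "GLY", "LYS"], [0.1, 0.2, 1.0])
print(len(w), round(float(np.max(np.abs(w))), 3))   # 4000 1.0
```

    Writing `w` out with any WAV library would give an audible rendering; a trajectory would be sonified frame by frame with `movements` taken from the per-residue displacement in each frame.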

  17. Sound Coiled-Tubing Drilling Practices

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Thomas; Deskins, Greg (Maurer Technology Inc.); Ward, Stephen L. (Advantage Energy Services Ltd); Hightower, Mel

    2001-09-30

    This Coiled-Tubing Drilling (CTD) Sound Practices Manual provides tools needed by CTD engineers and supervisors to plan, design and perform safe, successful CTD operations. As emphasized throughout, both careful planning and attention to detail are mandatory for success. A bibliography of many useful CTD references is presented in Chapter 6. This manual is organized according to three processes: 1) Pre-Job Planning Process, 2) Operations Execution Process, and 3) Post-Job Review Process. Each is discussed in a logical and sequential format.

  18. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving average F-measures of 0.969 and 0.87 for two weakly supervised datasets.
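The bag-trimming step described above can be sketched in a few lines: from each weakly labeled clip, keep the window of frames whose features best match a target-sound prototype. This is a hypothetical toy version (the `trim_bag` name, the frame-level feature layout, and the prototype-distance rule are illustrative assumptions, not the authors' diverse-density implementation):

```python
import numpy as np

def trim_bag(clip_features, target_prototype, window=5):
    """From a weakly labeled clip (one feature vector per frame),
    return the window of frames closest to a target-sound prototype.
    A toy stand-in for the bag-trimming step described in the abstract."""
    n = len(clip_features)
    best_start, best_dist = 0, np.inf
    for start in range(n - window + 1):
        segment = clip_features[start:start + window]
        dist = np.linalg.norm(segment.mean(axis=0) - target_prototype)
        if dist < best_dist:
            best_start, best_dist = start, dist
    return clip_features[best_start:best_start + window]
```

Trimmed windows collected from many weakly labeled clips would then form the training set for an ordinary supervised classifier.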

  19. Sound modes in hot nuclear matter

    International Nuclear Information System (INIS)

    Kolomietz, V. M.; Shlomo, S.

    2001-01-01

The propagation of isoscalar and isovector sound modes in hot nuclear matter is considered. The approach is based on collisional kinetic theory and takes into account temperature and memory effects. It is shown that the sound velocity and the attenuation coefficient are significantly influenced by the Fermi surface distortion (FSD). The corresponding influence is much stronger for the isoscalar mode than for the isovector one. The memory effects cause a nonmonotonic behavior of the attenuation coefficient as a function of the relaxation time, leading to a zero-to-first sound transition with increasing temperature. The mixing of the isoscalar and isovector sound modes in asymmetric nuclear matter is evaluated. The conditions for the bulk instability and the instability growth rate in the presence of memory effects are studied. It is shown that both the FSD and the relaxation processes lead to a shift of the maximum of the instability growth rate to the longer-wavelength region.

  20. Cascaded Amplitude Modulations in Sound Texture Perception

    Directory of Open Access Journals (Sweden)

    Richard McWalter

    2017-09-01

Full Text Available Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, textures that included second-order amplitude modulations appeared to be discriminated using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.
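The cascaded modulation analysis can be caricatured as "the envelope of the envelope": a first stage extracts the amplitude envelope, and a second stage measures modulations of that envelope, which is where second-order structure such as envelope beating lives. The sketch below collapses the paper's modulation filterbanks into a single broadband band, so it illustrates the cascade idea only, not the model itself:

```python
import numpy as np

def envelope(x, smooth=32):
    """Crude amplitude envelope: rectification + moving-average smoothing."""
    kernel = np.ones(smooth) / smooth
    return np.convolve(np.abs(x), kernel, mode="same")

def modulation_spectrum(x):
    """First-order modulation content: spectrum of the signal's envelope."""
    env = envelope(x)
    return np.abs(np.fft.rfft(env - env.mean()))

def second_order_modulation_spectrum(x):
    """Second-order content: spectrum of the envelope of the envelope."""
    env1 = envelope(x)
    env2 = envelope(env1 - env1.mean())
    return np.abs(np.fft.rfft(env2 - env2.mean()))
```

For a carrier amplitude-modulated at 5 Hz, the first-order spectrum peaks at the 5 Hz modulation rate; interacting modulation rates would likewise leave a beat-rate peak in the second-order spectrum.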

  1. Applying cybernetic technology to diagnose human pulmonary sounds.

    Science.gov (United States)

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) are greater than 120 Hz and the human ear is not sensitive to low frequencies, successfully making diagnostic classifications is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, and the PS signals were decomposed into frequency subbands. Using a statistical method, we extracted 17 features that were used as the input vectors of a neural network. We proposed a 2-stage classifier combined with a back-propagation (BP) neural network and learning vector quantization (LVQ) neural network, which improves classification accuracy by using a haploid neural network. The receiver operating characteristic (ROC) curve verifies the high performance level of the neural network. To expand traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
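As a rough illustration of the wavelet-subband feature idea, the sketch below uses a hand-rolled Haar transform and three simple statistics per band. The Haar wavelet, the four decomposition levels, and the choice of statistics are assumptions for illustration; the paper does not specify them:

```python
import numpy as np

def haar_step(x):
    """One level of the (orthonormal) Haar wavelet transform."""
    x = x[:len(x) // 2 * 2]               # drop an odd trailing sample
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)  # low-pass subband
    detail = (even - odd) / np.sqrt(2.0)  # high-pass subband
    return approx, detail

def subband_features(signal, levels=4):
    """Decompose into subbands, then keep simple statistics per band."""
    feats, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats += [np.mean(np.abs(detail)), np.std(detail), np.sum(detail ** 2)]
    feats += [np.mean(np.abs(approx)), np.std(approx), np.sum(approx ** 2)]
    return np.array(feats)
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, which is a quick sanity check on the decomposition; a feature vector like this would then feed the classifier stage.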

  2. Designing a Sound Reducing Wall

    Science.gov (United States)

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  3. Thinking The City Through Sound

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2011-01-01

In Acoustic Territories: Sound Culture and Everyday Life, Brandon LaBelle sets out to chart an urban topology through sound. Working his way through six acoustic territories: underground, home, sidewalk, street, shopping mall and sky/radio, LaBelle investigates tensions and potentials inherent in mo...

  4. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    2010-01-01

    The aim of this article is to shed light on a small part of the research taking place in the textile field. The article describes an ongoing PhD research project on textiles and sound and outlines the project's two main questions: how sound can be shaped by textiles and conversely how textiles can...

  5. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components for product sound related semantics using a semantic differential paradigm and factor analysis. With two

  6. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

…cate that specialized regions of the brain analyse different types of sounds [1]. Music, … The left panel of figure 1 shows examples of sound–pressure waveforms from the nat… … which is shown in the right panels in the spectrographic representation using a 45 Hz … Plot of SFM(t) vs. time for different environmental sounds.

  7. Efficient individualization of hearing aid processed sound

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Nielsen, Jakob

    2013-01-01

    Due to the large amount of options offered by the vast number of adjustable parameters in modern digital hearing aids, it is becoming increasingly daunting—even for a fine-tuning professional—to perform parameter fine tuning to satisfactorily meet the preference of the hearing aid user. In addition......, the communication between the fine-tuning professional and the hearing aid user might muddle the task. In the present paper, an interactive system is proposed to ease and speed up fine tuning of hearing aids to suit the preference of the individual user. The system simultaneously makes the user conscious of his own...... preferences while the system itself learns the user’s preference. Since the learning is based on probabilistic modeling concepts, the system handles inconsistent user feedback efficiently. Experiments with hearing impaired subjects show that the system quickly discovers individual preferred hearing...

  8. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design, sound stages the nature of the environment; it brings it to life. As a narrative, it brings us information that we can choose to or perhaps need to react on. In an ecological understanding of hearing, our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration, sound is important and contributes heavily to the aesthetic of the experience.

  9. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A. ; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales....... The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create......-scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview...

  10. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  11. Sound insulation design of modular construction housing

    OpenAIRE

    Yates, D. J.; Hughes, Lawrence; Campbell, A.

    2007-01-01

    This paper provides an insight into the acoustic issues of modular housing using the Verbus System of construction. The paper briefly summarises the history of the development of Verbus modular housing and the acoustic design considerations of the process. Results are presented from two sound insulation tests conducted during the course of the project. The results are discussed in terms of compliance with Approved Document E1 and increased performance standards such as EcoHomes2.

  12. Sound waves in hadronic matter

    Science.gov (United States)

    Wilk, Grzegorz; Włodarczyk, Zbigniew

    2018-01-01

We argue that recent high energy CERN LHC experiments on transverse momenta distributions of produced particles provide us with new, so far unnoticed and not fully appreciated, information on the underlying production processes. To this end we concentrate on the small (but persistent) log-periodic oscillations decorating the observed pT spectra and visible in the measured ratios R = σdata(pT)/σfit(pT). Because such spectra are described by quasi-power-like formulas characterised by two parameters, the power index n and the scale parameter T (usually identified with temperature), the observed log-periodic behaviour of the ratios R can originate either from suitable modifications of n or of T (or of both, but such a possibility is not discussed). In the first case n becomes a complex number, and this can be related to scale invariance in the system; in the second, the scale parameter T itself exhibits log-periodic oscillations, which can be interpreted as the presence of some kind of sound waves forming in the collision system during the collision process, the wave number of which has a so-called self-similar solution of the second kind. Because the first case has already been widely discussed, we concentrate on the second one and on its possible experimental consequences.

  13. Sound segregation via embedded repetition is robust to inattention.

    Science.gov (United States)

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual-task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.

  14. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    Science.gov (United States)

    Edds-Walton, Peggy L

    2016-01-01

Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  15. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

The Lorentz-covariant equations describing propagation of the fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of the fourth sound are obtained. The character of the oscillations in this sound mode is determined.

  16. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs. This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
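The peak-picking step described above has a very compact core: compute a time-frequency representation and zero out everything except the strongest peaks, budgeted per second of signal. The sketch below uses a plain STFT magnitude and a global threshold; the frame sizes and thresholding rule are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def acoustic_sketch(signal, sr, frame=256, hop=128, peaks_per_sec=10):
    """Keep only the strongest time-frequency peaks of an STFT magnitude
    spectrogram -- a simplified 'acoustic sketch'."""
    window = np.hanning(frame)
    frames = np.array([signal[i:i + frame] * window
                       for i in range(0, len(signal) - frame + 1, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1))
    n_keep = max(1, int(peaks_per_sec * len(signal) / sr))
    threshold = np.sort(spec.ravel())[-n_keep]  # n_keep-th largest value
    return np.where(spec >= threshold, spec, 0.0)
```

Resynthesizing a sound from only these retained peaks yields the severely impoverished but often still recognizable stimuli the study tests.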

  17. EUVS Sounding Rocket Payload

    Science.gov (United States)

    Stern, Alan S.

    1996-01-01

    During the first half of this year (CY 1996), the EUVS project began preparations of the EUVS payload for the upcoming NASA sounding rocket flight 36.148CL, slated for launch on July 26, 1996 to observe and record a high-resolution (approx. 2 A FWHM) EUV spectrum of the planet Venus. These preparations were designed to improve the spectral resolution and sensitivity performance of the EUVS payload as well as prepare the payload for this upcoming mission. The following is a list of the EUVS project activities that have taken place since the beginning of this CY: (1) Applied a fresh, new SiC optical coating to our existing 2400 groove/mm grating to boost its reflectivity; (2) modified the Ranicon science detector to boost its detective quantum efficiency with the addition of a repeller grid; (3) constructed a new entrance slit plane to achieve 2 A FWHM spectral resolution; (4) prepared and held the Payload Initiation Conference (PIC) with the assigned NASA support team from Wallops Island for the upcoming 36.148CL flight (PIC held on March 8, 1996; see Attachment A); (5) began wavelength calibration activities of EUVS in the laboratory; (6) made arrangements for travel to WSMR to begin integration activities in preparation for the July 1996 launch; (7) paper detailing our previous EUVS Venus mission (NASA flight 36.117CL) published in Icarus (see Attachment B); and (8) continued data analysis of the previous EUVS mission 36.137CL (Spica occultation flight).

  18. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system comprises signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515
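A marginal spectrum is, in essence, a time-frequency energy distribution integrated over time. Systems of this kind typically derive it from a Hilbert-Huang transform; the sketch below substitutes an STFT power distribution as a stand-in, so it only illustrates the "integrate over time" step, not the paper's feature extractor:

```python
import numpy as np

def marginal_spectrum(signal, frame=256, hop=64):
    """Time-integral of a time-frequency energy distribution
    (STFT power used here as a stand-in for the Hilbert spectrum)."""
    window = np.hanning(frame)
    frames = np.array([signal[i:i + frame] * window
                       for i in range(0, len(signal) - frame + 1, hop)])
    tf_energy = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return tf_energy.sum(axis=0)   # integrate out the time axis
```

Coefficients drawn from such a spectrum would serve as the feature vector for the training and identification stages.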

  19. An investigation into vocal expressions of emotions: the roles of valence, culture, and acoustic factors

    Science.gov (United States)

    Sauter, Disa

This PhD is an investigation of vocal expressions of emotions, mainly focusing on non-verbal sounds such as laughter, cries and sighs. The research examines the roles of categorical and dimensional factors, the contributions of a number of acoustic cues, and the influence of culture. A series of studies established that naive listeners can reliably identify non-verbal vocalisations of positive and negative emotions in forced-choice and rating tasks. Some evidence for underlying dimensions of arousal and valence is found, although each emotion had a discrete expression. The role of acoustic characteristics of the sounds is investigated experimentally and analytically. This work shows that the cues used to identify different emotions vary, although pitch and pitch variation play a central role. The cues used to identify emotions in non-verbal vocalisations differ from the cues used when comprehending speech. An additional set of studies using stimuli consisting of emotional speech demonstrates that these sounds can also be reliably identified, and rely on similar acoustic cues. A series of studies with a pre-literate Namibian tribe shows that non-verbal vocalisations can be recognized across cultures. An fMRI study carried out to investigate the neural processing of non-verbal vocalisations of emotions is presented. The results show activation in pre-motor regions arising from passive listening to non-verbal emotional vocalisations, suggesting neural auditory-motor interactions in the perception of these sounds. In sum, this thesis demonstrates that non-verbal vocalisations of emotions are reliably identifiable tokens of information that belong to discrete categories. These vocalisations are recognisable across vastly different cultures and thus seem to, like facial expressions of emotions, comprise human universals. Listeners rely mainly on pitch and pitch variation to identify emotions in non-verbal vocalisations, which differ from the cues used to comprehend

  20. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ = 1/√(1 − v²/c²), with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
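The quantitative content of the sound-clock thought experiment is the ordinary Lorentz factor with the speed of sound in place of the speed of light. A one-line helper makes this concrete (the 343 m/s default, the speed of sound in air, is an illustrative choice, not a value from the paper):

```python
import math

def sonic_gamma(v, c_sound=343.0):
    """Lorentz factor for 'sonic relativity': the time-dilation /
    length-contraction factor for a sound-clock chain moving at speed v,
    with the speed of sound playing the role of the speed of light."""
    return 1.0 / math.sqrt(1.0 - (v / c_sound) ** 2)
```

A chain moving at 60% of the speed of sound, for example, would be measured as time-dilated by a factor of 1.25, exactly as in special relativity.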

  1. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between adjacent speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  2. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension, it exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  3. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic , S.; Prascevic , R.

    1994-01-01

    In the modern engineering practice, the sound insulation of the partitions is the synthesis of the theory and of the experience acquired in the procedure of the field and of the laboratory measurement. The science and research public treat the sound insulation in the context of the emission and propagation of the acoustic energy in the media with the different acoustics impedance. In this paper, starting from the essence of physical concept of the intensity as the energy vector, the authors g...

  4. Respiratory Constraints in Verbal and Non-verbal Communication

    Directory of Open Access Journals (Sweden)

    Marcin Włodarczak

    2017-05-01

Full Text Available In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFEs), as well as longer vocalizations in terms of parameters of the respiratory cycle, and find little evidence for respiratory planning in feedback production. We also investigate timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.

  5. Breastfeeding duration and non-verbal IQ in children

    NARCIS (Netherlands)

    A. Sajjad (Ayesha); A. Tharner (Anne); J.C. Kiefte-de Jong (Jessica); V.W.V. Jaddoe (Vincent); A. Hofman (Albert); F.C. Verhulst (Frank); O.H. Franco (Oscar); H.W. Tiemeier (Henning); S.J. Roza (Sabine)

    2015-01-01

    Background: Breastfeeding has been related to better cognitive development in children. However, due to methodological challenges, such as confounding, recall bias or insufficient power, the mechanism and nature of the relation remain subject to debate. Methods: We included 3761

  6. Nakama : A companion for non-verbal affective communication

    NARCIS (Netherlands)

    Willemse, C.J.A.M.; Munters, G.M.; Erp, J.B.F. van; Heylen, D.K.J.

    2015-01-01

    We present "Nakama": A communication device that supports affective communication between a child and its - geographically separated - parent. Nakama consists of a control unit at the parent's end and an actuated teddy bear for the child. The bear contains several communication channels, including

  8. Colours as Non-Verbal Signs on Packages

    OpenAIRE

    Kauppinen, Hannele

    2005-01-01

    Colour is an essential aspect of our daily life, and still, it is a neglected issue within marketing research. The main reason for studying colours is to understand the impact of colours on consumer behaviour, and thus, colours should be studied when it comes to branding, advertising, packages, interiors, and the clothes of the employees, for example. This was an exploratory study about the impact of colours on packages. The focus was on low-involvement purchasing, where the consumer puts...

  9. Non-verbal Persuasion and Communication in an Affective Agent

    NARCIS (Netherlands)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk K.J.; Niewiadomski, Radoslaw; Pelachaud, Catherine; Peters, Christopher; Poggi, Isabella; Rehm, Matthias; Cowie, Roddy; Pelachaud, Catherine; Petta, Paolo

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining

  10. The Art of Verbal and Non-verbal Communication

    Science.gov (United States)

    Jukola, Paivi

    A researcher who does not master the art of speech, who does not know how to write about results in the most outstanding and efficient manner, is less likely to be able to persuade investors to fund experiments, to receive support from other researchers, and is less likely to be able to publish the results. In many universities it is common to focus only on the particular subject matter. Less emphasis is placed on learning to manage innovations, to understand the big picture, or to study the basics of corporate finance, strategic management, or patent rights. Scientific writing and debate, teaching 'tutorials', is one of the keys of education in New England Liberal Arts Colleges, Harvard and MIT, and Oxford and Cambridge in the UK; however, tutorials are not commonly used elsewhere. Hands-on education is another key that is similarly often overlooked, either due to lack of resources or simply due to lack of teaching skills. The discussion is based on past teaching and lectures as visiting professor at Williams College (2008-2009) and the Howard University / NASA Marshall Space Center Lunar Base project (2009-2010). The discussion also compares teaching at MIT aero-astro, Aalto University / Helsinki University of Technology-School of Art and Design-School of Economics, Strate College in Paris, and Vienna University of Technology and Hochschule für Angewandte Kunst.

  11. Respiratory Constraints in Verbal and Non-verbal Communication.

    Science.gov (United States)

    Włodarczak, Marcin; Heldner, Mattias

    2017-01-01

    In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFE's) as well as longer vocalizations in terms of parameters of the respiratory cycle and find little evidence for respiratory planning in feedback production. We also investigate timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.
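The timing analysis described in this abstract (where in the exhalation speech, SFEs, and nods begin) can be sketched as a normalized-onset computation. This is only an illustrative sketch, not the authors' code; the function, the exhalation intervals, and the event times below are made-up assumptions.

```python
# Express each event onset as a normalized position within its exhalation:
# 0 = exhalation onset, 1 = exhalation end. Events outside any exhalation
# are simply skipped.
def normalized_onsets(exhalations, events):
    """exhalations: list of (start, end) times in seconds;
    events: list of event onset times in seconds."""
    out = []
    for t in events:
        for start, end in exhalations:
            if start <= t < end:
                out.append((t - start) / (end - start))
                break
    return out

# Illustrative data: speech near exhalation onset, nods late in the exhalation.
exhalations = [(0.0, 2.0), (3.0, 5.5)]
speech_onsets = [0.1, 3.2]
nod_onsets = [1.8, 5.2]

print([round(x, 2) for x in normalized_onsets(exhalations, speech_onsets)])  # → [0.05, 0.08]
print([round(x, 2) for x in normalized_onsets(exhalations, nod_onsets)])     # → [0.9, 0.88]
```

A histogram of such normalized positions per event type would show the pattern the paper reports: speech clustered near 0, nods skewed toward 1.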

  12. Effects of checklist interface on non-verbal crew communications

    Science.gov (United States)

    Segal, Leon D.

    1994-01-01

    The investigation looked at the effects of the spatial layout and functionality of cockpit displays and controls on crew communication. Specifically, the study focused on the intra-cockpit crew interaction, and subsequent task performance, of airline pilots flying different configurations of a new electronic checklist, designed and tested in a high-fidelity simulator at NASA Ames Research Center. The first part of this proposal establishes the theoretical background for the assumptions underlying the research, suggesting that in the context of the interaction between a multi-operator crew and a machine, the design and configuration of the interface will affect interactions between individual operators and the machine, and subsequently, the interaction between operators. In view of the latest trends in cockpit interface design and flight-deck technology, in particular, the centralization of displays and controls, the introduction identifies certain problems associated with these modern designs and suggests specific design issues to which the expected results could be applied. A detailed research program and methodology is outlined and the results are described and discussed. Overall, differences in cockpit design were shown to impact the activity within the cockpit, including interactions between pilots and aircraft and the cooperative interactions between pilots.

  13. An Integrated Approach to Motion and Sound

    National Research Council Canada - National Science Library

    Hahn, James K; Geigel, Joe; Lee, Jong W; Gritz, Larry; Takala, Tapio; Mishra, Suneil

    1995-01-01

    Until recently, sound has been given little attention in computer graphics and related domains of computer animation and virtual environments, although sounds which are properly synchronized to motion...

  14. Temporal Organization of Sound Information in Auditory Memory

    OpenAIRE

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed ...

  15. Understanding Animal Detection of Precursor Earthquake Sounds.

    Science.gov (United States)

    Garstang, Michael; Kelley, Michael C

    2017-08-31

    We use recent research to provide an explanation of how animals might detect earthquakes before they occur. While the intrinsic value of such warnings is immense, we show that the complexity of the process may result in inconsistent responses of animals to the possible precursor signal. Using the results of our research, we describe a logical but complex sequence of geophysical events triggered by precursor earthquake crustal movements that ultimately result in a sound signal detectable by animals. The sound heard by animals occurs only when metal or other surfaces (glass) respond to vibrations produced by electric currents induced by distortions of the earth's electric fields caused by the crustal movements. A combination of existing measurement systems combined with more careful monitoring of animal response could nevertheless be of value, particularly in remote locations.

  16. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL) and furthermore applies post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement...
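The short-sound filtering idea in this abstract can be sketched as a duration gate on frame-wise VAD decisions: only react to a localized source once voice activity has persisted for a minimum number of frames. The function name, frame representation, and threshold below are illustrative assumptions, not details from the paper.

```python
# Suppress short sound events (door slams, phone rings) by requiring a
# minimum run of consecutive voiced frames before directing attention.
def filter_short_sounds(vad_frames, ssl_angles, min_frames=10):
    """Yield (frame_index, angle) only during sustained voiced segments.

    vad_frames : list of bools, True where the VAD detects activity
    ssl_angles : list of per-frame source angles (degrees) from SSL
    min_frames : consecutive voiced frames required before reacting
    """
    run = 0
    for i, (voiced, angle) in enumerate(zip(vad_frames, ssl_angles)):
        run = run + 1 if voiced else 0
        if run >= min_frames:  # short bursts never reach the threshold
            yield i, angle

# A 3-frame burst is suppressed; the 12-frame speech segment passes.
vad = [False] * 5 + [True] * 3 + [False] * 4 + [True] * 12
ang = [0.0] * len(vad)
hits = list(filter_short_sounds(vad, ang, min_frames=10))
print(hits)  # → [(21, 0.0), (22, 0.0), (23, 0.0)]
```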

  17. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations, and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac

  18. Decoding sound level in the marmoset primary auditory cortex.

    Science.gov (United States)

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-10-01

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than an alternative to, that of monotonic neurons. NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts. Copyright © 2017 the American Physiological Society.
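The population-decoding idea in this abstract can be illustrated with a toy simulation: a mixed population of monotonic (sigmoidal) and nonmonotonic (level-tuned) rate-level functions, with sound level decoded from noisy responses by nearest-template matching. This is a simplified sketch under assumed tuning shapes and noise, not the authors' simulation based on real A1 responses.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(0, 81, 10)  # candidate sound levels in dB SPL

def monotonic(level, thresholds):      # rate grows sigmoidally with level
    return 5 + 40 / (1 + np.exp(-(level - thresholds) / 8))

def nonmonotonic(level, best_levels):  # level-tuned ("nonmonotonic") response
    return 5 + 40 * np.exp(-0.5 * ((level - best_levels) / 12) ** 2)

# Mixed population: 10 monotonic and 10 level-tuned neurons with
# randomly drawn thresholds / best levels (illustrative values).
thresholds = rng.uniform(20, 60, 10)
best_levels = rng.uniform(10, 70, 10)

def population_rates(level):
    return np.concatenate([monotonic(level, thresholds),
                           nonmonotonic(level, best_levels)])

# Noise-free template response for each candidate level.
templates = np.stack([population_rates(l) for l in levels])

def decode(level):
    """Decode a presented level from a noisy (Poisson) population response."""
    resp = rng.poisson(population_rates(level))
    return levels[np.argmin(((templates - resp) ** 2).sum(axis=1))]

print(decode(40))  # typically recovers 40 dB to within one 10 dB step
```

Repeating the simulation with purely monotonic or purely nonmonotonic populations (and comparing decoding error) mirrors the comparison the study performs.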

  19. A framework for automatic heart sound analysis without segmentation

    Directory of Open Access Journals (Sweden)

    Tungpimolrut Kanokvate

    2011-02-01

    Full Text Available Abstract Background: A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Method: An equal number of cardiac cycles was extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. Result: The proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise up to 0.3 s in duration. Conclusion: The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, then further evaluating the method on this new training set.
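The cycle-length step of the pipeline described above (estimating the cardiac cycle from the autocorrelation of an envelope, with no labeling of individual heart sounds) can be sketched as follows. This is an illustrative reconstruction on a synthetic envelope; the sampling rate, heart-rate search range, and test signal are assumptions, not values from the paper.

```python
# Estimate the cardiac cycle length as the dominant autocorrelation peak
# of the heart sound envelope within a physiologically plausible lag range.
import numpy as np

def estimate_cycle_length(envelope, fs, min_bpm=40, max_bpm=200):
    """Return the estimated cardiac cycle length in seconds."""
    env = envelope - envelope.mean()
    # Full autocorrelation; keep non-negative lags only.
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Search only lags corresponding to plausible heart rates.
    lo = int(fs * 60.0 / max_bpm)
    hi = int(fs * 60.0 / min_bpm)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs

# Synthetic periodic envelope with a 0.8 s cycle (75 bpm).
fs = 1000
t = np.arange(0, 5, 1 / fs)
envelope = np.maximum(0, np.sin(2 * np.pi * t / 0.8)) ** 4
print(round(estimate_cycle_length(envelope, fs), 2))  # → 0.8
```

With the cycle length known, equal numbers of cycles can be cut from recordings with different heart rates, which is what lets the framework skip FHS segmentation.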

  20. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and male monkeys outperform female monkeys overall. Experiment 2, controlling for the number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, for species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.