WorldWideScience

Sample records for non-verbal sound processing

  1. Prosody Predicts Contest Outcome in Non-Verbal Dialogs.

    Science.gov (United States)

    Dreiss, Amélie N; Chatelain, Philippe G; Roulin, Alexandre; Richner, Heinz

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.

  2. Anatomical Correlates of Non-Verbal Perception in Dementia Patients

    Directory of Open Access Journals (Sweden)

    Pin-Hsuan Lin

    2016-08-01

    Full Text Available Purpose: Patients with dementia who have dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis for highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer’s dementia (AD), 15 with behavior variant fronto-temporal dementia (bv-FTD) and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptual tasks were included to test the ability to compare pitch changes and scale-violated melodies, and to name environmental sounds and associate them with pictures. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to correlate the volumetric measures with task scores. Results: The SD group scored the lowest of the three groups on the pitch and scale-violated melody tasks. In the environmental sound test, the SD group was also impaired in naming and in associating sounds with pictures. The AD and bv-FTD groups showed no differences from the controls on any test. VBM correlation with task scores showed that atrophy in the right supra-marginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody and environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits.
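
    The VBM analysis described in this record reduces, at its core, to a voxelwise association between gray-matter volume and a behavioural score across subjects. The sketch below is a minimal, illustrative stand-in for that step only (random placeholder arrays and a naive Bonferroni threshold instead of the random-field or FDR corrections a real VBM pipeline would use); it is not the authors' pipeline.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical inputs: gray-matter volume maps flattened to subjects x voxels,
    # plus one behavioural score per subject (e.g., scale-violation detection accuracy).
    rng = np.random.default_rng(0)
    gm = rng.random((45, 10000))      # placeholder data for 45 subjects
    scores = rng.random(45)           # placeholder task scores

    # Voxelwise Pearson correlation between gray-matter volume and task score.
    n_vox = gm.shape[1]
    r = np.empty(n_vox)
    p = np.empty(n_vox)
    for v in range(n_vox):
        r[v], p[v] = stats.pearsonr(gm[:, v], scores)

    # Naive multiple-comparison control, for illustration only.
    significant = p < 0.05 / n_vox
    print(f"{significant.sum()} voxels survive Bonferroni correction")
    ```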

  3. Non-verbal communication in severe aphasia: influence of aphasia, apraxia, or semantic processing?

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2012-09-01

    Patients suffering from severe aphasia have to rely on non-verbal means of communication to convey a message. However, to date it is not clear which patients are able to do so. Clinical experience indicates that some patients use non-verbal communication strategies like gesturing very efficiently whereas others fail to transmit semantic content by non-verbal means. Concerns have been expressed that limb apraxia would affect the production of communicative gestures. Research investigating if and how apraxia influences the production of communicative gestures has led to contradictory results. The purpose of this study was to investigate the impact of limb apraxia on spontaneous gesturing. Further, linguistic and non-verbal semantic processing abilities were explored as potential factors that might influence non-verbal expression in aphasic patients. Twenty-four aphasic patients with highly limited verbal output were asked to retell short video-clips. The narrations were videotaped. Gestural communication was analyzed in two ways. In the first part of the study, we used a form-based approach: physiological and kinetic aspects of hand movements were transcribed with a notation system for sign languages, and the formal diversity of the hand gestures was determined as an indicator of the potential richness of the transmitted information. In the second part of the study, the comprehensibility of the patients' gestural communication was evaluated by naive raters. The raters were familiarized with the model video-clips and shown the recordings of the patients' retellings without sound. They were asked to indicate, for each narration, which story was being told and which aspects of the stories they recognized. The results indicate that non-verbal faculties are the most important prerequisites for the production of hand gestures. Whereas results on standardized aphasia testing did not correlate with any gestural indices, non-verbal semantic processing abilities predicted the formal diversity…

  4. Getting the Message Across; Non-Verbal Communication in the Classroom.

    Science.gov (United States)

    Levy, Jack

    This handbook presents selected theories, activities, and resources which can be utilized by educators in the area of non-verbal communication. Particular attention is given to the use of non-verbal communication in a cross-cultural context. Categories of non-verbal communication such as proxemics, haptics, kinesics, smiling, sound, clothing, and…

  5. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    Science.gov (United States)

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  6. A Meta-study of musicians' non-verbal interaction

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer; Marchetti, Emanuela

    2010-01-01

    …interruptions. Hence, despite the fact that the skill to engage in non-verbal interaction is described as tacit knowledge, it is fundamental for both musicians and teachers (Davidson and Good 2002). Typically observed non-verbal cues include, for example, physical gestures, modulations of sound, and steady eye contact...

  7. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.
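
    The partial mediation reported here is, conceptually, a test of whether the indirect path from musical ability through pitch/spectral sensitivity to non-native discrimination remains non-zero once covariates are partialled out. A minimal bootstrap sketch of that logic follows; the simulated data, variable names and covariate set are hypothetical stand-ins, not the authors' dataset or analysis code.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 118
    # Placeholder variables standing in for the study's measures (hypothetical data).
    music = rng.normal(size=n)                               # musical ability (e.g., AMMA score)
    pitch = 0.5 * music + rng.normal(size=n)                 # mediator: pitch/spectral sensitivity
    vowel = 0.4 * pitch + 0.1 * music + rng.normal(size=n)   # outcome: vowel discrimination
    covars = rng.normal(size=(n, 3))                         # e.g., non-verbal IQ, experience, sex

    def first_coef(y, predictors):
        """OLS coefficient of the first predictor, controlling for the rest."""
        X = sm.add_constant(np.column_stack(predictors))
        return sm.OLS(y, X).fit().params[1]

    # Indirect effect = a (music -> mediator) * b (mediator -> outcome, controlling for music).
    point = first_coef(pitch, [music, covars]) * first_coef(vowel, [pitch, music, covars])
    boot = []
    for _ in range(2000):
        i = rng.integers(0, n, n)   # resample subjects with replacement
        boot.append(first_coef(pitch[i], [music[i], covars[i]]) *
                    first_coef(vowel[i], [pitch[i], music[i], covars[i]]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
    ```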

  8. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, that evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL were found to present with similar deficits in pitch retention, and in identification and short-term memorisation of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to improve seizure control significantly without producing additional auditory deficits.

  9. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  10. Serial recall of rhythms and verbal sequences: Impacts of concurrent tasks and irrelevant sound.

    Science.gov (United States)

    Hall, Debbora; Gathercole, Susan E

    2011-08-01

    Rhythmic grouping enhances verbal serial recall, yet very little is known about memory for rhythmic patterns. The aim of this study was to compare the cognitive processes supporting memory for rhythmic and verbal sequences using a range of concurrent tasks and irrelevant sounds. In Experiment 1, both concurrent articulation and paced finger tapping during presentation and during a retention interval impaired rhythm recall, while letter recall was only impaired by concurrent articulation. In Experiments 2 and 3, irrelevant sound consisted of irrelevant speech or tones, changing-state or steady-state sound, and syncopated or paced sound during presentation and during a retention interval. Irrelevant speech was more damaging to rhythm and letter recall than was irrelevant tone sound, but there was no effect of changing state on rhythm recall, while letter recall accuracy was disrupted by changing-state sound. Pacing of sound did not consistently affect either rhythm or letter recall. There are similarities in the way speech and rhythms are processed that appear to extend beyond reliance on temporal coding mechanisms involved in serial-order recall.

  11. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    Science.gov (United States)

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).

  12. Dissociation of neural correlates of verbal and non-verbal visual working memory with different delays

    Directory of Open Access Journals (Sweden)

    Endestad Tor

    2007-10-01

    Full Text Available Abstract Background Dorsolateral prefrontal cortex (DLPFC), posterior parietal cortex, and regions in the occipital cortex have been identified as neural sites for visual working memory (WM). The exact involvement of the DLPFC in verbal and non-verbal working memory processes, and how these processes depend on the time-span for retention, remains disputed. Methods We used functional MRI to explore the neural correlates of the delayed discrimination of Gabor stimuli differing in orientation. Twelve subjects were instructed to code the relative orientation either verbally or non-verbally with memory delays of short (2 s) or long (8 s) duration. Results Blood-oxygen level dependent (BOLD) 3-Tesla fMRI revealed significantly more activity for the short verbal condition compared to the short non-verbal condition in bilateral superior temporal gyrus, insula and supramarginal gyrus. Activity in the long verbal condition was greater than in the long non-verbal condition in left language-associated areas (STG) and bilateral posterior parietal areas, including precuneus. Interestingly, right DLPFC and bilateral superior frontal gyrus were more active in the non-verbal long delay condition than in the long verbal condition. Conclusion The results point to a dissociation between the cortical sites involved in verbal and non-verbal WM for long and short delays. Right DLPFC seems to be engaged in non-verbal WM tasks especially for long delays. Furthermore, the results indicate that even slightly different memory maintenance intervals engage largely differing networks and that this novel finding may explain differing results in previous verbal/non-verbal WM studies.

  13. Motor system contributions to verbal and non-verbal working memory

    Directory of Open Access Journals (Sweden)

    Diana A Liao

    2014-09-01

    Full Text Available Working memory (WM) involves the ability to maintain and manipulate information held in mind. Neuroimaging studies have shown that secondary motor areas activate during WM for verbal content (e.g., words or letters), in the absence of primary motor area activation. This activation pattern may reflect an inner speech mechanism supporting online phonological rehearsal. Here, we examined the causal relationship between motor system activity and WM processing by using transcranial magnetic stimulation (TMS) to manipulate motor system activity during WM rehearsal. We tested WM performance for verbalizable (words and pseudowords) and non-verbalizable (Chinese characters) visual information. We predicted that disruption of motor circuits would specifically affect WM processing of verbalizable information. We found that TMS targeting motor cortex slowed response times on verbal WM trials with high (pseudoword) vs. low (real word) phonological load. However, non-verbal WM trials were also significantly slowed with motor TMS. WM performance was unaffected by sham stimulation or TMS over visual cortex. Self-reported use of motor strategy predicted the degree of motor stimulation disruption on WM performance. These results provide evidence of the motor system’s contributions to verbal and non-verbal WM processing. We speculate that the motor system supports WM by creating motor traces consistent with the type of information being rehearsed during maintenance.

  14. An executable model of the interaction between verbal and non-verbal communication.

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations. The generic

  15. An Executable Model of the Interaction between Verbal and Non-Verbal Communication

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.; Dignum, F.; Greaves, M.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations. The generic

  16. Drama to promote non-verbal communication skills.

    Science.gov (United States)

    Kelly, Martina; Nixon, Lara; Broadfoot, Kirsten; Hofmeister, Marianna; Dornan, Tim

    2018-05-23

    Non-verbal communication skills (NVCS) help physicians to deliver relationship-centred care, and the effective use of NVCS is associated with improved patient satisfaction, better use of health services and high-quality clinical care. In contrast to verbal communication skills, NVCS training is underdeveloped in communication curricula for the health care professions. One of the challenges in teaching NVCS is their tacit nature. In this study, we evaluated drama exercises to raise awareness of NVCS by making familiar activities 'strange'. Workshops based on drama exercises were designed to heighten an awareness of sight, hearing, touch and proxemics in non-verbal communication. These were conducted at eight medical education conferences, held between 2014 and 2016, and were open to all conference participants. Workshops were evaluated by recording narrative data generated during the workshops and an open-ended questionnaire following the workshop. Data were analysed qualitatively, using thematic analysis. RESULTS: One hundred and twelve participants attended workshops, 73 (65%) of whom completed an evaluation form: 56 physicians, nine medical students and eight non-physician faculty staff. Two themes were described: an increased awareness of NVCS and the importance of NVCS in relationship building. Drama exercises enabled participants to experience NVCS, such as sight, sound, proxemics and touch, in novel ways. Participants reflected on how NVCS contribute to developing trust and building relationships in clinical practice. Drama-based exercises elucidate the tacit nature of NVCS and require further evaluation in formal educational settings. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  17. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians’ musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19–30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  18. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

    In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.
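
    For readers unfamiliar with the oddball measures used in these two records: P3a/P3b amplitudes are obtained by averaging stimulus-locked EEG epochs and measuring the mean voltage in a latency window, separately for rare and frequent stimuli. The toy sketch below illustrates only that averaging step; the sampling rate, latency window and simulated data are illustrative assumptions, not parameters from the study.

    ```python
    import numpy as np

    # Hypothetical epoched EEG from an oddball paradigm: trials x channels x samples,
    # 500 Hz sampling with a 200 ms pre-stimulus baseline (placeholder data).
    rng = np.random.default_rng(0)
    fs, baseline_s = 500, 0.2
    epochs = rng.standard_normal((300, 32, 500))
    is_rare = rng.random(300) < 0.2            # ~20% rare (target) stimuli

    def mean_amplitude(data, t_start, t_end):
        """Mean amplitude of the trial-averaged ERP in a latency window (seconds post-stimulus)."""
        erp = data.mean(axis=0)                # average over trials -> channels x samples
        i0 = int((baseline_s + t_start) * fs)
        i1 = int((baseline_s + t_end) * fs)
        return erp[:, i0:i1].mean(axis=1)      # one value per channel

    # A P3-like window (roughly 300-600 ms); the rare-minus-frequent difference is the effect of interest.
    p3_rare = mean_amplitude(epochs[is_rare], 0.30, 0.60)
    p3_freq = mean_amplitude(epochs[~is_rare], 0.30, 0.60)
    print("P3 difference (rare - frequent) per channel:", np.round(p3_rare - p3_freq, 2))
    ```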

  19. Evaluating verbal and non-verbal communication skills, in an ethnogeriatric OSCE.

    Science.gov (United States)

    Collins, Lauren G; Schrimmer, Anne; Diamond, James; Burke, Janice

    2011-05-01

    Communication during medical interviews plays a large role in patient adherence, satisfaction with care, and health outcomes. Both verbal and non-verbal communication (NVC) skills are central to the development of rapport between patients and healthcare professionals. The purpose of this study was to assess the role of non-verbal and verbal communication skills on evaluations by standardized patients during an ethnogeriatric Objective Structured Clinical Examination (OSCE). Interviews from 19 medical students, residents, and fellows in an ethnogeriatric OSCE were analyzed. Each interview was videotaped and evaluated on a 14-item verbal and an 8-item non-verbal communication checklist. The relationship of verbal and non-verbal communication skills to interview evaluations by standardized patients was examined using correlational analyses. Maintaining adequate facial expression (FE), using affirmative gestures (AG), and limiting both unpurposive movements (UM) and hand gestures (HG) had a significant positive effect on perception of interview quality during this OSCE. Non-verbal communication skills played a role in perception of overall interview quality as well as perception of culturally competent communication. Incorporating formative and summative evaluation of both verbal and non-verbal communication skills may be a critical component of curricular innovations in ethnogeriatrics, such as the OSCE. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  20. [Non-verbal communication in Alzheimer's disease].

    Science.gov (United States)

    Schiaratura, Loris Tamara

    2008-09-01

    This review underlines the importance of non-verbal communication in Alzheimer's disease. A social psychological perspective of communication is privileged. Non-verbal behaviors such as looks, head nods, hand gestures, body posture or facial expression provide a lot of information about interpersonal attitudes, behavioral intentions, and emotional experiences. Therefore they play an important role in the regulation of interaction between individuals. Non-verbal communication is effective in Alzheimer's disease even in the late stages. Patients still produce non-verbal signals and are responsive to others. Nevertheless, few studies have been devoted to the social factors influencing the non-verbal exchange. Misidentification and misinterpretation of behaviors may have negative consequences for the patients. Thus, improving the comprehension of and the response to non-verbal behavior would increase first the quality of the interaction, then the physical and psychological well-being of patients and that of caregivers. The role of non-verbal behavior in social interactions should be approached from an integrative and functional point of view.

  1. Non-verbal numerical cognition: from reals to integers.

    Science.gov (United States)

    Gallistel; Gelman

    2000-02-01

    Data on numerical processing by verbal (human) and non-verbal (animal and human) subjects are integrated by the hypothesis that a non-verbal counting process represents discrete (countable) quantities by means of magnitudes with scalar variability. These appear to be identical to the magnitudes that represent continuous (uncountable) quantities such as duration. The magnitudes representing countable quantity are generated by a discrete incrementing process, which defines next magnitudes and yields a discrete ordering. In the case of continuous quantities, the continuous accumulation process does not define next magnitudes, so the ordering is also continuous ('dense'). The magnitudes representing both countable and uncountable quantity are arithmetically combined in, for example, the computation of the income to be expected from a foraging patch. Thus, on the hypothesis presented here, the primitive machinery for arithmetic processing works with real numbers (magnitudes).

  2. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Full Text Available Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  3. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Science.gov (United States)

    Fargier, Raphaël; Laganaro, Marina

    2016-01-01

    Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  4. Interactive use of communication by verbal and non-verbal autistic children.

    Science.gov (United States)

    Amato, Cibelle Albuquerque de la Higuera; Fernandes, Fernanda Dreux Miranda

    2010-01-01

    Communication of autistic children. To assess the communication functionality of verbal and non-verbal children of the autistic spectrum and to identify possible associations amongst the groups. Subjects were 20 children of the autistic spectrum divided into two groups: V, with 10 verbal children, and NV, with 10 non-verbal children, with ages varying between 2y10m and 10y6m. All subjects were video recorded during 30 minutes of spontaneous interaction with their mothers. The samples were analyzed according to the functional communicative profile, and comparisons within and between groups were conducted. Data referring to the occupation of communicative space suggest that there is an even balance between each child and his mother. The number of communicative acts per minute shows a clear difference between verbal and non-verbal children. Both verbal and non-verbal children use mostly the gestural communicative means in their interactions. Data about the use of interpersonal communicative functions point to the autistic children's great interactive impairment. The characterization of the functional communicative profile proposed in this study confirmed the autistic children's difficulties with interpersonal communication and that these difficulties do not depend on the preferred communicative means.

  5. Vocal Imitations of Non-Vocal Sounds

    Science.gov (United States)

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long…

  6. Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized

    Science.gov (United States)

    Golubock, Jason L.; Janata, Petr

    2013-01-01

    Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…

  7. Comparative Analysis of Verbal and Non-Verbal Mental Activity Components Regarding the Young People with Different Intellectual Levels

    Directory of Open Access Journals (Sweden)

    Y. M. Revenko

    2013-01-01

    Full Text Available The paper maintains that for developing educational programs and technologies adequate to the different stages of students' growth and maturity, there is a need to explore the natural determinants of intellectual development as well as the students' individual qualities affecting the cognition process. The authors investigate the differences in manifestations of intellect with reference to gender, and analyze the correlations between verbal and non-verbal components in boys' and girls' mental activity depending on their general intellectual potential. The research, carried out at the Siberian State Automobile Road Academy and focused on first-year students, demonstrates the absence of gender differences in students' general intellect levels; there are, however, some other regularities: male students of different intellectual levels show the same correlation coefficient between verbal and non-verbal intellect, while female students show this correlation only at the high intellect level. In conclusion, the authors emphasize the need for an integral approach to raising students' mental abilities, considering the close interrelation between the development of verbal and non-verbal components. Teaching materials should stimulate different mental qualities by differentiating the educational process to develop students' individual abilities.

  8. The role of interaction of verbal and non-verbal means of communication in different types of discourse

    OpenAIRE

    Orlova M. А.

    2010-01-01

    Communication relies on verbal and non-verbal interaction. To be most effective, group members need to improve verbal and non-verbal communication. Non-verbal communication fulfills functions within groups that are sometimes difficult to communicate verbally. But interpreting non-verbal messages requires a great deal of skill because multiple meanings abound in these messages.

  9. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    Science.gov (United States)

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.
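
    The hierarchical regressions mentioned above amount to asking whether group membership (dyslexia vs. control) explains variance in a fluency score beyond what IQ already explains, i.e. comparing R-squared before and after the group term enters. A compact sketch of that incremental comparison, on simulated placeholder data with hypothetical variable names:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 60
    # Placeholder predictors: IQ entered at step 1, group (1 = dyslexia, 0 = control) at step 2.
    iq = rng.normal(100, 15, n)
    dyslexia = rng.integers(0, 2, n)
    phonemic_fluency = 30 + 0.1 * iq - 5 * dyslexia + rng.normal(0, 5, n)

    step1 = sm.OLS(phonemic_fluency, sm.add_constant(iq)).fit()
    step2 = sm.OLS(phonemic_fluency,
                   sm.add_constant(np.column_stack([iq, dyslexia]))).fit()

    # Does dyslexia status add explanatory power once IQ is controlled?
    print(f"R2 step 1 (IQ only)  = {step1.rsquared:.3f}")
    print(f"R2 step 2 (+ group)  = {step2.rsquared:.3f}")
    print(f"delta R2             = {step2.rsquared - step1.rsquared:.3f}")
    print("F-test of increment:", step2.compare_f_test(step1))
    ```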

  10. Effects of proactive interference on non-verbal working memory.

    Science.gov (United States)

    Cyr, Marilyn; Nee, Derek E; Nelson, Eric; Senger, Thea; Jonides, John; Malapani, Chara

    2017-02-01

    Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using both objects with high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., low-nameability condition). We also found that the addition of distinctive features (color, verbal label) increased performance in rejecting intrusion probes, most likely through an increase in discriminability between content-context bindings in WM.

  11. Incongruence between Verbal and Non-Verbal Information Enhances the Late Positive Potential.

    Science.gov (United States)

    Morioka, Shu; Osumi, Michihiro; Shiotani, Mayu; Nobusako, Satoshi; Maeoka, Hiroshi; Okada, Yohei; Hiyamizu, Makoto; Matsuo, Atsushi

    2016-01-01

    Smooth social communication consists of both verbal and non-verbal information. However, when verbal and non-verbal information are incongruent, it is not clear how observers judge the trustworthiness of the person presenting the incongruence, nor which brain activities accompany that judgment. In the present study, we attempted to identify the impact of incongruence between verbal information and facial expression on judged trustworthiness and on brain activity using event-related potentials (ERP). Combinations of verbal information [positive/negative] and facial expressions [smile/angry] were presented randomly on a computer screen to 17 healthy volunteers. The trustworthiness of the presented facial expression was evaluated by the amount of donation offered by the observer to the person depicted on the computer screen. In addition, the time required to judge trustworthiness was recorded for each trial. Using electroencephalography, ERP were obtained by averaging the wave patterns recorded while the participants judged trustworthiness. The amount of donation offered was significantly lower when the verbal information and facial expression were incongruent, particularly for [negative × smile]. The amplitude of the early posterior negativity (EPN) at the temporal lobe showed no significant difference between conditions. However, the amplitude of the late positive potential (LPP) at the parietal electrodes was higher for the incongruent condition [negative × smile] than for the congruent condition [positive × smile]. These results suggest that the LPP amplitude observed over the parietal cortex is involved in the processing of incongruence between verbal information and facial expression.

  12. The Effects of Verbal and Non-Verbal Features on the Reception of DRTV Commercials

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2016-12-01

    Full Text Available Analyses of consumer response are important for successful advertising as they help advertisers to find new, original and successful ways of persuasion. Successful advertisements have to boost the product’s benefits but they also have to appeal to consumers’ emotions. In TV advertisements, this is done by means of verbal and non-verbal strategies. The paper presents the results of an empirical investigation whose purpose was to examine the viewers’ emotional responses to a DRTV commercial induced by different verbal and non-verbal features, the amount of credibility and persuasiveness of the commercial and its general acceptability. Our findings indicate that (1) an overload of the same verbal and non-verbal information decreases persuasion; and (2) highly marked prosodic delivery is either exaggerated or funny, while the speaker is perceived as annoying.

  13. Interpersonal Interactions in Instrumental Lessons: Teacher/Student Verbal and Non-Verbal Behaviours

    Science.gov (United States)

    Zhukov, Katie

    2013-01-01

    This study examined verbal and non-verbal teacher/student interpersonal interactions in higher education instrumental music lessons. Twenty-four lessons were videotaped and teacher/student behaviours were analysed using a researcher-designed instrument. The findings indicate a predominance of student and teacher joking among the verbal behaviours with…

  14. Habilidades de praxia verbal e não-verbal em indivíduos gagos Verbal and non-verbal praxic abilities in stutterers

    Directory of Open Access Journals (Sweden)

    Natália Casagrande Brabo

    2009-12-01

    Full Text Available PURPOSE: to characterize verbal and non-verbal praxic abilities in stutterers. METHODS: 40 male and female individuals aged 18 years or older took part in the study: 20 adult stutterers and 20 without communication complaints. For the assessment of verbal and non-verbal praxis, participants were administered the Protocol for the Evaluation of Verbal and Non-verbal Apraxia (Martins and Ortiz, 2004). RESULTS: regarding verbal praxic abilities, there was a statistically significant difference in the number of typical and atypical disfluencies presented by the groups studied. As for the typology of the disfluencies, among the typical disfluencies a statistically significant difference between the groups was observed only for phrase repetition, whereas among the atypical disfluencies there were statistically significant differences for blocks, syllable repetitions and prolongations. Regarding non-verbal praxic abilities, no statistically significant differences were observed between the individuals studied in the execution of lip, tongue and jaw movements, performed in isolation and in sequence. CONCLUSION: regarding verbal praxic abilities, stutterers showed a higher frequency of speech disruptions, both typical and atypical disfluencies, when compared with the control group. In the execution of isolated and sequential praxic movements, i.e. in non-verbal praxic abilities, stutterers did not differ from fluent speakers, which does not confirm the hypothesis that the early onset of stuttering could compromise non-verbal praxic abilities.

  15. Parts of Speech in Non-typical Function: (A)symmetrical Encoding of Non-verbal Predicates in Erzya

    Directory of Open Access Journals (Sweden)

    Rigina Turunen

    2011-01-01

    Full Text Available Erzya non-verbal conjugation refers to symmetric paradigms in which non-verbal predicates behave morphosyntactically in a similar way to verbal predicates. Notably, though, non-verbal conjugational paradigms are asymmetric, which is seen as an outcome of paradigmatic neutralisation in less frequent/less typical contexts. For non-verbal predicates it is not obligatory to display the same amount of behavioural potential as it is for verbal predicates, and the lexical class of non-verbal predicate operates in such a way that adjectival predicates are more likely to be conjugated than nominals. Further, besides symmetric paradigms and constructions, in Erzya there are non-verbal predicate constructions which display a more overt structural encoding than do verbal ones, namely, copula constructions. Complexity in the domain of non-verbal predication in Erzya decreases the symmetry of the paradigms. Complexity increases in asymmetric constructions, as well as in paradigmatic neutralisation when non-verbal predicates cannot be inflected in all the tenses and moods occurring in verbal predication. The results would be the reverse if we were to measure complexity in terms of the morphological structure. The asymmetric features in non-verbal predication are motivated language-externally, because non-verbal predicates refer to states and occur less frequently as predicates than verbal categories. The symmetry of the paradigms and constructions is motivated language-internally: a grammatical system with fewer rules is economical.

  16. Verbal and Non-Verbal Communication and Coordination in Mission Control

    Science.gov (United States)

    Vinkhuyzen, Erik; Norvig, Peter (Technical Monitor)

    1998-01-01

    In this talk I will present some video-materials gathered in Mission Control during simulations. The focus of the presentation will be on verbal and non-verbal communication between the officers in the front and backroom, especially the practices that have evolved around a peculiar communications technology called voice loops.

  17. The impact of the teachers' non-verbal communication on success in teaching.

    Science.gov (United States)

    Bambaeeroo, Fatemeh; Shokrpour, Nasrin

    2017-04-01

    Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers' non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and the teachers' use of non-verbal communication and its impact on success in teaching. Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. The results of this review revealed that there was a strong relationship among the quality, amount and the method of using non-verbal communication by teachers while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students' learning and academic success. The teachers' attention to the students' non-verbal reactions and arranging the syllabus considering the students' mood and readiness have been emphasized in the studies reviewed. It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is torn between two contradictory verbal and non-verbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay more attention to non-verbal than verbal messages, because non-verbal…

  18. Verbal and non-verbal behaviour and patient perception of communication in primary care: an observational study.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Gashi, Shkelzen; Bikker, Annemieke; Mercer, Stewart

    2015-06-01

    Few studies have assessed the importance of a broad range of verbal and non-verbal consultation behaviours. To explore the relationship of observer ratings of behaviours in videotaped consultations with patients' perceptions. Observational study in general practices close to Southampton, Southern England. Verbal and non-verbal behaviour was rated by independent observers blind to outcome. Patients completed the Medical Interview Satisfaction Scale (MISS; primary outcome) and questionnaires addressing other communication domains. In total, 275/360 consultations from 25 GPs had useable videotapes. Higher MISS scores were associated with slight forward lean (a 0.02 increase for each degree of lean, 95% confidence interval [CI] = 0.002 to 0.03), the number of gestures (0.08, 95% CI = 0.01 to 0.15), 'back-channelling' (for example, saying 'mmm') (0.11, 95% CI = 0.02 to 0.2), and social talk (0.29, 95% CI = 0.4 to 0.54). Starting the consultation with professional coolness ('aloof') was helpful and optimism unhelpful. Finishing with non-verbal 'cut-offs' (for example, looking away), being professionally cool ('aloof'), or patronising ('infantilising') resulted in poorer ratings. Physical contact was also important, but not traditional verbal communication. These exploratory results require confirmation, but suggest that patients may be responding to several non-verbal behaviours and non-specific verbal behaviours, such as social talk and back-channelling, more than traditional verbal behaviours. A changing consultation dynamic may also help, from professional 'coolness' at the beginning of the consultation to becoming warmer and avoiding non-verbal cut-offs at the end. © British Journal of General Practice 2015.

  19. The impact of the teachers’ non-verbal communication on success in teaching

    Directory of Open Access Journals (Sweden)

    FATEMEH BAMBAEEROO

    2017-04-01

    Full Text Available Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and also its impact on success in teaching. Methods: Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed that there was a strong relationship among the quality, amount and the method of using non-verbal communication by teachers while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students’ learning and academic success. The teachers’ attention to the students’ non-verbal reactions and arranging the syllabus considering the students’ mood and readiness have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is between two contradictory verbal and nonverbal messages, logic dictates that we push him toward the non-verbal message

  20. The impact of the teachers’ non-verbal communication on success in teaching

    Science.gov (United States)

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and also its impact on success in teaching. Methods: Considering the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed that there was a strong relationship among the quality, amount and the method of using non-verbal communication by teachers while teaching. Based on the findings of the studies reviewed, it was found that the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Under non-verbal communication, some other patterns were used. For example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures all have been effective in students’ learning and academic success. The teachers’ attention to the students’ non-verbal reactions and arranging the syllabus considering the students’ mood and readiness have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is between two contradictory verbal and nonverbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay

  1. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden-Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS such as filled pauses will require future research.
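
    To make the classification step concrete, the sketch below is an illustration only, not the authors' system: the feature set, Gaussian class scores and transition probabilities are all assumptions introduced here. It scores each audio frame under two crude class models built from log-energy and zero-crossing rate, then smooths the frame labels with a two-state Viterbi pass.

        import numpy as np

        def frame_features(signal, frame_len=400, hop=160):
            """Log-energy and zero-crossing rate per frame (hypothetical feature set)."""
            feats = []
            for start in range(0, len(signal) - frame_len, hop):
                frame = signal[start:start + frame_len]
                log_energy = np.log(np.sum(frame ** 2) + 1e-10)
                zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
                feats.append([log_energy, zcr])
            return np.array(feats)

        def viterbi_smooth(loglik, log_trans, log_prior):
            """Most likely state sequence (0 = LSS, 1 = NLSS) from per-frame log-likelihoods."""
            n_frames, n_states = loglik.shape
            delta = log_prior + loglik[0]
            back = np.zeros((n_frames, n_states), dtype=int)
            for t in range(1, n_frames):
                scores = delta[:, None] + log_trans            # rows index the previous state
                back[t] = np.argmax(scores, axis=0)
                delta = scores[back[t], np.arange(n_states)] + loglik[t]
            path = np.empty(n_frames, dtype=int)
            path[-1] = int(np.argmax(delta))
            for t in range(n_frames - 2, -1, -1):
                path[t] = back[t + 1, path[t + 1]]
            return path

        rng = np.random.default_rng(0)
        audio = rng.standard_normal(16000)                     # stand-in for 1 s of 16 kHz audio
        X = frame_features(audio)
        class_means = np.array([[X[:, 0].mean() + 1.0, 0.10],  # invented LSS feature means
                                [X[:, 0].mean() - 1.0, 0.45]]) # invented NLSS feature means
        loglik = -np.sum((X[:, None, :] - class_means[None]) ** 2, axis=2)
        log_trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))  # sticky state transitions
        labels = viterbi_smooth(loglik, log_trans, np.log([0.5, 0.5]))

    In a real system the two class models would be trained on labeled LSS/NLSS frames and the features would be richer (e.g., cepstral coefficients); the point of the sketch is only the frame-scoring-plus-smoothing structure.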

  2. The similar effects of verbal and non-verbal intervening tasks on word recall in an elderly population.

    Science.gov (United States)

    Williams, B R; Sullivan, S K; Morra, L F; Williams, J R; Donovick, P J

    2014-01-01

    Vulnerability to retroactive interference has been shown to increase with cognitive aging. Consistent with the findings of the memory and aging literature, the authors of the California Verbal Learning Test-II (CVLT-II) suggest that a non-verbal task be administered during the test's delay interval to minimize the effects of retroactive interference on delayed recall. The goal of the present study was to determine the extent to which retroactive interference caused by non-verbal and verbal intervening tasks affects recall of verbal information in non-demented, older adults. The effects of retroactive interference on recall of words during Long-Delay recall on the CVLT-II were evaluated. Participants included 85 adults age 60 and older. During a 20-minute delay interval on the CVLT-II, participants received either a verbal (WAIS-III Vocabulary or Peabody Picture Vocabulary Test-IIIB) or non-verbal (Raven's Standard Progressive Matrices or WAIS-III Block Design) intervening task. As in previous research with young adults (Williams & Donovick, 2008), older adults recalled the same number of words across all groups, regardless of the type of intervening task. These findings suggest that the administration of verbal intervening tasks during the CVLT-II does not elicit more retroactive interference than non-verbal intervening tasks, and thus verbal tasks need not be avoided during the delay interval of the CVLT-II.

  3. Network structure underlying resolution of conflicting non-verbal and verbal social information.

    Science.gov (United States)

    Watanabe, Takamitsu; Yahata, Noriaki; Kawakubo, Yuki; Inoue, Hideyuki; Takano, Yosuke; Iwashiro, Norichika; Natsubori, Tatsunobu; Takao, Hidemasa; Sasaki, Hiroki; Gonoi, Wataru; Murakami, Mizuho; Katsura, Masaki; Kunimatsu, Akira; Abe, Osamu; Kasai, Kiyoto; Yamasue, Hidenori

    2014-06-01

    Social judgments often require resolution of incongruity in communication contents. Although previous studies revealed that such conflict resolution recruits brain regions including the medial prefrontal cortex (mPFC) and posterior inferior frontal gyrus (pIFG), functional relationships and networks among these regions remain unclear. In this functional magnetic resonance imaging study, we investigated the functional dissociation and networks by measuring human brain activity during resolving incongruity between verbal and non-verbal emotional contents. First, we found that the conflict resolutions biased by the non-verbal contents activated the posterior dorsal mPFC (post-dmPFC), bilateral anterior insula (AI) and right dorsal pIFG, whereas the resolutions biased by the verbal contents activated the bilateral ventral pIFG. In contrast, the anterior dmPFC (ant-dmPFC), bilateral superior temporal sulcus and fusiform gyrus were commonly involved in both of the resolutions. Second, we found that the post-dmPFC and right ventral pIFG were hub regions in networks underlying the non-verbal- and verbal-content-biased resolutions, respectively. Finally, we revealed that these resolution-type-specific networks were bridged by the ant-dmPFC, which was recruited for the conflict resolutions earlier than the two hub regions. These findings suggest that, in social conflict resolutions, the ant-dmPFC selectively recruits one of the resolution-type-specific networks through its interaction with resolution-type-specific hub regions. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Multi-level prediction of short-term outcome of depression : non-verbal interpersonal processes, cognitions and personality traits

    NARCIS (Netherlands)

    Geerts, E; Bouhuys, N

    1998-01-01

    It was hypothesized that personality factors determine the short-term outcome of depression, and that they may do this via non-verbal interpersonal interactions and via cognitive interpretations of non-verbal behaviour. Twenty-six hospitalized depressed patients entered the study. Personality

  5. A qualitative study on non-verbal sensitivity in nursing students.

    Science.gov (United States)

    Chan, Zenobia C Y

    2013-07-01

    To explore nursing students' perception of the meanings and roles of non-verbal communication and sensitivity. It also attempts to understand how different factors influence their non-verbal communication style. The importance of non-verbal communication in the health arena lies in the need for good communication for efficient healthcare delivery. Understanding nursing students' non-verbal communication with patients and the influential factors is essential to prepare them for field work in the future. Qualitative approach based on 16 in-depth interviews. Sixteen nursing students from the Master of Nursing and the Year 3 Bachelor of Nursing program were interviewed. Major points in the recorded interviews were marked down for content analysis. Three main themes were developed: (1) understanding students' non-verbal communication, which shows how nursing students value and experience non-verbal communication in the nursing context; (2) factors that influence the expression of non-verbal cues, which reveals the effect of patients' demographic background (gender, age, social status and educational level) and participants' characteristics (character, age, voice and appearance); and (3) metaphors of non-verbal communication, which is further divided into four subthemes: providing assistance, individualisation, dropping hints and promoting interaction. Learning about students' non-verbal communication experiences in the clinical setting allowed us to understand their use of non-verbal communication and sensitivity, as well as to understand areas that may need further improvement. The experiences and perceptions revealed by the nursing students could prompt nurses to reconsider the effects of the different factors suggested in this study. The results might also help students and nurses to identify and reflect on gaps in their skills, leading them to rethink, train and pay more attention to their non-verbal communication style and sensitivity. © 2013 John Wiley & Sons Ltd.

  6. Phenomenology of non-verbal communication as a representation of sports activities

    Directory of Open Access Journals (Sweden)

    Liubov Karpets

    2018-04-01

    Full Text Available In the professional activity of sport, the priority "language" is non-verbal communication such as body language. Purpose: to delineate the main aspects of non-verbal communication as a representation of sports activities. Material & Methods: the study involved members of sports teams and individual athletes, in particular from the following sports: basketball, handball, volleyball, football, hockey and bodybuilding. Results: the research revealed that in sports activities such non-verbal communication as gestures, facial expressions, physique, etc., overlap, and, as a consequence, the position "everything is language" (Lyotard) is embodied. Conclusions: non-verbal communication is one of the most significant forms of communication in sports. Additional means of communication through the "language" of the body help athletes achieve self-realization and self-determination.

  7. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    Science.gov (United States)

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

    Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, ), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.
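
    The burstiness referred to here is commonly computed from inter-event intervals as B = (σ − μ)/(σ + μ) (Goh & Barabási's coefficient), which approaches 1 for highly clustered event streams, 0 for Poisson-like streams, and −1 for perfectly regular ones. The estimator and the toy onset times below are assumptions for illustration, not details taken from the study.

        import numpy as np

        def burstiness(event_times):
            """B = (sigma - mu) / (sigma + mu) over inter-event intervals."""
            intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
            mu, sigma = intervals.mean(), intervals.std()
            return (sigma - mu) / (sigma + mu)

        # Hypothetical onset times (seconds): clustered gestures vs. near-regular word onsets.
        gesture_onsets = [1.0, 1.4, 1.6, 9.0, 9.1, 9.3, 20.0, 20.2]
        word_onsets = [0.5 * k for k in range(40)]
        print(burstiness(gesture_onsets))   # > 0: bursty clustering
        print(burstiness(word_onsets))      # -1.0: perfectly regular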

  8. Non-verbal communication barriers when dealing with Saudi sellers

    Directory of Open Access Journals (Sweden)

    Yosra Missaoui

    2015-12-01

    Full Text Available Communication has a major impact on how customers perceive sellers and their organizations. Especially, the non-verbal communication such as body language, appearance, facial expressions, gestures, proximity, posture, eye contact that can influence positively or negatively the first impression of customers and their experiences in stores. Salespeople in many countries, especially the developing ones, are just telling about their companies’ products because they are unaware of the real role of sellers and the importance of non-verbal communication. In Saudi Arabia, the seller profession has been exclusively for foreign labor until 2006. It is very recently that Saudi workforce enters to the retailing sector as sellers. The non-verbal communication of those sellers has never been evaluated from consumer’s point of view. Therefore, the aim of this paper is to explore the non-verbal communication barriers that customers are facing when dealing with Saudi sellers. After discussing the non-verbal communication skills that sellers must have in the light of the previous academic research and the depth interviews with seven focus groups of Saudi customers, this study found that the Saudi customers were not totally satisfied with the current non-verbal communication skills of Saudi sellers. Therefore, it is strongly recommended to develop the non-verbal communication skills of Saudi sellers by intensive trainings, to distinguish more the appearance of their sellers, especially the female ones, to focus on the time of intervention as well as the proximity to customers.

  9. Patients' perceptions of GP non-verbal communication: a qualitative study.

    Science.gov (United States)

    Marcinowicz, Ludmila; Konstantynowicz, Jerzy; Godlewski, Cezary

    2010-02-01

    During doctor-patient interactions, many messages are transmitted without words, through non-verbal communication. To elucidate the types of non-verbal behaviours perceived by patients interacting with family GPs and to determine which cues are perceived most frequently. In-depth interviews with patients of family GPs. Nine family practices in different regions of Poland. At each practice site, interviews were performed with four patients who were scheduled consecutively to see their family doctor. Twenty-four of 36 studied patients spontaneously perceived non-verbal behaviours of the family GP during patient-doctor encounters. They reported a total of 48 non-verbal cues. The most frequent features were tone of voice, eye contact, and facial expressions. Less frequent were examination room characteristics, touch, interpersonal distance, GP clothing, gestures, and posture. Non-verbal communication is an important factor by which patients spontaneously describe and evaluate their interactions with a GP. Family GPs should be trained to better understand and monitor their own non-verbal behaviours towards patients.

  10. Boosting Vocabulary Learning by Verbal Cueing During Sleep.

    Science.gov (United States)

    Schreiner, Thomas; Rasch, Björn

    2015-11-01

    Reactivating memories during sleep by re-exposure to associated memory cues (e.g., odors or sounds) improves memory consolidation. Here, we tested for the first time whether verbal cueing during sleep can improve vocabulary learning. We cued prior learned Dutch words either during non-rapid eye movement sleep (NonREM) or during active or passive waking. Re-exposure to Dutch words during sleep improved later memory for the German translation of the cued words when compared with uncued words. Recall of uncued words was similar to an additional group receiving no verbal cues during sleep. Furthermore, verbal cueing failed to improve memory during active and passive waking. High-density electroencephalographic recordings revealed that successful verbal cueing during NonREM sleep is associated with a pronounced frontal negativity in event-related potentials, a higher frequency of frontal slow waves as well as a cueing-related increase in right frontal and left parietal oscillatory theta power. Our results indicate that verbal cues presented during NonREM sleep reactivate associated memories, and facilitate later recall of foreign vocabulary without impairing ongoing consolidation processes. Likewise, our oscillatory analysis suggests that both sleep-specific slow waves as well as theta oscillations (typically associated with successful memory encoding during wakefulness) might be involved in strengthening memories by cueing during sleep. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Disturbance effect of music on processing of verbal and spatial memories.

    Science.gov (United States)

    Iwanaga, Makoto; Ito, Takako

    2002-06-01

    The purpose of the present study was to examine the disturbance effect of music on performances of memory tasks. Subjects performed a verbal memory task and a spatial memory task in 4 sound conditions, including the presence of vocal music, instrumental music, a natural sound (murmurings of a stream), and no music. 47 undergraduate volunteers were randomly assigned to perform tasks under each condition. Perceived disturbance was highest under the vocal music condition regardless of the type of task. A disturbance in performance by music was observed only with the verbal memory task under the vocal and the instrumental music conditions. These findings were discussed from the perspectives of the working memory hypothesis and the changing state model.

  12. Formulation of the verbal thought process based on generative rules

    Energy Technology Data Exchange (ETDEWEB)

    Suehiro, N; Fujisaki, H

    1984-01-01

    An assumption is made on the generative nature of the verbal thought process, based on an analogy between language use and verbal thought. A procedure is then presented for acquiring the set of generative rules from a given set of concept strings, leading to an efficient representation of verbal knowledge. The non-terminal symbols derived in the acquisition process are found to correspond to concepts and superordinate concepts in the human process of verbal thought. The validity of the formulation and the efficiency of knowledge representation are demonstrated by an example in which knowledge of biological properties of animals is reorganized into a set of generative rules. The process of inductive inference is then defined as a generalization of the acquired knowledge, and the principle of maximum simplicity of rules is proposed as a possible criterion for such generalization. The proposal is also tested by an example in which only a small part of a systematic body of knowledge is utilized to make inferences about the unknown parts of the system. 6 references.

  13. Non-Verbal Communication in Children with Visual Impairment

    Science.gov (United States)

    Mallineni, Sharmila; Nutheti, Rishita; Thangadurai, Shanimole; Thangadurai, Puspha

    2006-01-01

    The aim of this study was to determine: (a) whether children with visual and additional impairments show any non-verbal behaviors, and if so what were the common behaviors; (b) whether two rehabilitation professionals interpreted the non-verbal behaviors similarly; and (c) whether a speech pathologist and a rehabilitation professional interpreted…

  14. Virtual Chironomia: A Multimodal Study of Verbal and Non-Verbal Communication in a Virtual World

    Science.gov (United States)

    Verhulsdonck, Gustav

    2010-01-01

    This mixed methods study examined the various aspects of multimodal use of non-verbal communication in virtual worlds during dyadic negotiations. Quantitative analysis uncovered a treatment effect whereby people with more rhetorical certainty used more neutral non-verbal communication; whereas people that were rhetorically less certain used more…

  15. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction, and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion, and a forward-lookin...

  16. Non-verbal communication between primary care physicians and older patients: how does race matter?

    Science.gov (United States)

    Stepanikova, Irena; Zhang, Qian; Wieland, Darryl; Eleazer, G Paul; Stewart, Thomas

    2012-05-01

    Non-verbal communication is an important aspect of the diagnostic and therapeutic process, especially with older patients. It is unknown how non-verbal communication varies with physician and patient race. To examine the joint influence of physician race and patient race on non-verbal communication displayed by primary care physicians during medical interviews with patients 65 years or older. Video-recordings of visits of 209 patients 65 years old or older to 30 primary care physicians at three clinics located in the Midwest and Southwest. Duration of physicians' open body position, eye contact, smile, and non-task touch, coded using an adaption of the Nonverbal Communication in Doctor-Elderly Patient Transactions form. African American physicians with African American patients used more open body position, smile, and touch, compared to the average across other dyads (adjusted mean difference for open body position = 16.55, p non-verbal communication with older patients. Its influence is best understood when physician race and patient race are considered jointly.

  17. Guidelines for Teaching Non-Verbal Communications Through Visual Media

    Science.gov (United States)

    Kundu, Mahima Ranjan

    1976-01-01

    There is a natural unique relationship between non-verbal communication and visual media such as television and film. Visual media will have to be used extensively--almost exclusively--in teaching non-verbal communications, as well as other methods requiring special teaching skills. (Author/ER)

  18. Non-verbal communication in meetings of psychiatrists and patients with schizophrenia.

    Science.gov (United States)

    Lavelle, M; Dimic, S; Wildgrube, C; McCabe, R; Priebe, S

    2015-03-01

    Recent evidence found that patients with schizophrenia display non-verbal behaviour designed to avoid social engagement during the opening moments of their meetings with psychiatrists. This study aimed to replicate, and build on, this finding, assessing the non-verbal behaviour of patients and psychiatrists during meetings, exploring changes over time and its association with patients' symptoms and the quality of the therapeutic relationship. 40-videotaped routine out-patient consultations, involving patients with schizophrenia, were analysed. Non-verbal behaviour of patients and psychiatrists was assessed during three fixed, 2-min intervals using a modified Ethological Coding System for Interviews. Symptoms, satisfaction with communication and the quality of the therapeutic relationship were also measured. Over time, patients' non-verbal behaviour remained stable, whilst psychiatrists' flight behaviour decreased. Patients formed two groups based on their non-verbal profiles, one group (n = 25) displaying pro-social behaviour, inviting interaction and a second (n = 15) displaying flight behaviour, avoiding interaction. Psychiatrists interacting with pro-social patients displayed more pro-social behaviours (P communication (P non-verbal behaviour during routine psychiatric consultations remains unchanged, and is linked to both their psychiatrist's non-verbal behaviour and the quality of the therapeutic relationship. © 2014 The Authors. Acta Psychiatrica Scandinavica Published by John Wiley & Sons Ltd.

  19. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    OpenAIRE

    Anna S. Kobysheva; Viktoria A. Nakaeva

    2017-01-01

    The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  20. Cross-cultural features of gestures in non-verbal communication

    Directory of Open Access Journals (Sweden)

    Chebotariova N. A.

    2017-09-01

    Full Text Available This article is devoted to the analysis of the concept of non-verbal communication and the ways of expressing it. Gesticulation is studied in detail as it is the main element of non-verbal communication and has different characteristics in various countries of the world.

  1. From SOLER to SURETY for effective non-verbal communication.

    Science.gov (United States)

    Stickley, Theodore

    2011-11-01

    This paper critiques the model for non-verbal communication referred to as SOLER (which stands for: "Sit squarely"; "Open posture"; "Lean towards the other"; "Eye contact; "Relax"). It has been approximately thirty years since Egan (1975) introduced his acronym SOLER as an aid for teaching and learning about non-verbal communication. There is evidence that the SOLER framework has been widely used in nurse education with little published critical appraisal. A new acronym that might be appropriate for non-verbal communication skills training and education is proposed and this is SURETY (which stands for "Sit at an angle"; "Uncross legs and arms"; "Relax"; "Eye contact"; "Touch"; "Your intuition"). The proposed model advances the SOLER model by including the use of touch and the importance of individual intuition is emphasised. The model encourages student nurse educators to also think about therapeutic space when they teach skills of non-verbal communication. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Toward a digitally mediated, transgenerational negotiation of verbal and non-verbal concepts in daycare

    DEFF Research Database (Denmark)

    Chimirri, Niklas Alexander

    an adult researcher’s research problem and her/his conceptual knowledge of the child-adult-digital media interaction are able to do justice to what the children actually intend to communicate about their experiences and actions, both verbally and non-verbally, by and large remains little explored...

  3. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    Directory of Open Access Journals (Sweden)

    Anna S. Kobysheva

    2017-12-01

    Full Text Available The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  4. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    Directory of Open Access Journals (Sweden)

    Elena Geangu

    2015-04-01

    Full Text Available Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human-produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds.
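
    For readers unfamiliar with how such window effects are quantified, the sketch below computes the mean amplitude of a single baseline-corrected epoch in the 470-570 ms window reported above; the sampling rate, epoch length and baseline interval are illustrative assumptions, not parameters of the study.

        import numpy as np

        def mean_amplitude(epoch, fs=500.0, t_start=-0.2, window=(0.470, 0.570)):
            """Mean amplitude of one epoch within a post-stimulus time window."""
            times = t_start + np.arange(epoch.size) / fs
            mask = (times >= window[0]) & (times < window[1])
            return float(epoch[mask].mean())

        rng = np.random.default_rng(7)
        epoch = rng.standard_normal(600)   # 1.2 s epoch starting 200 ms before stimulus onset
        print(mean_amplitude(epoch))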

  5. A comparison of processing load during non-verbal decision-making in two individuals with aphasia

    Directory of Open Access Journals (Sweden)

    Salima Suleman

    2015-05-01

    Full Text Available INTRODUCTION A growing body of evidence suggests people with aphasia (PWA) can have impairments to cognitive functions such as attention, working memory and executive functions.(1-5) Such cognitive impairments have been shown to negatively affect the decision-making (DM) abilities of adults with neurological damage.(6,7) However, little is known about the DM abilities of PWA.(8) Pupillometry is “the measurement of changes in pupil diameter”.(9, p.1) Researchers have reported a positive relationship between processing load and phasic pupil size (i.e., as processing load increases, pupil size increases).(10) Thus pupillometry has the potential to be a useful tool for investigating processing load during DM in PWA. AIMS The primary aim of this study was to establish the feasibility of using pupillometry during a non-verbal DM task with PWA. The secondary aim was to explore non-verbal DM performance in PWA and determine the relationship between DM performance and processing load using pupillometry. METHOD DESIGN. A single-subject case-study design with two participants was used in this study. PARTICIPANTS. Two adult males with anomic aphasia participated in this study. Participants were matched for age and education. Both participants were independent, able to drive, and had legal autonomy. MEASURES. PERFORMANCE ON A DM TASK. We used a computerized risk-taking card game called the Iowa Gambling Task (IGT) as our non-verbal DM task.(11) In the IGT, participants made 100 selections (via eye gaze) from four decks of cards presented on the computer screen with the goal of maximizing their overall hypothetical monetary gain. PROCESSING LOAD. The EyeLink 1000+ eye tracking system was used to collect pupil size measures while participants deliberated before each deck selection during the IGT. For this analysis, we calculated change in pupil size as a measure of processing load. RESULTS P1. P1 made increasingly advantageous decisions as the task progressed (Fig. 1). When
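
    A minimal sketch of a "change in pupil size" measure of the kind mentioned above, assuming a pupil trace sampled at 500 Hz around each deck selection and a pre-deliberation baseline; the sampling rate, window lengths and baseline choice are assumptions, not the study's protocol.

        import numpy as np

        def phasic_pupil_change(trace, fs=500.0, baseline_s=0.5):
            """Peak baseline-corrected pupil dilation during the deliberation window."""
            n_base = int(baseline_s * fs)
            baseline = np.nanmean(trace[:n_base])
            return float(np.nanmax(trace[n_base:]) - baseline)

        rng = np.random.default_rng(1)
        # 0.5 s flat baseline followed by a 2 s ramp of ~0.2 units, plus measurement noise.
        trial = 3.0 + 0.2 * np.concatenate([np.zeros(250), np.linspace(0.0, 1.0, 1000)])
        trial += 0.01 * rng.standard_normal(trial.size)
        print(phasic_pupil_change(trial))   # roughly 0.2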

  6. Non-Wovens as Sound Reducers

    Science.gov (United States)

    Belakova, D.; Seile, A.; Kukle, S.; Plamus, T.

    2018-04-01

    Within the present study, the effect of hemp (40 wt%) and polylactide (60 wt%) non-woven surface density, thickness and number of fibre web layers on the sound absorption coefficient and the sound transmission loss in the frequency range from 50 to 5000 Hz is analysed. The sound insulation properties of the experimental samples have been determined, compared to the ones in practical use, and the possible use of the material has been defined. Non-woven materials are ideally suited for use in acoustic insulation products because the arrangement of fibres produces a porous material structure, which leads to a greater interaction between sound waves and the fibre structure. Of all the tested samples (A, B and D), the non-woven variant B exceeded the surface density of sample A by a factor of 1.22 and that of sample D by a factor of 1.15. By placing non-wovens one above the other in 2 layers, it is possible to increase the absorption coefficient of the material, which depending on the frequency corresponds to C, D and E sound absorption classes. Sample A demonstrates the best sound absorption of all the three samples in the frequency range from 250 to 2000 Hz. In the test frequency range from 50 to 5000 Hz, the sound transmission loss varies from 0.76 (Sample D at 63 Hz) to 3.90 (Sample B at 5000 Hz).
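
    As a point of reference for the figures quoted above (not a calculation from the paper itself), transmission loss is conventionally related to the power transmission coefficient tau by TL = 10*log10(1/tau), so losses of 0.76-3.90 dB correspond to tau between roughly 0.84 and 0.41.

        import math

        def transmission_loss_db(tau):
            """Transmission loss in dB from the power transmission coefficient tau."""
            return 10.0 * math.log10(1.0 / tau)

        def tau_from_tl(tl_db):
            """Power transmission coefficient from a transmission loss in dB."""
            return 10.0 ** (-tl_db / 10.0)

        print(round(tau_from_tl(0.76), 2))            # ~0.84 (Sample D at 63 Hz)
        print(round(tau_from_tl(3.90), 2))            # ~0.41 (Sample B at 5000 Hz)
        print(round(transmission_loss_db(0.5), 2))    # ~3.01 dB when half the power is transmitted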

  7. Oncologists’ non-verbal behavior and analog patients’ recall of information

    NARCIS (Netherlands)

    Hillen, M.A.; de Haes, H.C.J.M.; van Tienhoven, G.; van Laarhoven, H.W.M.; van Weert, J.C.M.; Vermeulen, D.M.; Smets, E.M.A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist’s non-verbal communication. We tested the influence of three non-verbal behaviors,

  8. Oncologists' non-verbal behavior and analog patients' recall of information

    NARCIS (Netherlands)

    Hillen, Marij A.; de Haes, Hanneke C. J. M.; van Tienhoven, Geertjan; van Laarhoven, Hanneke W. M.; van Weert, Julia C. M.; Vermeulen, Daniëlle M.; Smets, Ellen M. A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors,

  9. Hemispheric processing of vocal emblem sounds.

    Science.gov (United States)

    Neumann-Werth, Yael; Levy, Erika S; Obler, Loraine K

    2013-01-01

    Vocal emblems, such as shh and brr, are speech sounds that have linguistic and nonlinguistic features; thus, it is unclear how they are processed in the brain. Five adult dextral individuals with left-brain damage and moderate-severe Wernicke's aphasia, five adult dextral individuals with right-brain damage, and five Controls participated in two tasks: (1) matching vocal emblems to photographs ('picture task') and (2) matching vocal emblems to verbal translations ('phrase task'). Cross-group statistical analyses on items on which the Controls performed at ceiling revealed lower accuracy by the group with left-brain damage (than by Controls) on both tasks, and lower accuracy by the group with right-brain damage (than by Controls) on the picture task. Additionally, the group with left-brain damage performed significantly less accurately than the group with right-brain damage on the phrase task only. Findings suggest that comprehension of vocal emblems recruits more left- than right-hemisphere processing.

  10. Linguistic analysis of verbal and non-verbal communication in the operating room.

    Science.gov (United States)

    Moore, Alison; Butt, David; Ellis-Clarke, Jodie; Cartmill, John

    2010-12-01

    Surgery can be a triumph of co-operation, the procedure evolving as a result of joint action between multiple participants. The communication that mediates the joint action of surgery is conveyed by verbal but particularly by non-verbal signals. Competing priorities superimposed by surgical learning must also be negotiated within this context and this paper draws on techniques of systemic functional linguistics to observe and analyse the flow of information during such a phase of surgery. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.

  11. Introducing the Oxford Vocal (OxVoc) Sounds Database: A validated set of non-acted affective sounds from human infants, adults and domestic animals

    Directory of Open Access Journals (Sweden)

    Christine Parsons

    2014-06-01

    Full Text Available Sound moves us. Nowhere is this more apparent than in our responses to genuine emotional vocalisations, be they heartfelt distress cries or raucous laughter. Here, we present perceptual ratings and a description of a freely available, large database of natural affective vocal sounds from human infants, adults and domestic animals, the Oxford Vocal (OxVoc) Sounds database. This database consists of 173 non-verbal sounds expressing a range of happy, sad and neutral emotional states. Ratings are presented for the sounds on a range of dimensions from a number of independent participant samples. Perceptions related to valence, including distress, vocaliser mood, and listener mood are presented in Study 1. Perceptions of the arousal of the sound, listener motivation to respond and valence (positive, negative) are presented in Study 2. Perceptions of the emotional content of the stimuli in both Study 1 and Study 2 were consistent with the predefined categories (e.g., laugh stimuli perceived as positive). While the adult vocalisations received more extreme valence ratings, rated motivation to respond to the sounds was highest for the infant sounds. The major advantages of this database are the inclusion of vocalisations from naturalistic situations, which represent genuine expressions of emotion, and the inclusion of vocalisations from animals and infants, providing comparison stimuli for use in cross-species and developmental studies. The associated website provides a detailed description of the physical properties of each sound stimulus along with cross-category descriptions.

  12. Transport processes and sound velocity in vibrationally non-equilibrium gas of anharmonic oscillators

    Science.gov (United States)

    Rydalevskaya, Maria A.; Voroshilova, Yulia N.

    2018-05-01

    Vibrationally non-equilibrium flows of chemically homogeneous diatomic gases are considered under the conditions that the distribution of the molecules over vibrational levels differs significantly from the Boltzmann distribution. In such flows, molecular collisions can be divided into two groups: the first group corresponds to "rapid" microscopic processes whereas the second one corresponds to "slow" microscopic processes (their rate is comparable to or larger than that of gasdynamic parameters variation). The collisions of the first group form quasi-stationary vibrationally non-equilibrium distribution functions. The model kinetic equations are used to study the transport processes under these conditions. In these equations, the BGK-type approximation is used to model only the collision operators of the first group. It allows us to simplify derivation of the transport fluxes and calculation of the kinetic coefficients. Special attention is given to the connection between the formulae for the bulk viscosity coefficient and the sound velocity square.
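
    For orientation, the standard textbook relations connecting these quantities for a diatomic gas with a relaxing vibrational mode are sketched below in LaTeX (frozen vs. equilibrium sound speeds and the relaxation estimate of the bulk viscosity, cf. Landau and Lifshitz); this is background, not the model kinetic derivation of the paper, and the notation is introduced here.

        \begin{align}
          a_f^2 &= \gamma_f\, R_{\mathrm{sp}} T,
          & \gamma_f &= \frac{c_{v,\mathrm{tr+rot}} + R_{\mathrm{sp}}}{c_{v,\mathrm{tr+rot}}} = \frac{7}{5},\\
          a_e^2 &= \gamma_e\, R_{\mathrm{sp}} T,
          & \gamma_e &= \frac{c_{v,\mathrm{tr+rot}} + c_{v,\mathrm{vib}} + R_{\mathrm{sp}}}{c_{v,\mathrm{tr+rot}} + c_{v,\mathrm{vib}}},\\
          \zeta &\approx \rho\, \tau_{\mathrm{vib}} \bigl( a_f^2 - a_e^2 \bigr),
        \end{align}

    where R_sp is the specific gas constant, c_v are specific heats, tau_vib is the vibrational relaxation time, a_f and a_e are the frozen and equilibrium sound speeds, and zeta is the bulk (second) viscosity.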

  13. Non-verbal Communication in a Neonatal Intensive Care Unit: A Video Audit Using Non-verbal Immediacy Scale (NIS-O).

    Science.gov (United States)

    Nimbalkar, Somashekhar Marutirao; Raval, Himalaya; Bansal, Satvik Chaitanya; Pandya, Utkarsh; Pathak, Ajay

    2018-05-03

    Effective communication with parents is a very important skill for pediatricians especially in a neonatal setup. The authors analyzed non-verbal communication of medical caregivers during counseling sessions. Recorded videos of counseling sessions from the months of March-April 2016 were audited. Counseling episodes were scored using Non-verbal Immediacy Scale Observer Report (NIS-O). A total of 150 videos of counseling sessions were audited. The mean (SD) total score on (NIS-O) was 78.96(7.07). Female counseled sessions had significantly higher proportion of low scores (p communication skills in a neonatal unit. This study lays down a template on which other Neonatal intensive care units (NICUs) can carry out gap defining audits.

  14. The Use of Virtual Characters to Assess and Train Non-Verbal Communication in High-Functioning Autism

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of “transformed social interactions.” This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA. PMID:25360098

  15. The use of virtual characters to assess and train non-verbal communication in high-functioning autism.

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of "transformed social interactions." This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA.

  16. Musical and linguistic expertise influence pre-attentive and attentive processing of non-speech sounds.

    Science.gov (United States)

    Marie, Céline; Kujala, Teija; Besson, Mireille

    2012-04-01

    The aim of this experiment was two-fold. Our first goal was to determine whether linguistic expertise influences the pre-attentive [as reflected by the Mismatch Negativity (MMN)] and the attentive processing (as reflected by behavioural discrimination accuracy) of non-speech, harmonic sounds. The second was to directly compare the effects of linguistic and musical expertise. To this end, we compared non-musician native speakers of a quantity language, Finnish, in which duration is a phonemically contrastive cue, with French musicians and French non-musicians. Results revealed that pre-attentive and attentive processing of duration deviants was enhanced in Finnish non-musicians and French musicians compared to French non-musicians. By contrast, MMN in French musicians was larger than in both Finns and French non-musicians for frequency deviants, whereas no between-group differences were found for intensity deviants. By showing similar effects of linguistic and musical expertise, these results argue in favor of common processing of duration in music and speech. Copyright © 2010 Elsevier Srl. All rights reserved.

  17. Exploring Children’s Peer Relationships through Verbal and Non-verbal Communication: A Qualitative Action Research Focused on Waldorf Pedagogy

    Directory of Open Access Journals (Sweden)

    Aida Milena Montenegro Mantilla

    2007-12-01

    Full Text Available This study analyzes the relationships that children around seven and eight years old establish in a classroom. It shows that peer relationships have a positive dimension with features such as the development of children’s creativity to communicate and modify norms. These features were found through an analysis of children’s verbal and non-verbal communication and an interdisciplinary view of children’s learning process from Rudolf Steiner, founder of Waldorf Pedagogy, and Jean Piaget and Lev Vygotsky, specialists in children’s cognitive and social dimensions. This research is an invitation to recognize children’s capacity to construct their own rules in peer relationships.

  18. The impact of the teachers’ non-verbal communication on success in teaching

    OpenAIRE

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and ...

  19. Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes.

    Science.gov (United States)

    Stoycheva, Polina; Tiippana, Kaisa

    2018-03-14

    The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, due to contra-lateral innervation, this functional lateralization is reflected in a hand advantage during certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand-right hemisphere.
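
    The discriminability index d' mentioned above comes from signal detection theory, d' = z(hit rate) - z(false-alarm rate). The sketch below uses a loglinear correction for extreme rates, which is one common convention and not necessarily the one used in this study; the trial counts are invented.

        from statistics import NormalDist

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """d' = z(hit rate) - z(false-alarm rate), with a loglinear correction."""
            z = NormalDist().inv_cdf
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return z(hit_rate) - z(fa_rate)

        print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))  # ~1.66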

  20. The impact of culture and education on non-verbal neuropsychological measurements: a critical review.

    Science.gov (United States)

    Rosselli, Mónica; Ardila, Alfredo

    2003-08-01

    Clinical neuropsychology has frequently considered visuospatial and non-verbal tests to be culturally and educationally fair or at least fairer than verbal tests. This paper reviews the cross-cultural differences in performance on visuoperceptual and visuoconstructional ability tasks and analyzes the impact of education and culture on non-verbal neuropsychological measurements. This paper compares: (1) non-verbal test performance among groups with different educational levels, and the same cultural background (inter-education intra-culture comparison); (2) the test performance among groups with the same educational level and different cultural backgrounds (intra-education inter-culture comparisons). Several studies have demonstrated a strong association between educational level and performance on common non-verbal neuropsychological tests. When neuropsychological test performance in different cultural groups is compared, significant differences are evident. Performance on non-verbal tests such as copying figures, drawing maps or listening to tones can be significantly influenced by the individual's culture. Arguments against the use of some current neuropsychological non-verbal instruments, procedures, and norms in the assessment of diverse educational and cultural groups are discussed and possible solutions to this problem are presented.

  1. Context, culture and (non-verbal) communication affect handover quality.

    Science.gov (United States)

    Frankel, Richard M; Flanagan, Mindy; Ebright, Patricia; Bergman, Alicia; O'Brien, Colleen M; Franks, Zamal; Allen, Andrew; Harris, Angela; Saleem, Jason J

    2012-12-01

    Transfers of care, also known as handovers, remain a substantial patient safety risk. Although research on handovers has been done since the 1980s, the science is incomplete. Surprisingly few interventions have been rigorously evaluated and, of those that have, few have resulted in long-term positive change. Researchers, both in medicine and other high reliability industries, agree that face-to-face handovers are the most reliable. It is not clear, however, what the term face-to-face means in actual practice. We studied the use of non-verbal behaviours, including gesture, posture, bodily orientation, facial expression, eye contact and physical distance, in the delivery of information during face-to-face handovers. To address this question and study the role of non-verbal behaviour on the quality and accuracy of handovers, we videotaped 52 nursing, medicine and surgery handovers covering 238 patients. Videotapes were analysed using immersion/crystallisation methods of qualitative data analysis. A team of six researchers met weekly for 18 months to view videos together using a consensus-building approach. Consensus was achieved on verbal, non-verbal, and physical themes and patterns observed in the data. We observed four patterns of non-verbal behaviour (NVB) during handovers: (1) joint focus of attention; (2) 'the poker hand'; (3) parallel play and (4) kerbside consultation. In terms of safety, joint focus of attention was deemed to have the best potential for high quality and reliability; however, it occurred infrequently, creating opportunities for education and improvement. Attention to patterns of NVB in face-to-face handovers coupled with education and practice can improve quality and reliability.

  2. Emergence of category-level sensitivities in non-native speech sound learning

    Directory of Open Access Journals (Sweden)

    Emily Myers

    2014-08-01

    Full Text Available Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.

  3. Trauma team leaders' non-verbal communication: video registration during trauma team training.

    Science.gov (United States)

    Härgestam, Maria; Hultin, Magnus; Brulin, Christine; Jacobsson, Maritha

    2016-03-25

    There is widespread consensus on the importance of safe and secure communication in healthcare, especially in trauma care where time is a limiting factor. Although non-verbal communication has an impact on communication between individuals, there is only limited knowledge of how trauma team leaders communicate. The purpose of this study was to investigate how trauma team members are positioned in the emergency room, and how leaders communicate in terms of gaze direction, vocal nuances, and gestures during trauma team training. Eighteen trauma teams were audio and video recorded during trauma team training in the emergency department of a hospital in northern Sweden. Quantitative content analysis was used to categorize the team members' positions and the leaders' non-verbal communication: gaze direction, vocal nuances, and gestures. The quantitative data were interpreted in relation to the specific context. Time sequences of the leaders' gaze direction, speech time, and gestures were identified separately and registered as time (seconds) and proportions (%) of the total training time. The team leaders who gained control over the most important area in the emergency room, the "inner circle", positioned themselves as heads over the team, using gaze direction, gestures, vocal nuances, and verbal commands that solidified their verbal message. Changes in position required both attention and collaboration. Leaders who spoke in a hesitant voice, or were silent, expressed ambiguity in their non-verbal communication: and other team members took over the leader's tasks. In teams where the leader had control over the inner circle, the members seemed to have an awareness of each other's roles and tasks, knowing when in time and where in space these tasks needed to be executed. Deviations in the leaders' communication increased the ambiguity in the communication, which had consequences for the teamwork. Communication cannot be taken for granted; it needs to be practiced

  4. Cross-cultural Differences of Stereotypes about Non-verbal Communication of Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2011-09-01

    Full Text Available The article deals with peculiarities of non-verbal communication as a factor in cross-cultural interaction and the adaptation of representatives of different cultures. The possibility of studying ethnic stereotypes concerning non-verbal communication is considered. The results of empirical research on stereotypes about the non-verbal communication of Russian and Chinese students are presented.

  5. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    Directory of Open Access Journals (Sweden)

    Mirna Lie Hosogi Senaha

    Full Text Available Selective disturbances of semantic memory have attracted the interest of many investigators, and the question of the existence of single or multiple semantic systems remains a very controversial theme in the literature. Objectives: To discuss the question of multiple semantic systems based on a longitudinal study of a patient who progressed from fluent primary progressive aphasia to semantic dementia. Methods: A 66 year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three paired control individuals. Results: In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, preservation of the syntactic and phonological aspects of the language, besides surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe and the semantic disturbance, limited initially to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions: Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system.

  6. Near-infrared-spectroscopic study on processing of sounds in the brain; a comparison between native and non-native speakers of Japanese.

    Science.gov (United States)

    Tsunoda, Koichi; Sekimoto, Sotaro; Itoh, Kenji

    2016-06-01

    Conclusions The results suggested that mother-tongue Japanese (MJ) and non-mother-tongue Japanese (non-MJ) speakers differ in their pattern of brain dominance when listening to sounds from the natural world, in particular insect sounds. These results provide significant support for previous findings by Tsunoda (1970). Objectives This study concentrates on listeners who show clear evidence of a 'speech' brain vs a 'music' brain and determines which side is most active in the processing of insect sounds, using near-infrared spectroscopy. Methods The present study uses 2-channel near-infrared spectroscopy (NIRS) to provide a more direct measure of left- and right-brain activity while participants listen to each of three types of sounds: Japanese speech, Western violin music, or insect sounds. Data were obtained from 33 participants who showed laterality on opposite sides for Japanese speech and Western music. Results Results showed that a majority (80%) of the MJ participants exhibited dominance for insect sounds on the side that was dominant for language, while a majority (62%) of the non-MJ participants exhibited dominance for insect sounds on the side that was dominant for music.
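
    The dominance classification rests on comparing left- and right-hemisphere responses. A common way to express such a comparison, though not necessarily the metric used in this study, is a laterality index; a minimal sketch with synthetic data:

```python
# Hedged sketch: compute a simple laterality index LI = (L - R) / (L + R)
# from averaged left/right NIRS responses. The index and the classification
# threshold are illustrative assumptions, not the metric reported in the study.
import numpy as np

def laterality_index(left_response: np.ndarray, right_response: np.ndarray) -> float:
    l, r = float(np.mean(left_response)), float(np.mean(right_response))
    return (l - r) / (l + r) if (l + r) != 0 else 0.0

def dominant_side(li: float, threshold: float = 0.1) -> str:
    if li > threshold:
        return "left-dominant"
    if li < -threshold:
        return "right-dominant"
    return "bilateral"

rng = np.random.default_rng(0)
left = 1.0 + 0.1 * rng.standard_normal(100)    # hypothetical oxy-Hb change, left channel
right = 0.6 + 0.1 * rng.standard_normal(100)   # hypothetical oxy-Hb change, right channel
li = laterality_index(left, right)
print(f"LI = {li:.2f} -> {dominant_side(li)}")
```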

  7. Physical growth and non-verbal intelligence: Associations in Zambia

    Science.gov (United States)

    Hein, Sascha; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2014-01-01

    Objectives To investigate normative developmental BMI trajectories and associations of physical growth indicators (i.e., height, weight, head circumference [HC], and body mass index [BMI]) with non-verbal intelligence in an understudied population of children from Sub-Saharan Africa. Study design A sample of 3981 students (50.8% male), grades 3 to 7, with a mean age of 12.75 years was recruited from 34 rural Zambian schools. Children with low scores on vision and hearing screenings were excluded. Height, weight and HC were measured, and non-verbal intelligence was assessed using UNIT-symbolic memory and KABC-II-triangles. Results Results showed that students in higher grades have a higher BMI over and above the effect of age. Girls showed a marginally higher BMI, although that for both boys and girls was approximately 1 SD below the international CDC and WHO norms. Controlling for the effect of age, non-verbal intelligence showed small but significant positive relationships with HC (r = .17) and BMI (r = .11). HC and BMI accounted for 1.9% of the variance in non-verbal intelligence, over and above the contribution of grade and sex. Conclusions BMI-for-age growth curves of Zambian children follow observed worldwide developmental trajectories. The positive relationships between BMI and intelligence underscore the importance of providing adequate nutritional and physical growth opportunities for children worldwide and in sub-Saharan Africa in particular. Directions for future studies are discussed with regard to maximizing the cognitive potential of all rural African children. PMID:25217196
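
    The age-controlled associations reported here are partial correlations. One standard way to compute a partial correlation, residualising both variables on the covariate, is sketched below with synthetic data; this illustrates the statistic itself, not the authors' actual analysis code:

```python
# Sketch: partial correlation of head circumference (HC) with a non-verbal
# intelligence score, controlling for age, via residualisation. Synthetic data;
# the variable names mirror the abstract but the numbers are illustrative.
import numpy as np

def partial_corr(x, y, covariate):
    """Correlation of x and y after regressing out a single covariate."""
    def residualise(v, c):
        design = np.column_stack([np.ones_like(c), c])   # intercept + covariate
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residualise(x, covariate), residualise(y, covariate)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
age = rng.uniform(9, 16, 500)
hc = 50 + 0.3 * age + rng.normal(0, 1.5, 500)            # head circumference (cm)
iq = 20 + 0.8 * age + 0.4 * hc + rng.normal(0, 5, 500)   # non-verbal score
print(f"partial r(HC, IQ | age) = {partial_corr(hc, iq, age):.2f}")
```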

  8. Culture and Social Relationship as Factors of Affecting Communicative Non-Verbal Behaviors

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to build a bridge between social relationship and cultural variation to predict conversants' non-verbal behaviors. This idea serves as a basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of the agent's nonverbal behaviors in HAI.

  9. Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and canadian listeners.

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions.

  10. Cross-Cultural Differences in the Processing of Non-Verbal Affective Vocalizations by Japanese and Canadian Listeners

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions. PMID:23516137

  11. Non-Verbal Communication Models in Sports and Ballet

    Directory of Open Access Journals (Sweden)

    Gloria Vallejo

    2010-12-01

    Full Text Available This study analyzes the communication model generated among professional soccer trainers, artistic gymnastics trainers, and folkloric ballet instructors, on the basis of the dynamic body language typical of specialized communication among sportspeople and dancers, which includes a high percentage of non-verbal language. Non-verbal language was observed in both psychomotor and sociomotor practices in order to identify and characterize relations between different concepts and their corresponding gestural representation. This made it possible to generate a communication model that takes into account the non-verbal aspects of specialized communicative contexts. The results indicate that the non-verbal language of trainers and instructors occasionally replaces verbal language when the latter proves insufficient or inappropriate for describing a highly precise motor action, owing to distance or acoustic interference. Among the ballet instructors, a generalized way of directing rehearsals was found, based on rhythmic counts with the palms or the feet. The paralinguistic components of the various speech acts also stand out, especially with regard to intonation, duration, and intensity.

  12. Non-Verbal Communication Training: An Avenue for University Professionalizing Programs?

    Science.gov (United States)

    Gazaille, Mariane

    2011-01-01

    In accordance with today's workplace expectations, many university programs identify the ability to communicate as a crucial asset for future professionals. Yet, if the teaching of verbal communication is clearly identifiable in most university programs, the same cannot be said of non-verbal communication (NVC). Knowing the importance of the…

  13. Measuring Verbal and Non-Verbal Communication in Aphasia: Reliability, Validity, and Sensitivity to Change of the Scenario Test

    Science.gov (United States)

    van der Meulen, Ineke; van de Sandt-Koenderman, W. Mieke E.; Duivenvoorden, Hugo J.; Ribbers, Gerard M.

    2010-01-01

    Background: This study explores the psychometric qualities of the Scenario Test, a new test to assess daily-life communication in severe aphasia. The test is innovative in that it: (1) examines the effectiveness of verbal and non-verbal communication; and (2) assesses patients' communication in an interactive setting, with a supportive…

  14. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  15. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  16. From Sensory Perception to Lexical-Semantic Processing: An ERP Study in Non-Verbal Children with Autism.

    Science.gov (United States)

    Cantiani, Chiara; Choudhury, Naseem A; Yu, Yan H; Shafer, Valerie L; Schwartz, Richard G; Benasich, April A

    2016-01-01

    This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically-developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD.

  17. From Sensory Perception to Lexical-Semantic Processing: An ERP Study in Non-Verbal Children with Autism

    Science.gov (United States)

    Cantiani, Chiara; Choudhury, Naseem A.; Yu, Yan H.; Shafer, Valerie L.; Schwartz, Richard G.; Benasich, April A.

    2016-01-01

    This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically-developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD. PMID:27560378

  18. Processing of unconventional stimuli requires the recruitment of the non-specialized hemisphere

    Directory of Open Access Journals (Sweden)

    Yoed Nissan Kenett

    2015-02-01

    Full Text Available In the present study we investigate hemispheric processing of conventional and unconventional visual stimuli in the context of visual and verbal creative ability. In Experiment 1, we studied two unconventional visual recognition tasks – Mooney face and objects' silhouette recognition – and found a significant relationship between measures of verbal creativity and unconventional face recognition. In Experiment 2 we used the split visual field paradigm to investigate hemispheric processing of conventional and unconventional faces and its relation to verbal and visual characteristics of creativity. Results showed that while conventional faces were better processed by the specialized right hemisphere, unconventional faces were better processed by the non-specialized left hemisphere. In addition, only unconventional face processing by the non-specialized left hemisphere was related to verbal and visual measures of creative ability. Our findings demonstrate the role of the non-specialized hemisphere in processing unconventional stimuli and how it relates to creativity.

  19. Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors

    Science.gov (United States)

    Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias

    The goal of this paper is to build a bridge between social relationship and cultural variation to predict conversants' non-verbal behaviors. This idea serves as a basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of the agent's nonverbal behaviors in HAI. As the first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social parameters with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.

  20. Persistent non-verbal memory impairment in remitted major depression - caused by encoding deficits?

    Science.gov (United States)

    Behnken, Andreas; Schöning, Sonja; Gerss, Joachim; Konrad, Carsten; de Jong-Meyer, Renate; Zwanzger, Peter; Arolt, Volker

    2010-04-01

    While neuropsychological impairments are well described in acute phases of major depressive disorders (MDD), little is known about the neuropsychological profile in remission. There is evidence for episodic memory impairments in both acutely depressed and remitted patients with MDD. Learning and memory depend on individuals' ability to organize information during learning. This study investigates non-verbal memory functions in remitted MDD and whether non-verbal memory performance is mediated by organizational strategies whilst learning. 30 well-characterized, fully remitted individuals with unipolar MDD and 30 healthy controls matched for age, sex, and education were investigated. Non-verbal learning and memory were measured by the Rey-Osterrieth Complex Figure Test (RCFT). The RCFT provides measures of planning, organizational skills, perceptual and non-verbal memory functions. For assessing the mediating effects of organizational strategies, we used the Savage Organizational Score. Compared to healthy controls, participants with remitted MDD showed more deficits in their non-verbal memory function. Moreover, participants with remitted MDD demonstrated difficulties in organizing non-verbal information appropriately during learning. In contrast, no impairments regarding visual-spatial functions in remitted MDD were observed. Except for one patient, all the others were taking psychopharmacological medication. Neuropsychological function was investigated solely in the remitted phase of MDD. Individuals with MDD in remission showed persistent non-verbal memory impairments, modulated by a deficient use of organizational strategies during encoding. Therefore, our results strongly argue for additional therapeutic interventions in order to improve these remaining deficits in cognitive function. Copyright 2009 Elsevier B.V. All rights reserved.

  1. Pedagogical and didactical rationale of phonemic stimulation process in pre-school age children

    Directory of Open Access Journals (Sweden)

    López, Yudenia

    2010-01-01

    Full Text Available The paper describes the main results of a regional research project dealing with pre-school education. It examines the effectiveness of the didactic conception of the process of phonemic stimulation in children from 3 to 5 years old. The pedagogical and didactic rationale of the process, viewed from an evolutionary, ontogenetic, and systemic perspective, is explained. Likewise, possible scaffolding is illustrated. The suggested procedures focus the provision of support on systematic and purposeful practice, which involves first the discrimination of non-verbal sounds and later the discrimination of verbal sounds, aiming at the development of phonological awareness.

  2. Condom use: exploring verbal and non-verbal communication strategies among Latino and African American men and women.

    Science.gov (United States)

    Zukoski, Ann P; Harvey, S Marie; Branch, Meredith

    2009-08-01

    A growing body of literature provides evidence of a link between communication with sexual partners and safer sexual practices, including condom use. More research is needed that explores the dynamics of condom communication including gender differences in initiation, and types of communication strategies. The overall objective of this study was to explore condom use and the dynamics surrounding condom communication in two distinct community-based samples of African American and Latino heterosexual couples at increased risk for HIV. Based on 122 in-depth interviews, 80% of women and 74% of men reported ever using a condom with their primary partner. Of those who reported ever using a condom with their current partner, the majority indicated that condom use was initiated jointly by men and women. In addition, about one-third of the participants reported that the female partner took the lead and let her male partner know she wanted to use a condom. A sixth of the sample reported that men initiated use. Although over half of the respondents used bilateral verbal strategies (reminding, asking and persuading) to initiate condom use, one-fourth used unilateral verbal strategies (commanding and threatening to withhold sex). A smaller number reported using non-verbal strategies involving condoms themselves (e.g. putting a condom on or getting condoms). The results suggest that interventions designed to improve condom use may need to include both members of a sexual dyad and focus on improving verbal and non-verbal communication skills of individuals and couples.

  3. Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study.

    Directory of Open Access Journals (Sweden)

    Catherine Y Wan

    Full Text Available Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

  4. Non-musical sound branding – a conceptualization and research overview

    DEFF Research Database (Denmark)

    Graakjær, Nicolai J.; Bonde, Anders

    2018-01-01

    Purpose The purpose of this paper is to advance the understanding of sound branding by developing a new conceptual framework and providing an overview of the research literature on non-musical sound. Design/methodology/approach Using four mutually exclusive and collectively exhaustive types of non-musical sound, the paper assesses and synthesizes 99 significant studies across various scholarly fields. Findings The overview reveals two areas in which more research may be warranted, that is, non-musical atmospherics and non-musical sonic logos. Moreover, future sound-branding research should examine in further detail the potentials of developed versus annexed object sounds, and mediated versus unmediated brand sounds. Research limitations/implications The paper provides important insights into critical issues that suggest directions for further research on non-musical sound branding.

  5. Negative Symptoms and Avoidance of Social Interaction: A Study of Non-Verbal Behaviour.

    Science.gov (United States)

    Worswick, Elizabeth; Dimic, Sara; Wildgrube, Christiane; Priebe, Stefan

    2018-01-01

    Non-verbal behaviour is fundamental to social interaction. Patients with schizophrenia display an expressivity deficit of non-verbal behaviour, exhibiting behaviour that differs from both healthy subjects and patients with different psychiatric diagnoses. The present study aimed to explore the association between non-verbal behaviour and symptom domains, overcoming methodological shortcomings of previous studies. Standardised interviews with 63 outpatients diagnosed with schizophrenia were videotaped. Symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS), the Positive and Negative Syndrome Scale (PANSS) and the Calgary Depression Scale. Independent raters later analysed the videos for non-verbal behaviour, using a modified version of the Ethological Coding System for Interviews (ECSI). Patients with a higher level of negative symptoms displayed significantly fewer prosocial (e.g., nodding and smiling), gesture, and displacement behaviours (e.g., fumbling), but significantly more flight behaviours (e.g., looking away, freezing). No gender differences were found, and these associations held true when adjusted for antipsychotic medication dosage. Negative symptoms are associated with both a lower level of actively engaging non-verbal behaviour and an increased active avoidance of social contact. Future research should aim to identify the mechanisms behind flight behaviour, with implications for the development of treatments to improve social functioning. © 2017 S. Karger AG, Basel.

  6. Verbal Knowledge, Working Memory, and Processing Speed as Predictors of Verbal Learning in Older Adults

    Science.gov (United States)

    Rast, Philippe

    2011-01-01

    The present study aimed at modeling individual differences in a verbal learning task by means of a latent structured growth curve approach based on an exponential function that yielded three parameters: initial recall, learning rate, and asymptotic performance. Three cognitive variables--speed of information processing, verbal knowledge, working…
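
    The exponential function mentioned above has a simple closed form, e.g. y(t) = asymptote - (asymptote - initial) * exp(-rate * (t - 1)); that parameterisation is an assumption here. The sketch below fits this form to a single recall-by-trial series with synthetic data; it is a per-series nonlinear fit for illustration, not the latent structured growth curve model itself:

```python
# Sketch: fit a 3-parameter exponential learning curve to recall-by-trial data.
# Parameterisation and data are illustrative assumptions, not the study's model.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(trial, initial, rate, asymptote):
    return asymptote - (asymptote - initial) * np.exp(-rate * (trial - 1))

trials = np.arange(1, 9)
recall = np.array([4.0, 7.5, 9.8, 11.5, 12.4, 13.1, 13.3, 13.6])  # words recalled per trial

params, _ = curve_fit(learning_curve, trials, recall, p0=[4.0, 0.5, 14.0])
initial, rate, asymptote = params
print(f"initial={initial:.1f}, rate={rate:.2f}, asymptote={asymptote:.1f}")
```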

  7. Improvising Non-Verbally for Learning the Spoken Language

    Directory of Open Access Journals (Sweden)

    Francine Chaîné

    2015-04-01

    Full Text Available A reflective text on the practice of improvisation in a school context with a view to learning the spoken language. One might think that verbal improvisation is the means par excellence for language learning, but experience has shown us the richness of non-verbal improvisation followed by spoken reflection on the practice as a privileged approach. The article is illustrated with a non-verbal improvisation workshop aimed at children or adolescents.

  8. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    Science.gov (United States)

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, while retaining the ability to identify non-verbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury. We describe such a patient. He was able to clearly distinguish between language and non-verbal sounds, and he did not have any difficulty in identifying environmental sounds. However, he did not follow oral commands and could not repeat words or write them to dictation. On the other hand, he had fluent and comprehensible speech and was able to read and understand written words and sentences. Diagnosis: Verbal auditory agnosia. Intervention: He received speech therapy and cognitive rehabilitation during his hospitalization, and he practiced understanding verbal language with written sentences provided alongside. Two months after hospitalization, he regained the ability to understand some spoken words. Six months after hospitalization, his comprehension of spoken language had improved to an understandable level when speech was delivered slowly and face to face, but his comprehension remained at the word level rather than the sentence level. This case teaches that the evaluation of auditory functions, as well as cognition and language functions, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  9. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment.

    Science.gov (United States)

    de Sousa Paiva, Simone; Galvão, Marli Teresinha Gimeniz; Pagliuca, Lorita Marlena Freitag; de Almeida, Paulo César

    2010-01-01

    Non-verbal communication is predominant in the mother-child relationship. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication such as paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted inferring that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection can be a determining factor in the formation of the mother's strong attachment to her child after birth.

  10. Abnormal neural hierarchy in processing of verbal information in patients with schizophrenia.

    Science.gov (United States)

    Lerner, Yulia; Bleich-Cohen, Maya; Solnik-Knirsh, Shimrit; Yogev-Seligmann, Galit; Eisenstein, Tamir; Madah, Waheed; Shamir, Alon; Hendler, Talma; Kremer, Ilana

    2018-01-01

    Previous research indicates abnormal comprehension of verbal information in patients with schizophrenia. Yet the neural mechanism underlying the breakdown of verbal information processing in schizophrenia is poorly understood. Imaging studies in healthy populations have shown a network of brain areas involved in hierarchical processing of verbal information over time. Here, we identified critical aspects of this hierarchy, examining patients with schizophrenia. Using functional magnetic resonance imaging, we examined various levels of information comprehension elicited by naturally presented verbal stimuli, from a set of randomly shuffled words to an intact story. Specifically, patients with first episode schizophrenia (N = 15), their non-manifesting siblings (N = 14) and healthy controls (N = 15) listened to a narrated story and randomly scrambled versions of it. To quantify the degree of dissimilarity between the groups, we adopted an inter-subject correlation (inter-SC) approach, which estimates differences in synchronization of neural responses within and between groups. The temporal topography found in the healthy and sibling groups was consistent with our previous findings - high synchronization in responses from early sensory toward high-order perceptual and cognitive areas. In patients with schizophrenia, stimuli with short and intermediate temporal scales evoked a typical pattern of reliable responses, whereas the story condition (long temporal scale) revealed robust and widespread disruption of the inter-SCs. In addition, the more similar the neural activity of patients with schizophrenia was to the average response in the healthy group, the less severe the positive symptoms of the patients. Our findings suggest that system-level neural indication of abnormal verbal information processing in schizophrenia reflects disease manifestations.

  11. Abnormal neural hierarchy in processing of verbal information in patients with schizophrenia

    Directory of Open Access Journals (Sweden)

    Yulia Lerner

    2018-01-01

    Full Text Available Previous research indicates abnormal comprehension of verbal information in patients with schizophrenia. Yet the neural mechanism underlying the breakdown of verbal information processing in schizophrenia is poorly understood. Imaging studies in healthy populations have shown a network of brain areas involved in hierarchical processing of verbal information over time. Here, we identified critical aspects of this hierarchy, examining patients with schizophrenia. Using functional magnetic resonance imaging, we examined various levels of information comprehension elicited by naturally presented verbal stimuli, from a set of randomly shuffled words to an intact story. Specifically, patients with first episode schizophrenia (N = 15), their non-manifesting siblings (N = 14) and healthy controls (N = 15) listened to a narrated story and randomly scrambled versions of it. To quantify the degree of dissimilarity between the groups, we adopted an inter-subject correlation (inter-SC) approach, which estimates differences in synchronization of neural responses within and between groups. The temporal topography found in the healthy and sibling groups was consistent with our previous findings – high synchronization in responses from early sensory toward high-order perceptual and cognitive areas. In patients with schizophrenia, stimuli with short and intermediate temporal scales evoked a typical pattern of reliable responses, whereas the story condition (long temporal scale) revealed robust and widespread disruption of the inter-SCs. In addition, the more similar the neural activity of patients with schizophrenia was to the average response in the healthy group, the less severe the positive symptoms of the patients. Our findings suggest that system-level neural indication of abnormal verbal information processing in schizophrenia reflects disease manifestations.
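
    The inter-subject correlation (inter-SC) logic described in the two records above can be sketched generically: correlate each subject's regional time course with the average time course of a reference group, either leave-one-out within a group or the other group's mean between groups. The following Python sketch uses synthetic data and assumed array shapes (subjects x timepoints); it is not the authors' pipeline:

```python
# Sketch of inter-subject correlation (inter-SC). Synthetic data; shapes assumed.
import numpy as np

def inter_subject_correlation(group_a, group_b=None):
    """Within-group (leave-one-out) or between-group inter-SC per subject."""
    corrs = []
    for i, subj in enumerate(group_a):
        if group_b is None:                      # within-group: leave one out
            reference = np.delete(group_a, i, axis=0).mean(axis=0)
        else:                                    # between-group: other group's mean
            reference = group_b.mean(axis=0)
        corrs.append(np.corrcoef(subj, reference)[0, 1])
    return np.array(corrs)

rng = np.random.default_rng(2)
shared = rng.standard_normal(300)                            # shared stimulus-driven signal
healthy = shared + 0.5 * rng.standard_normal((15, 300))      # 15 subjects, 300 timepoints
patients = 0.3 * shared + 0.9 * rng.standard_normal((15, 300))
print("healthy within-group inter-SC:", inter_subject_correlation(healthy).mean().round(2))
print("patients vs healthy inter-SC:", inter_subject_correlation(patients, healthy).mean().round(2))
```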

  12. Oncologists' non-verbal behavior and analog patients' recall of information.

    Science.gov (United States)

    Hillen, Marij A; de Haes, Hanneke C J M; van Tienhoven, Geertjan; van Laarhoven, Hanneke W M; van Weert, Julia C M; Vermeulen, Daniëlle M; Smets, Ellen M A

    2016-06-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture and smiling, on patients' recall of information and perceived friendliness of the oncologist. Moreover, the influence of patient characteristics on recall was examined, both directly and as a moderator of non-verbal communication. Material and methods Non-verbal communication of an oncologist was experimentally varied using video vignettes. In total 194 breast cancer patients/survivors and healthy women participated as 'analog patients', viewing a randomly selected video version while imagining themselves in the role of the patient. Directly after viewing, they evaluated the oncologist. From 24 to 48 hours later, participants' passive recall, i.e. recognition, and free recall of information provided by the oncologist were assessed. Results Participants' recognition was higher if the oncologist maintained more consistent eye contact (β = 0.17). More eye contact and smiling led to a perception of the oncologist as more friendly. Body posture and smiling did not significantly influence recall. Older age predicted significantly worse recognition (β = -0.28) and free recall (β = -0.34) of information. Conclusion Oncologists may be able to facilitate their patients' recall functioning through consistent eye contact. This seems particularly relevant for older patients, whose recall is significantly worse. These findings can be used in training, focused on how to maintain eye contact while managing computer tasks.

  13. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    Science.gov (United States)

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Parents' and Physiotherapists' Recognition of Non-Verbal Communication of Pain in Individuals with Cerebral Palsy.

    Science.gov (United States)

    Riquelme, Inmaculada; Pades Jiménez, Antonia; Montoya, Pedro

    2017-08-29

    Pain assessment is difficult in individuals with cerebral palsy (CP). This is of particular relevance in children with communication difficulties, when non-verbal pain behaviors could be essential for appropriate pain recognition. Parents are considered good proxies in the recognition of pain in their children; however, health professionals also need a good understanding of their patients' pain experience. This study aims at analyzing the agreement between parents' and physiotherapists' assessments of verbal and non-verbal pain behaviors in individuals with CP. A written survey about pain characteristics and non-verbal pain expression of 96 persons with CP (45 classified as communicative, and 51 as non-communicative individuals) was performed. Parents and physiotherapists displayed a high agreement in their estimations of the presence of chronic pain, healthcare seeking, pain intensity and pain interference, as well as in non-verbal pain behaviors. Physiotherapists and parents can recognize pain behaviors in individuals with CP regardless of communication disabilities.

  15. Non-verbal Full Body Emotional and Social Interaction: A Case Study on Multimedia Systems for Active Music Listening

    Science.gov (United States)

    Camurri, Antonio

    Research on HCI and multimedia systems for art and entertainment based on non-verbal, full-body, emotional and social interaction is the main topic of this paper. A short review of previous research projects in this area at our centre is presented to introduce the main issues discussed in the paper. In particular, a case study based on novel paradigms of social active music listening is presented. The active music listening experience enables users to dynamically mould expressive performance of music and of audiovisual content. This research is partially supported by the EU-ICT FP7 Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu).

  16. Shall we use non-verbal fluency in schizophrenia? A pilot study.

    Science.gov (United States)

    Rinaldi, Romina; Trappeniers, Julie; Lefebvre, Laurent

    2014-05-30

    Over the last few years, numerous studies have attempted to explain fluency impairments in people with schizophrenia, leading to heterogeneous results. This could notably be due to the fact that fluency is often used in its verbal form, where semantic dimensions are implied. In order to gain an in-depth understanding of fluency deficits, a non-verbal fluency task - the Five-Point Test (5PT) - was administered to 24 patients with schizophrenia and to 24 healthy subjects matched in terms of age, gender and schooling. The 5PT involves producing as many abstract figures as possible within 1 min by connecting points with straight lines. All subjects also completed the Frontal Assessment Battery (FAB), while those with schizophrenia were further assessed using the Positive and Negative Syndrome Scale (PANSS). Results show that the 5PT differentiates patients from healthy subjects with regard to the number of figures produced. Patients' results also suggest that the number of figures produced is linked to "overall executive functioning" and to some inhibition components. Although this study is a first step in the non-verbal fluency research field, we believe that experimental psychopathology could benefit from investigations of non-verbal fluency. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. The role of non-verbal behaviour in racial disparities in health care: implications and solutions.

    Science.gov (United States)

    Levine, Cynthia S; Ambady, Nalini

    2013-09-01

    People from racial minority backgrounds report less trust in their doctors and have poorer health outcomes. Although these deficiencies have multiple roots, one important set of explanations involves racial bias, which may be non-conscious, on the part of providers, and minority patients' fears that they will be treated in a biased way. Here, we focus on one mechanism by which this bias may be communicated and reinforced: namely, non-verbal behaviour in the doctor-patient interaction. We review 2 lines of research on race and non-verbal behaviour: (i) the ways in which a patient's race can influence a doctor's non-verbal behaviour toward the patient, and (ii) the relative difficulty that doctors can have in accurately understanding the nonverbal communication of non-White patients. Further, we review research on the implications that both lines of work can have for the doctor-patient relationship and the patient's health. The research we review suggests that White doctors interacting with minority group patients are likely to behave and respond in ways that are associated with worse health outcomes. As doctors' disengaged non-verbal behaviour towards minority group patients and lower ability to read minority group patients' non-verbal behaviours may contribute to racial disparities in patients' satisfaction and health outcomes, solutions that target non-verbal behaviour may be effective. A number of strategies for such targeting are discussed. © 2013 John Wiley & Sons Ltd.

  18. Language, Power, Multilingual and Non-Verbal Multicultural Communication

    NARCIS (Netherlands)

    Marácz, L.; Zhuravleva, E.A.

    2014-01-01

    Due to developments in internal migration and mobility there is a proliferation of linguistic diversity, multilingual and non-verbal multicultural communication. At the same time the recognition of the use of one’s first language receives more and more support in international political, legal and

  19. Effects of musical training on sound pattern processing in high-school students.

    Science.gov (United States)

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns, compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited at different stimulus onset asynchrony (SOA) conditions in non-musicians than in musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing the ability to automatically recognize sequential sound patterns over longer time periods than non-musical counterparts.
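
    The MMN referred to here is conventionally quantified as a deviant-minus-standard difference wave averaged over epochs. Below is a generic sketch with synthetic epochs; the sampling rate, epoch window, and latency window are illustrative assumptions, not the study's parameters:

```python
# Sketch: compute a mismatch negativity (MMN) difference wave from epoched EEG.
# Shapes are assumptions: epochs x timepoints, one channel, 500 Hz sampling.
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """Deviant-minus-standard ERP difference wave (mean over epochs)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.4, 1 / fs)           # -100..400 ms epoch window
rng = np.random.default_rng(3)
standard = rng.normal(0, 1, (200, t.size))
# Deviant epochs carry a synthetic negativity peaking around 150 ms.
deviant = rng.normal(0, 1, (80, t.size)) - 2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))

mmn = difference_wave(standard, deviant)
window = (t >= 0.10) & (t <= 0.25)         # typical MMN latency window
print(f"mean MMN amplitude 100-250 ms: {mmn[window].mean():.2f} (arbitrary units)")
```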

  20. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviewed the subjects following one of two scripted scenarios: in one scenario the actor showed minimal engagement with the subject; the second scenario included active listening by the doctor and attentiveness to the subject. We analyze the cross-correlation of the total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
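
    The two analyses named in this abstract, cross-correlation of the dyad's kinetic-energy time series and the frequency spectrum of motion, can be sketched generically as follows. The frame rate, the synthetic signals, and the lag convention are assumptions, not the authors' pipeline:

```python
# Sketch: synchrony analysis of two motion-energy time series (doctor, subject).
# Cross-correlation at signed lags indicates who leads; the FFT gives the
# frequency spectrum of each person's motion. Synthetic signals; fps assumed.
import numpy as np

def normalized_xcorr(x, y, max_lag):
    """Correlation of z-scored signals at lags -max_lag..+max_lag (y relative to x)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.mean(x[max(0, -l):len(x) - max(0, l)] *
                                   y[max(0, l):len(y) - max(0, -l)]) for l in lags])

fps = 25
t = np.arange(0, 120, 1 / fps)                       # two minutes of video
rng = np.random.default_rng(4)
doctor = np.abs(np.sin(2 * np.pi * 0.5 * t)) + 0.2 * rng.standard_normal(t.size)
subject = np.roll(doctor, int(0.4 * fps)) + 0.2 * rng.standard_normal(t.size)  # follows ~0.4 s later

lags, xc = normalized_xcorr(doctor, subject, max_lag=2 * fps)
peak_lag_s = lags[np.argmax(xc)] / fps
spectrum = np.abs(np.fft.rfft(subject - subject.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
print(f"peak synchrony at lag {peak_lag_s:+.2f} s; dominant motion frequency "
      f"{freqs[np.argmax(spectrum)]:.2f} Hz")
```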

  1. Non-verbal behaviour in nurse-elderly patient communication.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.

    1999-01-01

    This study explores the occurrence of non-verbal communication in nurse-elderly patient interaction in two different care settings: home nursing and a home for the elderly. In a sample of 181 nursing encounters involving 47 nurses a study was made of videotaped nurse-patient communication. Six

  2. [Non-verbal communication of patients submitted to heart surgery: from awaking after anesthesia to extubation].

    Science.gov (United States)

    Werlang, Sueli da Cruz; Azzolin, Karina; Moraes, Maria Antonieta; de Souza, Emiliane Nogueira

    2008-12-01

    Preoperative orientation is an essential tool for patients' communication after surgery. This study had the objective of evaluating non-verbal communication of patients submitted to cardiac surgery from the time of awaking from anesthesia until extubation, after having received preoperative orientation by nurses. A quantitative cross-sectional study was developed in a reference hospital of the state of Rio Grande do Sul, Brazil, from March to July 2006. Data were collected in the pre- and postoperative periods. A questionnaire to evaluate non-verbal communication on awaking from sedation was applied to a sample of 100 patients. Statistical analysis included Student's t, Wilcoxon, and Mann-Whitney tests. Most of the patients responded satisfactorily to non-verbal communication strategies as instructed during the preoperative orientation. Thus, non-verbal communication based on preoperative orientation was helpful during the awaking period.

  3. Non-verbal Persuasion and Communication in an Affective Agent

    DEFF Research Database (Denmark)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, "it's not what you say, it's how you say it". While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining the critical role of non-verbal behaviour during face-to-face communication. In this chapter we restrict the discussion to body language. We also consider embodied virtual agents. As is the case with humans, there are a number of fundamental factors to be considered when constructing persuasive agents.

  4. Videotutoring, Non-Verbal Communication and Initial Teacher Training.

    Science.gov (United States)

    Nichol, Jon; Watson, Kate

    2000-01-01

    Describes the use of video tutoring for distance education within the context of a post-graduate teacher training course at the University of Exeter. Analysis of the tapes used a protocol based on non-verbal communication research, and findings suggest that the interaction of participants was significantly different from face-to-face…

  5. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    Science.gov (United States)

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

    Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. Differential patterns of prefrontal MEG activation during verbal & visual encoding and retrieval.

    Science.gov (United States)

    Prendergast, Garreth; Limbrick-Oldfield, Eve; Ingamells, Ed; Gathercole, Susan; Baddeley, Alan; Green, Gary G R

    2013-01-01

    The spatiotemporal profile of activation of the prefrontal cortex in verbal and non-verbal recognition memory was examined using magnetoencephalography (MEG). Sixteen neurologically healthy right-handed participants were scanned whilst carrying out a modified version of the Doors and People Test of recognition memory. A pattern of significant prefrontal activity was found for non-verbal and verbal encoding and recognition. During encoding, verbal stimuli activated an area in the left ventromedial prefrontal cortex, and non-verbal stimuli activated an area in the right. A region in the left dorsolateral prefrontal cortex also showed significant activation during the encoding of non-verbal stimuli. Both verbal and non-verbal stimuli significantly activated an area in the right dorsomedial prefrontal cortex and the right anterior prefrontal cortex during successful recognition; however, these areas showed temporally distinct activation dependent on material, with non-verbal stimuli showing activation earlier than verbal stimuli. Additionally, non-verbal material activated an area in the left anterior prefrontal cortex during recognition. These findings suggest a material-specific laterality in the ventromedial prefrontal cortex during encoding of verbal and non-verbal material, but also support the HERA model for verbal material. The discovery of two process-dependent areas during recognition that showed patterns of temporal activation dependent on material demonstrates the need for the application of more temporally sensitive techniques to the involvement of the prefrontal cortex in recognition memory.

  7. Algorithmic modeling of the irrelevant sound effect (ISE) by the hearing sensation fluctuation strength.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weissgerber, Tobias; Kerber, Stefan; Fastl, Hugo; Hellbrück, Jürgen

    2012-01-01

    Background sounds, such as narration, music with prominent staccato passages, and office noise, impair verbal short-term memory even when these sounds are irrelevant. This irrelevant sound effect (ISE) is evoked by so-called changing-state sounds that are characterized by a distinct temporal structure with varying successive auditory-perceptive tokens. However, because of the absence of an appropriate psychoacoustically based instrumental measure, the disturbing impact of a given speech or nonspeech sound could not be predicted until now, but necessitated behavioral testing. Our database for parametric modeling of the ISE included approximately 40 background sounds (e.g., speech, music, tone sequences, office noise, traffic noise) and corresponding performance data that was collected from 70 behavioral measurements of verbal short-term memory. The hearing sensation fluctuation strength was chosen to model the ISE; it describes the percept of fluctuations when listening to slowly modulated sounds (f(mod) < 20 Hz). For the background sounds, the algorithm estimated behavioral performance data in 63 of 70 cases within the interquartile ranges. In particular, all real-world sounds were modeled adequately, whereas the algorithm overestimated the (non-)disturbance impact of synthetic steady-state sounds that were constituted by a repeated vowel or tone. Implications of the algorithm's strengths and prediction errors are discussed.
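
    Fluctuation strength proper is a standardized psychoacoustic quantity (Fastl and Zwicker) computed from critical-band envelopes, and the predictive algorithm itself is not given in this record. As a loosely related illustration only, the sketch below computes the low-frequency modulation energy of a signal's envelope, a crude proxy that separates steady-state from changing-state sounds:

```python
# Very rough proxy only: this merely measures low-frequency (< 20 Hz) modulation
# energy of a crude envelope. It is NOT the authors' algorithm and NOT a
# standard-conformant fluctuation strength implementation.
import numpy as np

def envelope_modulation_energy(signal, fs, fmod_max=20.0):
    envelope = np.abs(signal)                          # crude envelope (no Hilbert/filterbank)
    envelope = envelope - envelope.mean()
    spectrum = np.abs(np.fft.rfft(envelope)) / len(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)
    band = (freqs > 0) & (freqs < fmod_max)
    return float(np.sum(spectrum[band] ** 2))

fs = 16000
t = np.arange(0, 5, 1 / fs)
steady = np.sin(2 * np.pi * 440 * t)                                # steady-state tone
changing = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 4 * t)))  # 4 Hz on/off

print("steady-state  :", envelope_modulation_energy(steady, fs))
print("changing-state:", envelope_modulation_energy(changing, fs))
```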

  8. Apathy and Reduced Speed of Processing Underlie Decline in Verbal Fluency following DBS

    Directory of Open Access Journals (Sweden)

    Jennifer A. Foley

    2017-01-01

    Objective. Reduced verbal fluency is a strikingly uniform finding following deep brain stimulation (DBS) for Parkinson’s disease (PD). The precise cognitive mechanism underlying this reduction remains unclear, but theories have suggested reduced motivation, linguistic skill, and/or executive function. It is of note, however, that previous reports have failed to consider the potential role of any changes in speed of processing. Thus, the aim of this study was to examine verbal fluency changes with a particular focus on the role of cognitive speed. Method. In this study, 28 patients with PD completed measures of verbal fluency, motivation, language, executive functioning, and speed of processing, before and after DBS. Results. As expected, there was a marked decline in verbal fluency but also in a timed test of executive functions and two measures of speed of processing. Verbal fluency decline was associated with markers of linguistic and executive functioning, but not after speed of processing was statistically controlled for. In contrast, greater decline in verbal fluency was associated with higher levels of apathy at baseline, which was not associated with changes in cognitive speed. Discussion. Reduced generativity and processing speed may account for the marked reduction in verbal fluency commonly observed following DBS.

  9. Apathy and Reduced Speed of Processing Underlie Decline in Verbal Fluency following DBS

    Science.gov (United States)

    Foltynie, Tom; Zrinzo, Ludvic; Hyam, Jonathan A.; Limousin, Patricia

    2017-01-01

    Objective. Reduced verbal fluency is a strikingly uniform finding following deep brain stimulation (DBS) for Parkinson's disease (PD). The precise cognitive mechanism underlying this reduction remains unclear, but theories have suggested reduced motivation, linguistic skill, and/or executive function. It is of note, however, that previous reports have failed to consider the potential role of any changes in speed of processing. Thus, the aim of this study was to examine verbal fluency changes with a particular focus on the role of cognitive speed. Method. In this study, 28 patients with PD completed measures of verbal fluency, motivation, language, executive functioning, and speed of processing, before and after DBS. Results. As expected, there was a marked decline in verbal fluency but also in a timed test of executive functions and two measures of speed of processing. Verbal fluency decline was associated with markers of linguistic and executive functioning, but not after speed of processing was statistically controlled for. In contrast, greater decline in verbal fluency was associated with higher levels of apathy at baseline, which was not associated with changes in cognitive speed. Discussion. Reduced generativity and processing speed may account for the marked reduction in verbal fluency commonly observed following DBS. PMID:28408788

  10. Apathy and Reduced Speed of Processing Underlie Decline in Verbal Fluency following DBS.

    Science.gov (United States)

    Foley, Jennifer A; Foltynie, Tom; Zrinzo, Ludvic; Hyam, Jonathan A; Limousin, Patricia; Cipolotti, Lisa

    2017-01-01

    Objective. Reduced verbal fluency is a strikingly uniform finding following deep brain stimulation (DBS) for Parkinson's disease (PD). The precise cognitive mechanism underlying this reduction remains unclear, but theories have suggested reduced motivation, linguistic skill, and/or executive function. It is of note, however, that previous reports have failed to consider the potential role of any changes in speed of processing. Thus, the aim of this study was to examine verbal fluency changes with a particular focus on the role of cognitive speed. Method. In this study, 28 patients with PD completed measures of verbal fluency, motivation, language, executive functioning, and speed of processing, before and after DBS. Results. As expected, there was a marked decline in verbal fluency but also in a timed test of executive functions and two measures of speed of processing. Verbal fluency decline was associated with markers of linguistic and executive functioning, but not after speed of processing was statistically controlled for. In contrast, greater decline in verbal fluency was associated with higher levels of apathy at baseline, which was not associated with changes in cognitive speed. Discussion. Reduced generativity and processing speed may account for the marked reduction in verbal fluency commonly observed following DBS.
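    The key control analysis in this record, testing whether the fluency-executive association survives when speed of processing is statistically controlled for, can be illustrated with a partial correlation. The sketch below uses simulated placeholder data, not the study's measurements.

        # Sketch of "statistically controlling for" a covariate via partial
        # correlation: regress the covariate out of both variables and correlate
        # the residuals. All data below are simulated placeholders.
        import numpy as np

        def partial_corr(x, y, covar):
            """Correlation of x and y after regressing covar out of both."""
            design = np.column_stack([np.ones_like(covar), covar])
            rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
            ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
            return np.corrcoef(rx, ry)[0, 1]

        rng = np.random.default_rng(0)
        speed_change = rng.normal(size=28)                                    # processing speed
        fluency_change = 0.8 * speed_change + rng.normal(scale=0.5, size=28)  # verbal fluency
        executive_change = 0.8 * speed_change + rng.normal(scale=0.5, size=28)

        print("zero-order r:", round(float(np.corrcoef(fluency_change, executive_change)[0, 1]), 2))
        print("partial r   :", round(float(partial_corr(fluency_change, executive_change, speed_change)), 2))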

  11. The visual attention span deficit in dyslexia is visual and not verbal.

    Science.gov (United States)

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  12. Differential patterns of prefrontal MEG activation during verbal & visual encoding and retrieval.

    Directory of Open Access Journals (Sweden)

    Garreth Prendergast

    The spatiotemporal profile of activation of the prefrontal cortex in verbal and non-verbal recognition memory was examined using magnetoencephalography (MEG). Sixteen neurologically healthy right-handed participants were scanned whilst carrying out a modified version of the Doors and People Test of recognition memory. A pattern of significant prefrontal activity was found for non-verbal and verbal encoding and recognition. During encoding, verbal stimuli activated an area in the left ventromedial prefrontal cortex, and non-verbal stimuli activated an area in the right. A region in the left dorsolateral prefrontal cortex also showed significant activation during the encoding of non-verbal stimuli. Both verbal and non-verbal stimuli significantly activated an area in the right dorsomedial prefrontal cortex and the right anterior prefrontal cortex during successful recognition; however, these areas showed temporally distinct activation dependent on material, with non-verbal stimuli showing activation earlier than verbal stimuli. Additionally, non-verbal material activated an area in the left anterior prefrontal cortex during recognition. These findings suggest a material-specific laterality in the ventromedial prefrontal cortex during encoding for verbal and non-verbal material, but also support the HERA model for verbal material. The discovery of two process-dependent areas during recognition that showed patterns of temporal activation dependent on material demonstrates the need to apply more temporally sensitive techniques to the study of the involvement of the prefrontal cortex in recognition memory.

  13. Sentence processing and verbal working memory in a white-matter-disconnection patient.

    Science.gov (United States)

    Meyer, Lars; Cunitz, Katrin; Obleser, Jonas; Friederici, Angela D

    2014-08-01

    The Arcuate Fasciculus/Superior Longitudinal Fasciculus (AF/SLF) is the white-matter bundle that connects posterior superior temporal and inferior frontal cortex. Its causal functional role in sentence processing and verbal working memory is currently under debate. While impairments of sentence processing and verbal working memory often co-occur in patients suffering from AF/SLF damage, it is unclear whether these impairments result from shared white-matter damage to the verbal-working-memory network. The present study sought to specify the behavioral consequences of focal AF/SLF damage for sentence processing and verbal working memory, which were assessed in a single patient suffering from a cleft-like lesion spanning the deep left superior temporal gyrus, sparing most surrounding gray matter. While tractography suggests that the ventral fronto-temporal white-matter bundle is intact in this patient, the AF/SLF was not visible to tractography. In line with the hypothesis that the AF/SLF is causally involved in sentence processing, the patient's performance was selectively impaired on sentences that jointly involve both complex word orders and long word-storage intervals. However, the patient was unimpaired on sentences that only involved long word-storage intervals without involving complex word orders. By contrast, the patient performed generally worse than a control group across standard verbal-working-memory tests. We conclude that the AF/SLF not only plays a causal role in sentence processing, linking regions of the left dorsal inferior frontal gyrus to the temporo-parietal region, but moreover plays a crucial role in verbal working memory, linking regions of the left ventral inferior frontal gyrus to the left temporo-parietal region. Together, the specific sentence-processing impairment and the more general verbal-working-memory impairment may imply that the AF/SLF subserves both sentence processing and verbal working memory, possibly pointing to the AF

  14. Musicians' Memory for Verbal and Tonal Materials under Conditions of Irrelevant Sound

    Science.gov (United States)

    Williamson, Victoria J.; Mitchell, Tom; Hitch, Graham J.; Baddeley, Alan D.

    2010-01-01

    Studying short-term memory within the framework of the working memory model and its associated paradigms (Baddeley, 2000; Baddeley & Hitch, 1974) offers the chance to compare similarities and differences between the way that verbal and tonal materials are processed. This study examined amateur musicians' short-term memory using a newly adapted…

  15. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    Science.gov (United States)

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In
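    The classification step mentioned at the end of this record can be sketched generically: a support vector machine trained on single-trial ERP amplitude features to separate conditioned from neutral faces. The feature choice, the simulated data, and the scikit-learn pipeline below are illustrative assumptions, not the study's pipeline.

        # Sketch of SVM classification of ERP features (e.g. mean amplitudes in a
        # late 600-700 ms window) into "conditioned" vs. "neutral" face trials.
        # Simulated data only; not the recordings or features used in the study.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n_trials, n_channels = 120, 8
        labels = np.repeat([0, 1], n_trials // 2)       # 0 = neutral, 1 = conditioned
        features = rng.normal(size=(n_trials, n_channels))
        features[labels == 1] += 0.6                    # small simulated ERP effect

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        scores = cross_val_score(clf, features, labels, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")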

  16. Heart rate variability during acute psychosocial stress: A randomized cross-over trial of verbal and non-verbal laboratory stressors.

    Science.gov (United States)

    Brugnera, Agostino; Zarbo, Cristina; Tarvainen, Mika P; Marchettini, Paolo; Adorni, Roberta; Compare, Angelo

    2018-05-01

    Acute psychosocial stress is typically investigated in laboratory settings using protocols with distinctive characteristics. For example, some tasks involve the action of speaking, which seems to alter Heart Rate Variability (HRV) through acute changes in respiration patterns. However, it is still unknown which task induces the strongest subjective and autonomic stress response. The present cross-over randomized trial sought to investigate the differences in perceived stress and in linear and non-linear analyses of HRV between three different verbal (Speech and Stroop) and non-verbal (Montreal Imaging Stress Task; MIST) stress tasks, in a sample of 60 healthy adults (51.7% females; mean age = 25.6 ± 3.83 years). Analyses were run controlling for respiration rates. Participants reported similar levels of perceived stress across the three tasks. However, MIST induced a stronger cardiovascular response than Speech and Stroop tasks, even after controlling for respiration rates. Finally, women reported higher levels of perceived stress and lower HRV both at rest and in response to acute psychosocial stressors, compared to men. Taken together, our results suggest the presence of gender-related differences during psychophysiological experiments on stress. They also suggest that verbal activity masked the vagal withdrawal through altered respiration patterns imposed by speaking. Therefore, our findings support the use of highly-standardized math task, such as MIST, as a valid and reliable alternative to verbal protocols during laboratory studies on stress. Copyright © 2018 Elsevier B.V. All rights reserved.
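    Two of the standard linear (time-domain) HRV indices referred to in this kind of study can be computed directly from a series of RR intervals. The sketch below uses a synthetic RR series and is not tied to the analysis software used by the authors.

        # Sketch of two time-domain HRV indices computed from RR intervals (ms):
        # SDNN (overall variability) and RMSSD (beat-to-beat, vagally mediated).
        # The RR series below is synthetic.
        import numpy as np

        def sdnn(rr_ms):
            """Standard deviation of RR intervals, in ms."""
            return float(np.std(rr_ms, ddof=1))

        def rmssd(rr_ms):
            """Root mean square of successive RR differences, in ms."""
            return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

        rng = np.random.default_rng(2)
        rr = 800.0 + np.cumsum(rng.normal(0.0, 2.0, size=300)) + rng.normal(0.0, 25.0, size=300)
        print(f"SDNN  = {sdnn(rr):5.1f} ms")
        print(f"RMSSD = {rmssd(rr):5.1f} ms")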

  17. Young Children's Understanding of Markedness in Non-Verbal Communication

    Science.gov (United States)

    Liebal, Kristin; Carpenter, Malinda; Tomasello, Michael

    2011-01-01

    Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions). We investigated whether two- and three-year-olds recognize when adults mark a non-verbal communicative act--in this case a pointing…

  18. Verbal Processing Speed and Executive Functioning in Long-Term Cochlear Implant Users

    Science.gov (United States)

    AuBuchon, Angela M.; Pisoni, David B.; Kronenberger, William G.

    2015-01-01

    Purpose: The purpose of this study was to report how "verbal rehearsal speed" (VRS), a form of covert speech used to maintain verbal information in working memory, and another verbal processing speed measure, perceptual encoding speed, are related to 3 domains of executive function (EF) at risk in cochlear implant (CI) users: verbal…

  19. Unconscious learning processes: mental integration of verbal and pictorial instructional materials.

    Science.gov (United States)

    Kuldas, Seffetullah; Ismail, Hairul Nizam; Hashim, Shahabuddin; Bakar, Zainudin Abu

    2013-12-01

    This review aims to provide an insight into human learning processes by examining the role of cognitive and emotional unconscious processing in mentally integrating visual and verbal instructional materials. Reviewed literature shows that conscious mental integration does not happen all the time, nor does it necessarily result in optimal learning. Students of all ages and levels of experience cannot always have conscious awareness, control, and the intention to learn or promptly and continually organize perceptual, cognitive, and emotional processes of learning. This review suggests considering the role of unconscious learning processes to enhance the understanding of how students form or activate mental associations between verbal and pictorial information. The understanding would assist in presenting students with spatially-integrated verbal and pictorial instructional materials as a way of facilitating mental integration and improving teaching and learning performance.

  20. Computerised respiratory sounds can differentiate smokers and non-smokers.

    Science.gov (United States)

    Oliveira, Ana; Sen, Ipek; Kahya, Yasemin P; Afreixo, Vera; Marques, Alda

    2017-06-01

    Cigarette smoking is often associated with the development of several respiratory diseases; however, if diagnosed early, the changes in the lung tissue caused by smoking may be reversible. Computerised respiratory sounds have been shown to be sensitive enough to detect changes within the lung tissue before any other measure, however it is unknown whether they are able to detect changes in the lungs of healthy smokers. This study investigated the differences between computerised respiratory sounds of healthy smokers and non-smokers. Healthy smokers and non-smokers were recruited from a university campus. Respiratory sounds were recorded simultaneously at 6 chest locations (right and left anterior, lateral and posterior) using air-coupled electret microphones. Airflow (1.0-1.5 l/s) was recorded with a pneumotachograph. Breathing phases were detected using airflow signals and respiratory sounds with validated algorithms. Forty-four participants were enrolled: 18 smokers (mean age 26.2, SD = 7 years; mean FEV1 % predicted 104.7, SD = 9) and 26 non-smokers (mean age 25.9, SD = 3.7 years; mean FEV1 % predicted 96.8, SD = 20.2). Smokers presented significantly higher frequency at maximum sound intensity during inspiration (M = 117, SD = 16.2 Hz vs. M = 106.4, SD = 21.6 Hz; t(43) = -2.62, p = 0.0081, dz = 0.55), lower expiratory sound intensities (maximum intensity: M = 48.2, SD = 3.8 dB vs. M = 50.9, SD = 3.2 dB; t(43) = 2.68, p = 0.001, dz = -0.78; mean intensity: M = 31.2, SD = 3.6 dB vs. M = 33.7, SD = 3 dB; t(43) = 2.42, p = 0.001, dz = 0.75) and higher number of inspiratory crackles (median [interquartile range] 2.2 [1.7-3.7] vs. 1.5 [1.2-2.2], p = 0.081, U = 110, r = -0.41) than non-smokers. Significant differences between computerised respiratory sounds of smokers and non-smokers have been found. Changes in respiratory sounds are often the earliest sign of disease. Thus, computerised respiratory sounds
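    Two of the features reported above, the frequency at maximum sound intensity and the mean intensity of a breathing-phase segment, can be sketched with a simple power-spectrum computation. The windowing, the decibel reference and the synthetic test signal below are assumptions for illustration, not the validated algorithms used in the study.

        # Sketch: frequency at maximum sound intensity and maximum/mean intensity
        # (dB, arbitrary reference) of one breathing-phase segment of a respiratory
        # sound. Parameters and the synthetic signal are illustrative only.
        import numpy as np

        def spectral_features(segment, fs):
            """Return (freq at max power in Hz, max level in dB, mean level in dB)."""
            windowed = segment * np.hanning(len(segment))
            power = np.abs(np.fft.rfft(windowed)) ** 2
            freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
            level_db = 10.0 * np.log10(power + 1e-12)
            k = int(np.argmax(level_db[1:]) + 1)        # skip the DC bin
            return freqs[k], float(level_db[k]), float(level_db.mean())

        fs = 4000.0                                      # assumed sampling rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)
        segment = (np.random.default_rng(3).normal(scale=0.2, size=t.size)
                   + 0.8 * np.sin(2 * np.pi * 110.0 * t))
        f_max, max_db, mean_db = spectral_features(segment, fs)
        print(f"freq at max intensity: {f_max:.0f} Hz, max {max_db:.1f} dB, mean {mean_db:.1f} dB")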

  1. Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.

    Science.gov (United States)

    Gillis, Randall L; Nilsen, Elizabeth S

    2017-06-01

    Listeners are exposed to inconsistencies in communication; for example, when speakers' words (i.e. verbal) are discrepant with their demonstrated emotions (i.e. non-verbal). Such inconsistencies introduce ambiguity, which may render a speaker to be a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

  2. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  3. Non-verbal emotion communication training induces specific changes in brain function and structure.

    Science.gov (United States)

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure.

  4. Individual Differences in Verbal and Non-Verbal Affective Responses to Smells: Influence of Odor Label Across Cultures.

    Science.gov (United States)

    Ferdenzi, Camille; Joussain, Pauline; Digard, Bérengère; Luneau, Lucie; Djordjevic, Jelena; Bensafi, Moustafa

    2017-01-01

    Olfactory perception is highly variable from one person to another, as a function of individual and contextual factors. Here, we investigated the influence of 2 important factors of variation: culture and semantic information. More specifically, we tested whether cultural-specific knowledge and presence versus absence of odor names modulate odor perception, by measuring these effects in 2 populations differing in cultural background but not in language. Participants from France and Quebec, Canada, smelled 4 culture-specific and 2 non-specific odorants in 2 conditions: first without label, then with label. Their ratings of pleasantness, familiarity, edibility, and intensity were collected as well as their psychophysiological and olfactomotor responses. The results revealed significant effects of culture and semantic information, both at the verbal and non-verbal level. They also provided evidence that availability of semantic information reduced cultural differences. Semantic information had a unifying action on olfactory perception that overrode the influence of cultural background. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Prevalence of inter-hemispheric asymmetry in children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder.

    Science.gov (United States)

    Wajnsztejn, Alessandra Bernardes Caturani; Bianco, Bianca; Barbosa, Caio Parente

    2016-01-01

    To describe clinical and epidemiological features of children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder and to investigate the prevalence of inter-hemispheric asymmetry in this population group. Cross-sectional study including children and adolescents referred for interdisciplinary assessment with learning difficulty complaints, who were given an interdisciplinary diagnosis of non-verbal learning disorder. The following variables were included in the analysis: sex-related prevalence, educational system, initial presumptive diagnoses and respective prevalence, overall non-verbal learning disorder prevalence, prevalence according to school year, age range at the time of assessment, major family complaints, presence of inter-hemispheric asymmetry, arithmetic deficits, visuoconstruction impairments and major signs and symptoms of non-verbal learning disorder. Out of 810 medical records analyzed, 14 were from individuals who met the diagnostic criteria for non-verbal learning disorder, including the presence of inter-hemispheric asymmetry. Of these 14 patients, 8 were male. The high prevalence of inter-hemispheric asymmetry suggests this parameter can be used to predict or support the diagnosis of non-verbal learning disorder. To describe the clinical and epidemiological characteristics of children and adolescents with non-verbal learning disorder, and to investigate the prevalence of inter-hemispheric asymmetry in this population group. Cross-sectional study that included children and adolescents referred for interdisciplinary assessment with complaints of learning difficulties and who received an interdisciplinary diagnosis of non-verbal learning disorder. The variables assessed were prevalence by sex, educational system, initial diagnostic hypotheses and their respective prevalences, prevalence of conditions relative to the total sample, overall prevalence of non-verbal learning disorder

  6. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    Science.gov (United States)

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gustaf Gredebäck

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  7. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation

    International Nuclear Information System (INIS)

    Crostack, H.A.; Pohl, K.Y.; Radtke, U.

    1991-01-01

    In order to circumvent the problems of coupling sound into and out of the test piece, which occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser beams. The recording of the ultrasound also occurs by a non-contact holographic interferometry technique, which permits a large-scale representation of the sound field. Using the example of MCrAlY and ZrO2 layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is demonstrated. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.) [de]

  8. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    Science.gov (United States)

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
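    A useful reference point when judging how far a synthesized diffuse field departs from the ideal case: for an ideal diffuse sound field, the spatial coherence between two points a distance d apart follows sin(kd)/(kd), with k the acoustic wavenumber. The short sketch below evaluates that reference curve; comparing it against coherence estimated from the reference-sensor signals would reveal the deviation discussed above. The parameter values are arbitrary examples, not those of the facility described.

        # Theoretical spatial coherence of an ideal diffuse sound field between two
        # points separated by distance d: gamma(f) = sin(k d) / (k d). A synthesized
        # field driven by a few loudspeakers generally deviates from this curve.
        import numpy as np

        def diffuse_field_coherence(freq_hz, distance_m, c=343.0):
            """Ideal diffuse-field spatial coherence as a function of frequency."""
            k = 2.0 * np.pi * np.asarray(freq_hz) / c
            return np.sinc(k * distance_m / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

        for f in (100.0, 250.0, 500.0, 1000.0, 2000.0):
            g = diffuse_field_coherence(f, distance_m=0.2)
            print(f"{f:7.1f} Hz  coherence {float(g):+.3f}")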

  9. “Communication by impact” and other forms of non-verbal ...

    African Journals Online (AJOL)

    This article aims to review the importance, place and especially the emotional impact of non-verbal communication in psychiatry. The paper argues that while biological psychiatry is in the ascendency with increasing discoveries being made about the functioning of the brain and psycho-pharmacology, it is important to try ...

  10. A influência da comunicação não verbal no cuidado de enfermagem La influencia de la comunicación no verbal en la atención de enfermería The influence of non-verbal communication in nursing care

    Directory of Open Access Journals (Sweden)

    Carla Cristina Viana Santos

    2005-08-01

    The study was developed at the Nursing School Alfredo Pinto (UNIRIO) and started during the development of a monograph. The object of the study is the meaning of non-verbal communication from the perspective of nursing undergraduates. The study has the following objectives: to determine how non-verbal communication is understood among nursing students and to analyze in what way that understanding influences nursing care. The methodological approach was qualitative, and dynamics of sensitivity were applied as the strategy for data collection. It was observed that undergraduate students recognize the relevance and influence of non-verbal communication in nursing care; however, there is a need to broaden knowledge of the non-verbal communication process before implementing nursing care.

  11. Verbal lie detection

    NARCIS (Netherlands)

    Vrij, Aldert; Taylor, Paul J.; Picornell, Isabel; Oxburgh, Gavin; Myklebust, Trond; Grant, Tim; Milne, Rebecca

    2015-01-01

    In this chapter, we discuss verbal lie detection and will argue that speech content can be revealing about deception. Starting with a section discussing what is, in our view, the myth that non-verbal behaviour is more revealing about deception than speech, we then provide an overview of verbal lie detection.

  12. A comparison of verbal and numerical judgments in the analytic hierarchy process

    NARCIS (Netherlands)

    Huizingh, EKRE; Vrolijk, HCJ

    In the Analytic Hierarchy Process (AHP), decision makers make pairwise comparisons of alternatives and criteria. The AHP allows to make these pairwise comparisons verbally or numerically. Although verbal statements are intuitively attractive for preference elicitation, there is overwhelming evidence

  13. Linking social cognition with social interaction: Non-verbal expressivity, social competence and "mentalising" in patients with schizophrenia spectrum disorders

    Directory of Open Access Journals (Sweden)

    Lehmkämper Caroline

    2009-01-01

    Background: Research has shown that patients with schizophrenia spectrum disorders (SSD) can be distinguished from controls on the basis of their non-verbal expression. For example, patients with SSD use facial expressions less than normals to invite and sustain social interaction. Here, we sought to examine whether non-verbal expressivity in patients corresponds with their impoverished social competence and neurocognition. Method: Fifty patients with SSD were videotaped during interviews. Non-verbal expressivity was evaluated using the Ethological Coding System for Interviews (ECSI). Social competence was measured using the Social Behaviour Scale and psychopathology was rated using the Positive and Negative Symptom Scale. Neurocognitive variables included measures of IQ, executive functioning, and two mentalising tasks, which tapped into the ability to appreciate mental states of story characters. Results: Non-verbal expressivity was reduced in patients relative to controls. Lack of "prosocial" nonverbal signals was associated with poor social competence and, partially, with impaired understanding of others' minds, but not with non-social cognition or medication. Conclusion: This is the first study to link deficits in non-verbal expressivity to levels of social skills and awareness of others' thoughts and intentions in patients with SSD.

  14. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    AFRL-AFOSR-VA-TR-2016-0298: Binaural Processing of Multiple Sound Sources. Final performance report by William Yost, Arizona State University, Tempe, AZ, covering 15 Jul 2012 to 14 Jul 2016. The three topics cited above are entirely within the scope of the AFOSR grant. Subject terms: Binaural hearing, Sound Localization, Interaural signal

  15. Kindergarteners' performance in a sound-symbol paradigm predicts early reading.

    Science.gov (United States)

    Horbach, Josefine; Scharke, Wolfgang; Cröll, Jennifer; Heim, Stefan; Günther, Thomas

    2015-11-01

    The current study examined the role of serial processing of newly learned sound-symbol associations in early reading acquisition. A computer-based sound-symbol paradigm (SSP) was administered to 243 children during their last year of kindergarten (T1), and their reading performance was assessed 1 year later in first grade (T2). Results showed that performance on the SSP measured before formal reading instruction was associated with later reading development. At T1, early readers performed significantly better than nonreaders in learning correspondences between sounds and symbols as well as in applying those correspondences in a serial manner. At T2, SSP performance measured at T1 was positively associated with reading performance. Importantly, serial application of newly learned correspondences at T1 explained unique variance in first-grade reading performance in nonreaders over and above other verbal predictors, including phonological awareness, verbal short-term memory, and rapid automatized naming. Consequently, the SSP provides a promising way to study aspects of reading in preliterate children. Copyright © 2015 Elsevier Inc. All rights reserved.
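    The "unique variance over and above other verbal predictors" claim corresponds to a hierarchical regression: compare the variance explained with and without the serial SSP measure. The sketch below runs that comparison on simulated placeholder data, not the study's sample.

        # Sketch of a hierarchical regression: does a predictor (here the serial
        # sound-symbol measure) add explained variance beyond a baseline set of
        # verbal predictors? All data are simulated placeholders.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 243
        phon_awareness = rng.normal(size=n)
        verbal_stm = rng.normal(size=n)
        ran = rng.normal(size=n)
        ssp_serial = 0.4 * phon_awareness + rng.normal(size=n)
        reading = 0.3 * phon_awareness + 0.2 * ran + 0.35 * ssp_serial + rng.normal(size=n)

        base_x = sm.add_constant(np.column_stack([phon_awareness, verbal_stm, ran]))
        full_x = sm.add_constant(np.column_stack([phon_awareness, verbal_stm, ran, ssp_serial]))
        base = sm.OLS(reading, base_x).fit()
        full = sm.OLS(reading, full_x).fit()
        print(f"R2 baseline = {base.rsquared:.3f}")
        print(f"R2 + SSP    = {full.rsquared:.3f} (delta = {full.rsquared - base.rsquared:.3f})")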

  16. Individual differences in non-verbal number acuity correlate with maths achievement.

    Science.gov (United States)

    Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa

    2008-10-02

    Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.
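    Acuity of the approximate number system is conventionally summarized by a single Weber fraction w; under the standard model, the probability of correctly choosing the larger of two numerosities is a cumulative normal of their difference scaled by w. The sketch below states that conventional model as general background; the record above does not spell out the exact fitting procedure used.

        # Standard Weber-fraction model of approximate-number acuity (background
        # convention, not the study's exact analysis): the probability of correctly
        # picking the larger of n1 and n2 is Phi((n1 - n2) / (w * sqrt(n1^2 + n2^2))).
        import math

        def p_correct(n1, n2, w):
            hi, lo = max(n1, n2), min(n1, n2)
            z = (hi - lo) / (w * math.sqrt(hi ** 2 + lo ** 2))
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

        for w in (0.15, 0.25, 0.40):                            # finer to coarser acuity
            print(f"w = {w:.2f}: P(correct, 10 vs 12) = {p_correct(10, 12, w):.2f}")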

  17. Presentation Trainer: a toolkit for learning non-verbal public speaking skills

    NARCIS (Netherlands)

    Schneider, Jan; Börner, Dirk; Van Rosmalen, Peter; Specht, Marcus

    2014-01-01

    The paper presents and outlines the demonstration of Presentation Trainer, a prototype that works as a public speaking instructor. It tracks and analyses the body posture, movements and voice of the user in order to give in- structional feedback on non-verbal communication skills. Besides exploring

  18. Deaf children’s non-verbal working memory is impacted by their language experience

    Directory of Open Access Journals (Sweden)

    Chloe eMarshall

    2015-05-01

    Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children’s reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and if language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality – spoken or signed – rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.

  19. Non-verbal communication of compassion: measuring psychophysiologic effects.

    Science.gov (United States)

    Kemper, Kathi J; Shaltout, Hossam A

    2011-12-20

    Calm, compassionate clinicians comfort others. To evaluate the direct psychophysiologic benefits of non-verbal communication of compassion (NVCC), it is important to minimize the effect of subjects' expectation. This preliminary study was designed to a) test the feasibility of two strategies for maintaining subject blinding to non-verbal communication of compassion (NVCC), and b) determine whether blinded subjects would experience psychophysiologic effects from NVCC. Subjects were healthy volunteers who were told the study was evaluating the effect of time and touch on the autonomic nervous system. The practitioner had more than 10 years' experience with loving-kindness meditation (LKM), a form of NVCC. Subjects completed 10-point visual analog scales (VAS) for stress, relaxation, and peacefulness before and after LKM. To assess physiologic effects, practitioners and subjects wore cardiorespiratory monitors to assess respiratory rate (RR), heart rate (HR) and heart rate variability (HRV) throughout the 4 10-minute study periods: Baseline (both practitioner and subjects read neutral material); non-tactile-LKM (subjects read while the practitioner practiced LKM while pretending to read); tactile-LKM (subjects rested while the practitioner practiced LKM while lightly touching the subject on arms, shoulders, hands, feet, and legs); Post-Intervention Rest (subjects rested; the practitioner read). To assess blinding, subjects were asked after the interventions what the practitioner was doing during each period (reading, touch, or something else). Subjects' mean age was 43.6 years; all were women. Blinding was maintained and the practitioner was able to maintain meditation for both tactile and non-tactile LKM interventions as reflected in significantly reduced RR. Despite blinding, subjects' VAS scores improved from baseline to post-intervention for stress (5.5 vs. 2.2), relaxation (3.8 vs. 8.8) and peacefulness (3.8 vs. 9.0, P non-tactile LKM. It is possible to test the

  20. Effect of interaction with clowns on vital signs and non-verbal communication of hospitalized children.

    Science.gov (United States)

    Alcântara, Pauline Lima; Wogel, Ariane Zonho; Rossi, Maria Isabela Lobo; Neves, Isabela Rodrigues; Sabates, Ana Llonch; Puggina, Ana Cláudia

    2016-12-01

    Compare the non-verbal communication of children before and during interaction with clowns and compare their vital signs before and after this interaction. Uncontrolled, intervention, cross-sectional, quantitative study with children admitted to a public university hospital. The intervention was performed by medical students dressed as clowns and included magic tricks, juggling, singing with the children, making soap bubbles and comedic performances. The intervention time was 20 minutes. Vital signs were assessed in two measurements with an interval of one minute immediately before and after the interaction. Non-verbal communication was observed before and during the interaction using the Non-Verbal Communication Template Chart, a tool in which nonverbal behaviors are assessed as effective or ineffective in the interactions. The sample consisted of 41 children with a mean age of 7.6±2.7 years; most were aged 7 to 11 years (n=23; 56%) and were male (n=26; 63.4%). There was a statistically significant difference in systolic and diastolic blood pressure, pain and non-verbal behavior of children with the intervention. Systolic and diastolic blood pressure increased and pain scales showed decreased scores. The playful interaction with clowns can be a therapeutic resource to minimize the effects of the stressful environment during the intervention, improve the children's emotional state and reduce the perception of pain. Copyright © 2016 Sociedade de Pediatria de São Paulo. Published by Elsevier Editora Ltda. All rights reserved.

  1. SAFT-assisted sound beam focusing using phased arrays (PA-SAFT) for non-destructive evaluation

    Science.gov (United States)

    Nanekar, Paritosh; Kumar, Anish; Jayakumar, T.

    2015-04-01

    Focusing of sound has always been a subject of interest in ultrasonic non-destructive evaluation. An integrated approach to sound beam focusing using phased array and synthetic aperture focusing technique (PA-SAFT) has been developed in the authors' laboratory. The approach involves SAFT processing on ultrasonic B-scan image collected by a linear array transducer using a divergent sound beam. The objective is to achieve sound beam focusing using fewer elements than the ones required using conventional phased array. The effectiveness of the approach is demonstrated on aluminium blocks with artificial flaws and steel plate samples with embedded volumetric weld flaws, such as slag and clustered porosities. The results obtained by the PA-SAFT approach are found to be comparable to those obtained by conventional phased array and full matrix capture - total focusing method approaches.
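    At its core, SAFT reconstruction is delay-and-sum: every image pixel accumulates, across element positions, the A-scan sample whose two-way travel time matches the element-to-pixel distance. The sketch below is a simplified monostatic version on a synthetic B-scan; the geometry, sampling values and the omission of the phased-array transmit step are assumptions for illustration, not the PA-SAFT implementation described above.

        # Simplified monostatic SAFT (delay-and-sum) on a synthetic B-scan with a
        # single point reflector. Illustrative only; not the PA-SAFT implementation.
        import numpy as np

        def saft(bscan, element_x, fs, c, image_x, image_z):
            """bscan: (n_elements, n_samples) A-scans; returns an (n_z, n_x) image."""
            n_samples = bscan.shape[1]
            image = np.zeros((image_z.size, image_x.size))
            for iz, z in enumerate(image_z):
                for ix, x in enumerate(image_x):
                    dist = np.hypot(element_x - x, z)                 # element-to-pixel distance
                    idx = np.round(2.0 * dist / c * fs).astype(int)   # two-way delay in samples
                    ok = idx < n_samples
                    image[iz, ix] = bscan[ok, idx[ok]].sum()
            return image

        fs, c = 50e6, 6300.0                       # sampling rate (Hz), sound speed (m/s)
        elements = np.linspace(-0.01, 0.01, 8)     # element x-positions (m)
        reflector_x, reflector_z = 0.002, 0.020    # point reflector position (m)
        bscan = np.zeros((elements.size, 2048))
        for i, ex in enumerate(elements):
            tof = 2.0 * np.hypot(ex - reflector_x, reflector_z) / c
            bscan[i, int(round(tof * fs))] = 1.0

        xs = np.linspace(-0.01, 0.01, 41)
        zs = np.linspace(0.015, 0.025, 41)
        img = saft(bscan, elements, fs, c, xs, zs)
        iz, ix = np.unravel_index(np.argmax(img), img.shape)
        print(f"peak near x = {xs[ix]:.4f} m, z = {zs[iz]:.4f} m")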

  2. Congenital Amusia: A Short-Term Memory Deficit for Non-Verbal, but Not Verbal Sounds

    Science.gov (United States)

    Tillmann, Barbara; Schulze, Katrin; Foxton, Jessica M.

    2009-01-01

    Congenital amusia refers to a lifelong disorder of music processing and is linked to pitch-processing deficits. The present study investigated congenital amusics' short-term memory for tones, musical timbres and words. Sequences of five events (tones, timbres or words) were presented in pairs and participants had to indicate whether the sequences…

  3. [Non-verbal communication and executive function impairment after traumatic brain injury: a case report].

    Science.gov (United States)

    Sainson, C

    2007-05-01

    Following post-traumatic impairment in executive function, failure to adjust to communication situations often creates major obstacles to social and professional reintegration. The analysis of pathological verbal communication has been based on clinical scales since the 1980s, but that of nonverbal elements has been neglected, although their importance should be acknowledged. The aim of this research was to study non-verbal aspects of communication in a case of executive-function impairment after traumatic brain injury. During the patient's conversation with an interlocutor, all nonverbal parameters - coverbal gestures, gaze, posture, proxemics and facial expressions - were studied in as much an ecological way as possible, to closely approximate natural conversation conditions. Such an approach highlights the difficulties such patients experience in communicating, difficulties of a pragmatic kind, that have so far been overlooked by traditional investigations, which mainly take into account the formal linguistic aspects of language. The analysis of the patient's conversation revealed non-verbal dysfunctions, not only on a pragmatic and interactional level but also in terms of enunciation. Moreover, interactional adjustment phenomena were noted in the interlocutor's behaviour. The two inseparable aspects of communication - verbal and nonverbal - should be equally assessed in patients with communication difficulties; highlighting distortions in each area might bring about an improvement in the rehabilitation of such people.

  4. Auditory Verbal Experience and Agency in Waking, Sleep Onset, REM, and Non-REM Sleep.

    Science.gov (United States)

    Speth, Jana; Harley, Trevor A; Speth, Clemens

    2017-04-01

    We present one of the first quantitative studies on auditory verbal experiences ("hearing voices") and auditory verbal agency (inner speech, and specifically "talking to (imaginary) voices or characters") in healthy participants across states of consciousness. Tools of quantitative linguistic analysis were used to measure participants' implicit knowledge of auditory verbal experiences (VE) and auditory verbal agencies (VA), displayed in mentation reports from four different states. Analysis was conducted on a total of 569 mentation reports from rapid eye movement (REM) sleep, non-REM sleep, sleep onset, and waking. Physiology was controlled with the nightcap sleep-wake mentation monitoring system. Sleep-onset hallucinations, traditionally at the focus of scientific attention on auditory verbal hallucinations, showed the lowest degree of VE and VA, whereas REM sleep showed the highest degrees. Degrees of different linguistic-pragmatic aspects of VE and VA likewise depend on the physiological states. The quantity and pragmatics of VE and VA are a function of the physiologically distinct state of consciousness in which they are conceived. Copyright © 2016 Cognitive Science Society, Inc.

  5. Maternal postpartum depressive symptoms predict delay in non-verbal communication in 14-month-old infants.

    Science.gov (United States)

    Kawai, Emiko; Takagai, Shu; Takei, Nori; Itoh, Hiroaki; Kanayama, Naohiro; Tsuchiya, Kenji J

    2017-02-01

    We investigated the potential relationship between maternal depressive symptoms during the postpartum period and non-verbal communication skills of infants at 14 months of age in a birth cohort study of 951 infants and assessed what factors may influence this association. Maternal depressive symptoms were measured using the Edinburgh Postnatal Depression Scale, and non-verbal communication skills were measured using the MacArthur-Bates Communicative Development Inventories, which include Early Gestures and Later Gestures domains. Infants whose mothers had a high level of depressive symptoms (13+ points) during both the first month postpartum and at 10 weeks were approximately 0.5 standard deviations below normal in Early Gestures scores and 0.5-0.7 standard deviations below normal in Later Gestures scores. These associations were independent of potential explanations, such as maternal depression/anxiety prior to birth, breastfeeding practices, and recent depressive symptoms among mothers. These findings indicate that infants whose mothers have postpartum depressive symptoms may be at increased risk of experiencing delay in non-verbal development. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Verbal Working Memory in Older Adults: The Roles of Phonological Capacities and Processing Speed

    Science.gov (United States)

    Nittrouer, Susan; Lowenstein, Joanna H.; Wucinich, Taylor; Moberly, Aaron C.

    2016-01-01

    Purpose: This study examined the potential roles of phonological sensitivity and processing speed in age-related declines of verbal working memory. Method: Twenty younger and 25 older adults with age-normal hearing participated. Two measures of verbal working memory were collected: digit span and serial recall of words. Processing speed was…

  7. Enhanced Memory Consolidation Via Automatic Sound Stimulation During Non-REM Sleep.

    Science.gov (United States)

    Leminen, Miika M; Virkkala, Jussi; Saure, Emma; Paajanen, Teemu; Zee, Phyllis C; Santostasi, Giovanni; Hublin, Christer; Müller, Kiti; Porkka-Heiskanen, Tarja; Huotilainen, Minna; Paunio, Tiina

    2017-03-01

    Slow-wave sleep (SWS) slow waves and sleep spindle activity have been shown to be crucial for memory consolidation. Recently, memory consolidation has been causally facilitated in human participants via auditory stimuli phase-locked to SWS slow waves. Here, we aimed to develop a new acoustic stimulus protocol to facilitate learning and to validate it using different memory tasks. Most importantly, the stimulation setup was automated to be applicable for ambulatory home use. Fifteen healthy participants slept 3 nights in the laboratory. Learning was tested with 4 memory tasks (word pairs, serial finger tapping, picture recognition, and face-name association). Additional questionnaires addressed subjective sleep quality and overnight changes in mood. During the stimulus night, auditory stimuli were adjusted and targeted by an unsupervised algorithm to be phase-locked to the negative peak of slow waves in SWS. During the control night no sounds were presented. Results showed that the sound stimulation increased both slow wave (p = .002) and sleep spindle activity. When memory performance was compared between stimulus and control nights, we found a significant effect in the word-pair task but not in the other memory tasks. The stimulation did not affect sleep structure or subjective sleep quality. We showed that the memory effect of the SWS-targeted individually triggered single-sound stimulation is specific to verbal associative memory. Moreover, the ambulatory and automated sound stimulus setup was promising and allows for a broad range of potential follow-up studies in the future. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society].
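    The targeting step described above, phase-locking sounds to the negative peak of slow waves, can be sketched as: band-pass the EEG to the slow-oscillation band, detect sufficiently large negative peaks, and schedule a stimulus a fixed delay later. The filter band, amplitude threshold and delay below are illustrative assumptions, not the unsupervised algorithm used in the study.

        # Sketch of slow-wave-targeted stimulus timing: band-pass the EEG, find
        # large negative peaks and return candidate trigger times. The band,
        # threshold and delay are placeholders, not the study's algorithm.
        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        def slow_wave_triggers(eeg_uv, fs, band=(0.5, 4.0), threshold_uv=75.0, delay_s=0.5):
            """Return stimulus times (s) relative to detected slow-wave negative peaks."""
            b, a = butter(2, [band[0] / (fs / 2.0), band[1] / (fs / 2.0)], btype="band")
            so = filtfilt(b, a, eeg_uv)
            # Negative peaks of the slow oscillation are positive peaks of -so.
            peaks, _ = find_peaks(-so, height=threshold_uv, distance=int(0.8 * fs))
            return peaks / fs + delay_s

        fs = 250.0
        t = np.arange(0.0, 30.0, 1.0 / fs)
        eeg = 100.0 * np.sin(2 * np.pi * 0.8 * t) + np.random.default_rng(5).normal(0, 15, t.size)
        print("first trigger times (s):", np.round(slow_wave_triggers(eeg, fs)[:5], 2))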

  8. Speed of sound in hadronic matter using non-extensive Tsallis statistics

    International Nuclear Information System (INIS)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath; Cleymans, Jean

    2016-01-01

    The speed of sound (c_s) is studied to understand the hydrodynamical evolution of the matter created in heavy-ion collisions. The quark-gluon plasma (QGP) formed in heavy-ion collisions evolves from an initial QGP to the hadronic phase via a possible mixed phase. Due to the system expansion in a first-order phase transition scenario, the speed of sound reduces to zero as the specific heat diverges. We study the speed of sound for systems which deviate from a thermalized Boltzmann distribution using non-extensive Tsallis statistics. In the present work, we calculate the speed of sound as a function of temperature for different q-values for a hadron resonance gas. We observe a similar mass cut-off behaviour in the non-extensive case for c_s^2 by including heavier particles, as is observed in the case of a hadron resonance gas following equilibrium statistics. Also, we explicitly show that the temperature where the mass cut-off starts varies with the q-parameter which hints at a relation between the degree of non-equilibrium and the limiting temperature of the system. It is shown that for values of q above approximately 1.13 all criticality disappears in the speed of sound, i.e. the decrease in the value of the speed of sound, observed at lower values of q, disappears completely. (orig.)
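    For reference, the quantities involved can be written compactly in LaTeX. These are the standard textbook definitions assumed as background, the squared speed of sound from the equation of state and the Tsallis single-particle distribution that reduces to the Boltzmann factor as q approaches 1; they are not expressions copied from the paper:

        c_s^2 \;=\; \frac{\partial P}{\partial \varepsilon}
              \;=\; \frac{\partial P / \partial T}{\partial \varepsilon / \partial T},
        \qquad
        f(E) \;=\; \left[\, 1 + (q-1)\,\frac{E-\mu}{T} \right]^{-\frac{1}{q-1}}
        \;\longrightarrow\; e^{-(E-\mu)/T} \quad (q \to 1).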

  9. Speed of sound in hadronic matter using non-extensive Tsallis statistics

    Energy Technology Data Exchange (ETDEWEB)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Science, Simrol, M.P. (India); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)

    2016-09-15

    The speed of sound (c_s) is studied to understand the hydrodynamical evolution of the matter created in heavy-ion collisions. The quark-gluon plasma (QGP) formed in heavy-ion collisions evolves from an initial QGP to the hadronic phase via a possible mixed phase. Due to the system expansion in a first-order phase transition scenario, the speed of sound reduces to zero as the specific heat diverges. We study the speed of sound for systems which deviate from a thermalized Boltzmann distribution using non-extensive Tsallis statistics. In the present work, we calculate the speed of sound as a function of temperature for different q-values for a hadron resonance gas. We observe a similar mass cut-off behaviour in the non-extensive case for c_s^2 by including heavier particles, as is observed in the case of a hadron resonance gas following equilibrium statistics. Also, we explicitly show that the temperature where the mass cut-off starts varies with the q-parameter, which hints at a relation between the degree of non-equilibrium and the limiting temperature of the system. It is shown that for values of q above approximately 1.13 all criticality disappears in the speed of sound, i.e. the decrease in the value of the speed of sound, observed at lower values of q, disappears completely. (orig.)

  10. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    Science.gov (United States)

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  11. Contrasting visual working memory for verbal and non-verbal material with multivariate analysis of fMRI

    Science.gov (United States)

    Habeck, Christian; Rakitin, Brian; Steffener, Jason; Stern, Yaakov

    2012-01-01

    We performed a delayed-item-recognition task to investigate the neural substrates of non-verbal visual working memory with event-related fMRI ('Shape task'). 25 young subjects (mean age: 24.0 years; SD = 3.8 years) were instructed to study a list of either 1, 2, or 3 unnamable nonsense line drawings for 3 seconds ('stimulus phase' or STIM). Subsequently, the screen went blank for 7 seconds ('retention phase' or RET), and then displayed a probe stimulus for 3 seconds during which subjects indicated with a differential button press whether or not the probe was contained in the studied shape array ('probe phase' or PROBE). Ordinal Trend Canonical Variates Analysis (Habeck et al., 2005a) was performed to identify spatial covariance patterns that showed a monotonic increase in expression with memory load during all task phases. Reliable load-related patterns were identified in the stimulus and retention phases, comprising regions whose activation increased with memory load and mediofrontal and temporal regions whose activation decreased. Mean subject expression of both patterns across memory load during retention also correlated positively with recognition accuracy (dL) in the Shape task, pointing to material-independent rehearsal processes. Encoding processes, on the other hand, are critically dependent on the to-be-remembered material, and seem to necessitate material-specific neural substrates. PMID:22652306

  12. Randomised controlled trial of a brief intervention targeting predominantly non-verbal communication in general practice consultations.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Mercer, Stewart

    2015-06-01

    The impact of changing non-verbal consultation behaviours is unknown. To assess brief physician training on improving predominantly non-verbal communication. Cluster randomised parallel group trial among adults aged ≥16 years attending general practices close to the study coordinating centres in Southampton. Sixteen GPs were randomised to no training, or training consisting of a brief presentation of behaviours identified from a prior study (acronym KEPe Warm: demonstrating Knowledge of the patient; Encouraging [back-channelling by saying 'hmm', for example]; Physically engaging [touch, gestures, slight lean]; Warm-up: cool/professional initially, warming up, avoiding distancing or non-verbal cut-offs at the end of the consultation); and encouragement to reflect on videos of their consultation. Outcomes were the Medical Interview Satisfaction Scale (MISS) mean item score (1-7) and patients' perceptions of other domains of communication. Intervention participants scored higher MISS overall (0.23, 95% confidence interval [CI] = 0.06 to 0.41), with the largest changes in the distress-relief and perceived relationship subscales. Significant improvement occurred in perceived communication/partnership (0.29, 95% CI = 0.09 to 0.49) and health promotion (0.26, 95% CI = 0.05 to 0.46). Non-significant improvements occurred in perceptions of a personal relationship, a positive approach, and understanding the effects of the illness on life. Brief training of GPs in predominantly non-verbal communication in the consultation and reflection on consultation videotapes improves patients' perceptions of satisfaction, distress, a partnership approach, and health promotion. © British Journal of General Practice 2015.

  13. Evolution of non-speech sound memory in postlingual deafness: implications for cochlear implant rehabilitation.

    Science.gov (United States)

    Lazard, D S; Giraud, A L; Truy, E; Lee, H J

    2011-07-01

    Neurofunctional patterns assessed before or after cochlear implantation (CI) are informative markers of implantation outcome. Because phonological memory reorganization in post-lingual deafness is predictive of the outcome, we investigated, using a cross-sectional approach, whether memory of non-speech sounds (NSS) produced by animals or objects (i.e. non-human sounds) is also reorganized, and how this relates to speech perception after CI. We used an fMRI auditory imagery task in which sounds were evoked by pictures of noisy items for post-lingual deaf candidates for CI and for normal-hearing subjects. When deaf subjects imagined sounds, the left inferior frontal gyrus, the right posterior temporal gyrus and the right amygdala were less activated compared to controls. Activity levels in these regions decreased with duration of auditory deprivation, indicating declining NSS representations. Whole brain correlations with duration of auditory deprivation and with speech scores after CI showed an activity decline in dorsal, fronto-parietal, cortical regions, and an activity increase in ventral cortical regions, the right anterior temporal pole and the hippocampal gyrus. Both dorsal and ventral reorganizations predicted poor speech perception outcome after CI. These results suggest that post-CI speech perception relies, at least partially, on the integrity of a neural system used for processing NSS that is based on audio-visual and articulatory mapping processes. When this neural system is reorganized, post-lingual deaf subjects resort to inefficient semantic- and memory-based strategies. These results complement those of other studies on speech processing, suggesting that both speech and NSS representations need to be maintained during deafness to ensure the success of CI. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Emotional sounds modulate early neural processing of emotional pictures

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2013-10-01

    Full Text Available In our natural environment, emotional information is conveyed by converging visual and auditory information; multimodal integration is of utmost importance. In the laboratory, however, emotion researchers have mostly focused on the examination of unimodal stimuli. Few existing studies on multimodal emotion processing have focused on human communication such as the integration of facial and vocal expressions. Extending the concept of multimodality, the current study examines how the neural processing of emotional pictures is influenced by simultaneously presented sounds. Twenty pleasant, unpleasant, and neutral pictures of complex scenes were presented to 22 healthy participants. On the critical trials these pictures were paired with pleasant, unpleasant, and neutral sounds. Sound presentation started 500 ms before picture onset and each stimulus presentation lasted for 2 s. EEG was recorded from 64 channels and ERP analyses focused on the picture onset. In addition, valence and arousal ratings were obtained. Previous findings for the neural processing of emotional pictures were replicated. Specifically, unpleasant compared to neutral pictures were associated with an increased parietal P200 and a more pronounced centroparietal late positive potential (LPP), independent of the accompanying sound valence. For audiovisual stimulation, increased parietal P100 and P200 were found in response to all pictures which were accompanied by unpleasant or pleasant sounds compared to pictures with neutral sounds. Most importantly, incongruent audiovisual pairs of unpleasant pictures and pleasant sounds enhanced parietal P100 and P200 compared to pairings with congruent sounds. Taken together, the present findings indicate that emotional sounds modulate early stages of visual processing and, therefore, provide an avenue by which multimodal experience may enhance perception.

  15. Brainstem auditory evoked potentials with the use of acoustic clicks and complex verbal sounds in young adults with learning disabilities.

    Science.gov (United States)

    Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos

    2013-01-01

    Acoustic signals are transmitted through the external and middle ear mechanically to the cochlea, where they are transduced into electrical impulses for further transmission via the auditory nerve. The auditory nerve encodes the acoustic sounds that are conveyed to the auditory brainstem. Multiple brainstem nuclei, the cochlea, the midbrain, the thalamus, and the cortex constitute the central auditory system. In clinical practice, auditory brainstem responses (ABRs) to simple stimuli such as clicks or tones are widely used. Recently, complex stimuli, or complex auditory brainstem responses (cABRs), such as monosyllabic speech stimuli and music, have been used as a tool to study the brainstem processing of speech sounds. We used the classic 'click' as well as, for the first time, the artificial successive complex stimuli 'ba', which constitute the Greek word 'baba', corresponding to the English 'daddy'. Twenty young adults institutionally diagnosed as dyslexic (10 subjects) or mildly dyslexic (10 subjects) comprised the clinical group. Twenty sex-, age-, education-, hearing sensitivity-, and IQ-matched normal subjects comprised the control group. Measurements included the absolute latencies of waves I through V and the interpeak latencies elicited by the classical acoustic click, as well as the negative peak latencies of the A and C waves and the A-C interpeak latencies elicited by the verbal stimulus 'baba', created on a digital speech synthesizer. The absolute peak latencies of waves I, III, and V in response to monaural rarefaction clicks, as well as the interpeak latencies I-III, III-V, and I-V, although increased in the dyslexic subjects in comparison with normal subjects, did not reach the level of a significant difference. In contrast, the negative peak latencies of wave C and the A-C interpeak latencies elicited by verbal stimuli were found to be increased in the dyslexic group in comparison with the control group (p=0.0004 and p=0.045, respectively). In the subgroup consisting of 10 patients suffering from

  16. Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets.

    Science.gov (United States)

    Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas

    2018-02-01

    There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.

  17. Bilateral generic working memory circuit requires left-lateralized addition for verbal processing.

    Science.gov (United States)

    Ray, Manaan Kar; Mackay, Clare E; Harmer, Catherine J; Crow, Timothy J

    2008-06-01

    According to the Baddeley-Hitch model, phonological and visuospatial representations are separable components of working memory (WM) linked by a central executive. The traditional view that the separation reflects the relative contribution of the 2 hemispheres (verbal WM--left; spatial WM--right) has been challenged by the position that a common bilateral frontoparietal network subserves both domains. Here, we test the hypothesis that there is a generic WM circuit that recruits additional specialized regions for verbal and spatial processing. We designed a functional magnetic resonance imaging paradigm to elicit activation in the WM circuit for verbal and spatial information using identical stimuli and applied this in 33 healthy controls. We detected left-lateralized quantitative differences in the left frontal and temporal lobe for verbal > spatial WM but no areas of activation for spatial > verbal WM. We speculate that spatial WM is analogous to a "generic" bilateral frontoparietal WM circuit we inherited from our great ape ancestors that evolved, by recruitment of additional left-lateralized frontal and temporal regions, to accommodate language.

  18. Speed of Sound in Hadronic matter using Non-extensive Statistics

    CERN Document Server

    Khuntia, Arvind; Garg, Prakhar; Sahoo, Raghunath; Cleymans, Jean

    2016-01-01

    The speed of sound ($c_s$) is studied to understand the hydrodynamical evolution of the matter created in heavy-ion collisions. The quark gluon plasma (QGP) formed in heavy-ion collisions evolves from an initial QGP to the hadronic phase via a possible mixed phase. Due to the system expansion in a first order phase transition scenario, the speed of sound reduces to zero as the specific heat diverges. We study the speed of sound for systems which deviate from a thermalized Boltzmann distribution using non-extensive Tsallis statistics. In the present work, we calculate the speed of sound as a function of temperature for different $q$-values for a hadron resonance gas. We observe a similar mass cut-off behaviour in the non-extensive case for $c^{2}_s$ by including heavier particles, as is observed in the case of a hadron resonance gas following equilibrium statistics. Also, we explicitly show that the temperature where the mass cut-off starts varies with the $q$-parameter, which hints at a relation between the d...

  19. Listening to an audio drama activates two processing networks, one for all sounds, another exclusively for speech.

    Directory of Open Access Journals (Sweden)

    Robert Boldt

    Full Text Available Earlier studies have shown considerable intersubject synchronization of brain activity when subjects watch the same movie or listen to the same story. Here we investigated the across-subjects similarity of brain responses to speech and non-speech sounds in a continuous audio drama designed for blind people. Thirteen healthy adults listened for ∼19 min to the audio drama while their brain activity was measured with 3 T functional magnetic resonance imaging (fMRI). An intersubject-correlation (ISC) map, computed across the whole experiment to assess the stimulus-driven extrinsic brain network, indicated statistically significant ISC in temporal, frontal and parietal cortices, cingulate cortex, and amygdala. Group-level independent component (IC) analysis was used to parcel out the brain signals into functionally coupled networks, and the dependence of the ICs on external stimuli was tested by comparing them with the ISC map. This procedure revealed four extrinsic ICs, two of which (covering non-overlapping areas of the auditory cortex) were modulated by both speech and non-speech sounds. The two other extrinsic ICs, one left-hemisphere-lateralized and the other right-hemisphere-lateralized, were speech-related and comprised the superior and middle temporal gyri, temporal poles, and the left angular and inferior orbital gyri. In areas of low ISC, four ICs that were defined as intrinsic fluctuated similarly to the time courses of either the speech-sound-related or the all-sounds-related extrinsic ICs. These ICs included the superior temporal gyrus, the anterior insula, and the frontal, parietal and midline occipital cortices. Taken together, substantial intersubject synchronization of cortical activity was observed in subjects listening to an audio drama, with results suggesting that speech is processed in two separate networks, one dedicated to the processing of speech sounds and the other to both speech and non-speech sounds.
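    Intersubject correlation of this kind is commonly computed voxel-wise as the average pairwise Pearson correlation of subjects' time courses. The sketch below illustrates that computation for a single voxel on simulated data; it is an assumption-laden illustration, not the authors' analysis pipeline.

      # Illustrative sketch (assumed layout: one voxel, subjects x timepoints).
      import numpy as np
      from itertools import combinations

      def isc(timecourses):
          """Mean pairwise Pearson correlation across subjects."""
          pairs = combinations(range(timecourses.shape[0]), 2)
          rs = [np.corrcoef(timecourses[i], timecourses[j])[0, 1] for i, j in pairs]
          return float(np.mean(rs))

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          shared = rng.standard_normal(500)                           # stimulus-driven signal
          listeners = shared + 0.8 * rng.standard_normal((13, 500))   # 13 simulated subjects
          print(f"ISC at a simulated stimulus-driven voxel: {isc(listeners):.2f}")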

  20. The Introduction of Non-Verbal Communication in Greek Education: A Literature Review

    Science.gov (United States)

    Stamatis, Panagiotis J.

    2012-01-01

    Introduction: The introductory part of this paper underlines the research interest of the educational community in the issue of non-verbal communication in education. The question of introducing this scientific field into Greek education falls within the context of this research, which covers many aspects. Method: The paper essentially…

  1. Peculiarities of Stereotypes about Non-Verbal Communication and their Role in Cross-Cultural Interaction between Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2012-12-01

    Full Text Available The article is devoted to the analysis of the peculiarities of the stereotypes about non-verbal communication, formed in Russian and Chinese cultures. The results of the experimental research of the role of ethnic auto- and heterostereotypes about non-verbal communication in cross-cultural interaction between Russian and Chinese students of the Peoples’ Friendship University of Russia are presented.

  2. Modular and Adaptive Control of Sound Processing

    Science.gov (United States)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Often times a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflect a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis

  3. Attentional and non-attentional systems in the maintenance of verbal information in working memory: the executive and phonological loops.

    Science.gov (United States)

    Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory is the structure devoted to the maintenance of information at short term during concurrent processing activities. In this respect, the question regarding the nature of the mechanisms and systems fulfilling this maintenance function is of particular importance and has received various responses in the recent past. In the time-based resource-sharing (TBRS) model, we suggest that only two systems sustain the maintenance of information at the short term, counteracting the deleterious effect of temporal decay and interference. A non-attentional mechanism of verbal rehearsal, similar to the one described by Baddeley in the phonological loop model, uses language processes to reactivate phonological memory traces. Besides this domain-specific mechanism, an executive loop allows the reconstruction of memory traces through an attention-based mechanism of refreshing. The present paper reviews evidence of the involvement of these two independent systems in the maintenance of verbal memory items.

  4. Attentional and non-attentional systems in the maintenance of verbal information in working memory: the executive and phonological loops

    Science.gov (United States)

    Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory is the structure devoted to the maintenance of information at short term during concurrent processing activities. In this respect, the question regarding the nature of the mechanisms and systems fulfilling this maintenance function is of particular importance and has received various responses in the recent past. In the time-based resource-sharing (TBRS) model, we suggest that only two systems sustain the maintenance of information at the short term, counteracting the deleterious effect of temporal decay and interference. A non-attentional mechanism of verbal rehearsal, similar to the one described by Baddeley in the phonological loop model, uses language processes to reactivate phonological memory traces. Besides this domain-specific mechanism, an executive loop allows the reconstruction of memory traces through an attention-based mechanism of refreshing. The present paper reviews evidence of the involvement of these two independent systems in the maintenance of verbal memory items. PMID:25426049

  5. Attentional and non-attentional systems in the maintenance of verbal information in working memory: the executive and phonological loops.

    Directory of Open Access Journals (Sweden)

    Valerie eCamos

    2014-11-01

    Full Text Available Working memory is the structure devoted to the maintenance of information at short term during concurrent processing activities. In this respect, the question regarding the nature of the mechanisms and systems fulfilling this maintenance function is of particular importance and has received various responses in the recent past. In the time-based resource-sharing model, we suggest that only two systems sustain the maintenance of information at the short term, counteracting the deleterious effect of temporal decay and interference. A non-attentional mechanism of verbal rehearsal, similar to the one described by Baddeley in the phonological loop model, uses language processes to reactivate phonological memory traces. Besides this domain-specific mechanism, an executive loop allows the reconstruction of memory traces through an attention-based mechanism of refreshing. The present paper reviews evidence of the involvement of these two independent systems in the maintenance of verbal memory items.

  6. The Process of Optimizing Mechanical Sound Quality in Product Design

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Holst, Thomas

    2011-01-01

    The research field concerning the optimization of product sound quality is relatively unexplored, and may be difficult for designers to operate in. To some degree, sound is a highly subjective parameter, which is normally targeted at sound specialists. This paper describes the theoretical and practical background for managing a process of optimizing the mechanical sound quality in a product design by systematically using simple tools and workshops. The procedure is illustrated by a case study of a computer navigation tool (computer mouse). The process is divided into 4 phases, which clarify the importance of product sound, define the perceptive demands identified by users, and, finally, suggest mechanical principles for modification of an existing sound design. The optimized mechanical sound design is followed by tests on users of the product in its use context. The result...

  7. Near Real-Time Comprehension Classification with Artificial Neural Networks: Decoding e-Learner Non-Verbal Behavior

    Science.gov (United States)

    Holmes, Mike; Latham, Annabel; Crockett, Keeley; O'Shea, James D.

    2018-01-01

    Comprehension is an important cognitive state for learning. Human tutors recognize comprehension and non-comprehension states by interpreting learner non-verbal behavior (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffold in the context of perceived learner comprehension. Near real-time…

  8. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization (where) and sound object recognition (what) have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks, with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude, as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  9. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with a tone as the frequent standard stimulus (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the noise deviants elicited responses similar to those evoked by the same sounds presented alone, the responses elicited by environmental sounds in the corresponding conditions differed morphologically from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory states of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency towards context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustic deviance and contextual novelty.

  10. Development of non-verbal intellectual capacity in school-age children with cerebral palsy

    NARCIS (Netherlands)

    Smits, D. W.; Ketelaar, M.; Gorter, J. W.; van Schie, P. E.; Becher, J. G.; Lindeman, E.; Jongmans, M. J.

    Background Children with cerebral palsy (CP) are at greater risk for limited intellectual development than typically developing children. Little information is available on which children with CP are most at risk. This study aimed to describe the development of non-verbal intellectual capacity of

  11. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.

  12. Relationship of Non-Verbal Intelligence Materials as Catalyst for Academic Achievement and Peaceful Co-Existence among Secondary School Students in Nigeria

    Science.gov (United States)

    Sambo, Aminu

    2015-01-01

    This paper examines students' performance in non-verbal intelligence tests relative to the academic achievement of some selected secondary school students. Two hypotheses were formulated with a view to generating data for analysis. Two non-verbal intelligence tests, viz. Raven's Standard Progressive Matrices (SPM) and AH4 Part II…

  13. The Ineluctable Modality of the Audible: Perceptual Determinants of Auditory Verbal Short-Term Memory

    Science.gov (United States)

    Maidment, David W.; Macken, William J.

    2012-01-01

    Classical cognitive accounts of verbal short-term memory (STM) invoke an abstract, phonological level of representation which, although it may be derived differently via different modalities, is itself amodal. Key evidence for this view is that serial recall of phonologically similar verbal items (e.g., the letter sounds "b",…

  14. Executive functioning and non-verbal intelligence as predictors of bullying in early elementary school

    NARCIS (Netherlands)

    Verlinden, Marina; Veenstra, René; Ghassabian, Akhgar; Jansen, P.W.; Hofman, Albert; Jaddoe, Vincent W. V.; Verhulst, F.C.; Tiemeier, Henning

    Executive function and intelligence are negatively associated with aggression, yet the role of executive function has rarely been examined in the context of school bullying. We studied whether different domains of executive function and non-verbal intelligence are associated with bullying

  15. On the embedded cognition of non-verbal narratives

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio; Baceviciute, Sarune

    2014-01-01

    Acknowledging that narratives are an important resource in human communication and cognition, the focus of this article is on the cognitive aspects of involvement with visual and auditory non-verbal narratives, particularly in relation to the newest immersive media and digital interactive representational technologies. We consider three relevant trends in narrative studies that have emerged in the 60 years of cognitive and digital revolution. The issue at hand could have implications for developmental psychology, pedagogics, cognitive science, cognitive psychology, ethology and evolutionary studies of language. In particular, it is of great importance for narratology in relation to interactive media and new representational technologies. Therefore we outline a research agenda for a bio-cognitive semiotic interdisciplinary investigation on how people understand, react to, and interact with narratives...

  16. EKSPRESI VERBAL PENDERITA APRAXIA WICARA: KASUS GANGGUAN WICARA MURID SDN 2 BATU PUTIH KAB. BOMBANA (Verbal Expression in Speech Apraxia: A Case of Speech Disorder in a Student of SDN 2 Batu Putih, Bombana Regency)

    Directory of Open Access Journals (Sweden)

    Batmang Batmang

    2016-05-01

    Full Text Available Abstract This study aimed to obtain factual data on the verbal expression of people with speech apraxia, in order to describe the forms of their verbal expression in terms of phonological aspects, lexical aspects, and non-linguistic abilities. The study was conducted at SD Negeri 2 Batuputih, Southeast Sulawesi, with a single subject, a fourth-grade student with speech apraxia. This case study examined the language behaviour of a person with speech apraxia. The techniques used in data collection were observation, recording, question and answer, and interviews. The instruments used were pictures of objects, field notes, an interview guide, and a voice recorder. The data were analysed using error analysis and contrastive analysis. The results showed that: (1) in terms of phonological aspects, the subject tended to have difficulty articulating phonemes; (2) in terms of lexical aspects, the subject's verbal expressions were not meaningful, amounting only to meaningless sounds; (3) linguistically, the subject was unable to express meaningful words, whereas non-linguistically no symptoms of abnormality were observed. Keywords: Apraxia speech, verbal expression, impaired speech

  17. Neural processing of musical meter in musicians and non-musicians.

    Science.gov (United States)

    Zhao, T Christina; Lam, H T Gloria; Sohi, Harkirat; Kuhl, Patricia K

    2017-11-01

    Musical sounds, along with speech, are the most prominent sounds in our daily lives. They are highly dynamic, yet well structured in the temporal domain in a hierarchical manner. The temporal structures enhance the predictability of musical sounds. Western music provides an excellent example: while time intervals between musical notes are highly variable, underlying beats can be realized. The beat-level temporal structure provides a sense of regular pulses. Beats can be further organized into units, giving the percept of alternating strong and weak beats (i.e. metrical structure or meter). Examining neural processing at the meter level offers a unique opportunity to understand how the human brain extracts temporal patterns, predicts future stimuli and optimizes neural resources for processing. The present study addresses two important questions regarding meter processing, using the mismatch negativity (MMN) obtained with electroencephalography (EEG): 1) how tempo (fast vs. slow) and type of metrical structure (duple: two beats per unit vs. triple: three beats per unit) affect the neural processing of metrical structure in non-musically trained individuals, and 2) how early music training modulates the neural processing of metrical structure. Metrical structures were established by patterns of consecutive strong and weak tones (Standard) with occasional violations that disrupted and reset the structure (Deviant). Twenty non-musicians listened passively to these tones while their neural activities were recorded. MMN indexed the neural sensitivity to the meter violations. Results suggested that MMNs were larger for fast tempo and for triple meter conditions. Further, 20 musically trained individuals were tested using the same methods and the results were compared to the non-musicians. While tempo and meter type similarly influenced MMNs in both groups, musicians overall exhibited significantly reduced MMNs, compared to their non-musician counterparts. Further analyses
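    The MMN referred to here is conventionally quantified as the deviant-minus-standard difference wave averaged over a post-stimulus window. The sketch below shows that computation on simulated single-channel epochs; the sampling rate, window, and epoch shapes are assumptions for illustration, not the authors' analysis settings.

      # Illustrative sketch (assumed sampling rate, analysis window, one channel).
      import numpy as np

      FS = 500                     # Hz (assumed)
      WINDOW = (0.100, 0.250)      # seconds after stimulus onset (assumed)

      def mmn_amplitude(standard_epochs, deviant_epochs, fs=FS, window=WINDOW):
          """Epochs: arrays of shape (n_trials, n_samples), time-locked to onset.
          Returns the mean deviant-minus-standard difference in the window."""
          diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
          i0, i1 = int(window[0] * fs), int(window[1] * fs)
          return float(diff[i0:i1].mean())

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          n_samples = int(0.5 * FS)                               # 500 ms epochs
          standards = rng.normal(0.0, 1.0, (200, n_samples))
          deviants = rng.normal(0.0, 1.0, (40, n_samples))
          deviants[:, int(0.1 * FS):int(0.25 * FS)] -= 2.0        # simulated negativity
          print(f"MMN amplitude: {mmn_amplitude(standards, deviants):.2f} (arbitrary units)")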

  18. Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation

    Directory of Open Access Journals (Sweden)

    Anna R. Chambers

    2016-08-01

    Full Text Available Neurons at higher stages of sensory processing can partially compensate for a sudden drop in input from the periphery through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy in which > 95% of synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore the cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, cortical processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and despite only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the auditory cortex (ACtx), the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound-driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of temporally modulated pulse trains and speech tokens were both greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied a persistent auditory processing at the level of the MGB that was likely enabled through increased central gain. Compensatory

  19. Impaired self-monitoring of inner speech in schizophrenia patients with verbal hallucinations and in non-clinical individuals prone to hallucinations

    Directory of Open Access Journals (Sweden)

    Gildas Brébion

    2016-09-01

    Full Text Available Background: Previous research has shown that various memory errors reflecting failure in the self-monitoring of speech were associated with auditory/verbal hallucinations in schizophrenia patients and with proneness to hallucinations in non-clinical individuals. Method: We administered to 57 schizophrenia patients and 60 healthy participants a verbal memory task involving free recall and recognition of lists of words with different structures (high-frequency, low-frequency, and semantically-organisable words). Extra-list intrusions in free recall were tallied, and the response bias reflecting the tendency to make false recognitions of non-presented words was computed for each list. Results: In the male patient subsample, extra-list intrusions were positively associated with verbal hallucinations and inversely associated with negative symptoms. In the healthy participants the extra-list intrusions were positively associated with proneness to hallucinations. A liberal response bias in the recognition of the high-frequency words was associated with verbal hallucinations in male patients and with proneness to hallucinations in healthy men. Meanwhile, a conservative response bias for these high-frequency words was associated with negative symptoms in male patients and with social anhedonia in healthy men. Conclusions: Misattribution of inner speech to an external source, reflected by false recollection of familiar material, seems to underlie both clinical and non-clinical hallucinations. Further, both clinical and non-clinical negative symptoms may exert on verbal memory errors an effect opposite to that of hallucinations.

  20. Effect of aging, education, reading and writing, semantic processing and depression symptoms on verbal fluency

    Directory of Open Access Journals (Sweden)

    André Luiz Moraes

    2013-12-01

    Full Text Available Verbal fluency tasks are widely used in (clinical) neuropsychology to evaluate components of executive functioning and lexical-semantic processing (linguistic and semantic memory). Performance in these tasks may be affected by several variables, such as age, education and disease. This study investigated whether aging, education, reading and writing frequency, performance in semantic judgment tasks, and depression symptoms predict performance in unconstrained, phonemic and semantic fluency tasks. The study sample comprised 260 healthy adults aged 19 to 75 years. The Pearson correlation coefficient and multiple regression models were used for data analysis. The variables under analysis were associated in different ways and contributed at different levels according to the type of verbal fluency task. Education had the greatest effect on verbal fluency tasks. There was a greater effect of age on semantic fluency than on phonemic tasks. The semantic judgment tasks predicted verbal fluency performance alone or in combination with other variables. These findings corroborate the importance of education in cognition, supporting the hypothesis of a cognitive reserve and confirming the contribution of lexical-semantic processing to verbal fluency.

  1. Verbal Reports as Data.

    Science.gov (United States)

    Ericsson, K. Anders; Simon, Herbert A.

    1980-01-01

    Accounting for verbal reports requires explication of the mechanisms by which the reports are generated and influenced by experimental factors. We discuss different cognitive processes underlying verbalization and present a model of how subjects, when asked to think aloud, verbalize information from their short-term memory. (Author/GDC)

  2. Stochastic Signal Processing for Sound Environment System with Decibel Evaluation and Energy Observation

    Directory of Open Access Journals (Sweden)

    Akira Ikuta

    2014-01-01

    Full Text Available In real sound environment systems, a specific signal shows various types of probability distribution, and the observation data are usually contaminated by external noise (e.g., background noise) of a non-Gaussian distribution type. Furthermore, there potentially exist various nonlinear correlations in addition to the linear correlation between input and output time series. Consequently, the input-output relationship of a real system often cannot be represented by a simple model using only the linear correlation and lower-order statistics. In this study, complex sound environment systems that are difficult to analyze by the usual structural methods are considered. By introducing a method for estimating the system parameters that reflects the correlation information of the conditional probability distribution under external noise, a method for predicting the output response probability of sound environment systems is theoretically proposed, in a form suited to the additive property of the energy variable and to evaluation on the decibel scale. The effectiveness of the proposed stochastic signal processing method is experimentally confirmed by applying it to data observed in sound environment systems.
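    The additive property exploited here is that independent sources combine on the energy scale, while evaluation is reported in decibels via a logarithmic transform. The sketch below shows only that standard energy/decibel relationship (10*log10 convention); the paper's actual state-estimation model is not reproduced.

      # Minimal sketch of the energy/decibel relationship: levels of independent
      # sources add in energy, not in dB (10*log10 convention assumed).
      import math

      def db_to_energy(level_db):
          return 10.0 ** (level_db / 10.0)

      def energy_to_db(energy):
          return 10.0 * math.log10(energy)

      def combine_levels(levels_db):
          """Total level of independent sources: sum energies, convert back to dB."""
          return energy_to_db(sum(db_to_energy(level) for level in levels_db))

      if __name__ == "__main__":
          # Two equal 60 dB sources combine to about 63 dB, not 120 dB.
          print(f"60 dB + 60 dB = {combine_levels([60.0, 60.0]):.1f} dB")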

  3. Associations between olfactory identification and verbal memory in patients with schizophrenia, first-degree relatives, and non-psychiatric controls.

    Science.gov (United States)

    Compton, Michael T; McKenzie Mack, LaTasha; Esterberg, Michelle L; Bercu, Zachary; Kryda, Aimee D; Quintero, Luis; Weiss, Paul S; Walker, Elaine F

    2006-09-01

    Olfactory identification deficits and verbal memory impairments may represent trait markers for schizophrenia. The aims of this study were to: (1) assess olfactory identification in patients, first-degree relatives, and non-psychiatric controls, (2) determine differences in verbal memory functioning in these three groups, and (3) study correlations between olfactory identification and three specific verbal memory domains. A total of 106 participants (41 patients with schizophrenia or related disorders, 27 relatives, and 38 controls) were assessed with the University of Pennsylvania Smell Identification Test (UPSIT) and the Wechsler Memory Scale-Third Edition. Linear mixed models, accounting for clustering within families and relevant covariates, were used to compare scores across groups and to examine associations between olfactory identification ability and the three verbal memory domains. A group effect was apparent for all four measures, and relatives scored midway between patients and controls on all three memory domains. UPSIT scores were significantly correlated with all three forms of verbal memory. Age, verbal working memory, and auditory recognition delayed memory were independently predictive of UPSIT scores. Impairments in olfactory identification and verbal memory appear to represent two correlated risk markers for schizophrenia, and frontal-temporal deficits likely account for both impairments.

  4. Non-verbal communication of the residents living in homes for the older people in Slovenia.

    Science.gov (United States)

    Zaletel, Marija; Kovacev, Asja Nina; Sustersic, Olga; Kragelj, Lijana Zaletel

    2010-09-01

    and paralinguistic signs. The caregivers should be aware of this and pay a lot of attention to these two groups of non-verbal expressions. Their importance should be constantly emphasized during the educational process of all kinds of health-care professionals as well.

  5. Judging the urgency of non-verbal auditory alarms: a case study.

    Science.gov (United States)

    Arrabito, G Robert; Mondor, Todd; Kent, Kimberley

    2004-06-22

    When designed correctly, non-verbal auditory alarms can convey different levels of urgency to the aircrew, and thereby permit the operator to establish the appropriate level of priority to address the alarmed condition. The conveyed level of urgency of five non-verbal auditory alarms presently used in the Canadian Forces CH-146 Griffon helicopter was investigated. Pilots of the CH-146 Griffon helicopter and non-pilots rated the perceived urgency of the signals using a rating scale. The pilots also ranked the urgency of the alarms in a post-experiment questionnaire to reflect their assessment of the actual situation that triggers the alarms. The results of this investigation revealed that participants' ratings of perceived urgency appear to be based on the acoustic properties of the alarms which are known to affect the listener's perceived level of urgency. Although for 28% of the pilots the mapping of perceived urgency to the urgency of their perception of the triggering situation was statistically significant for three of the five alarms, the overall data suggest that the triggering situations are not adequately conveyed by the acoustic parameters inherent in the alarms. The pilots' judgement of the triggering situation was intended as a means of evaluating the reliability of the alerting system. These data will subsequently be discussed with respect to proposed enhancements in alerting systems as it relates to addressing the problem of phase of flight. These results call for more serious consideration of incorporating situational awareness in the design and assignment of auditory alarms in aircraft.

  6. The influence of (central) auditory processing disorder in speech sound disorders.

    Science.gov (United States)

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing in children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison of the tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
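    A cutoff of this kind is often chosen by scanning a continuous index against group membership and taking the threshold that best separates the groups, for example by maximizing Youden's J on an ROC curve. The sketch below illustrates that generic procedure on simulated index values; it is not the study's statistics, and the numbers are invented.

      # Illustrative sketch (simulated data, generic ROC-based cutoff selection).
      import numpy as np
      from sklearn.metrics import roc_curve

      def best_cutoff(index_values, has_disorder):
          """Threshold on a continuous index that maximizes Youden's J = TPR - FPR."""
          fpr, tpr, thresholds = roc_curve(has_disorder, index_values)
          return thresholds[np.argmax(tpr - fpr)]

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          controls = rng.normal(0.40, 0.10, 11)      # invented index values, no disorder
          disordered = rng.normal(0.65, 0.10, 10)    # invented index values, disorder
          values = np.concatenate([controls, disordered])
          labels = np.array([0] * 11 + [1] * 10)
          print(f"index cutoff: {best_cutoff(values, labels):.2f}")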

  7. Non-verbal communication between nurses and people with an intellectual disability: a review of the literature.

    Science.gov (United States)

    Martin, Anne-Marie; O'Connor-Fenelon, Maureen; Lyons, Rosemary

    2010-12-01

    This article critically synthesizes current literature regarding communication between nurses and people with an intellectual disability who communicate non-verbally. The unique context of communication between the intellectual disability nurse and people with intellectual disability and the review aims and strategies are outlined. Communication as a concept is explored in depth. Communication between the intellectual disability nurse and the person with an intellectual disability is then comprehensively examined in light of existing literature. Issues including knowledge of the person with intellectual disability, mismatch of communication ability, and knowledge of communication arose as predominant themes. A critical review of the importance of communication in nursing practice follows. The paucity of literature relating to intellectual disability nursing and non-verbal communication clearly indicates a need for research.

  8. O papel das pistas do contexto verbal no reconhecimento de palavras The role of verbal-context clues in the word-recognition process

    Directory of Open Access Journals (Sweden)

    Sandra Regina Kirchner Guimarães

    2004-08-01

    Full Text Available The reading process is studied mainly on the basis of two theoretical models: the bottom-up model, based on the conception that reading performance depends on a decoding process, and the top-down model, based on the conception that reading relies mainly on the use of syntactic-semantic information present in the text. This study aimed to determine the contribution of verbal-context information to word recognition. According to the results obtained, subjects with reading difficulties relied on the verbal context to compensate for their difficulties, reading correctly 75.24% of the words presented.

  9. Visual Processing of Verbal and Nonverbal Stimuli in Adolescents with Reading Disabilities.

    Science.gov (United States)

    Boden, Catherine; Brodeur, Darlene A.

    1999-01-01

    A study investigated whether 32 adolescents with reading disabilities (RD) were slower at processing visual information compared to children of comparable age and reading level, or whether their deficit was specific to the written word. Adolescents with RD demonstrated difficulties in processing rapidly presented verbal and nonverbal visual…

  10. Auditory processing and phonological awareness skills of five-year-old children with and without musical experience.

    Science.gov (United States)

    Escalda, Júlia; Lemos, Stela Maris Aguiar; França, Cecília Cavalieri

    2011-09-01

    To investigate the relations between musical experience, auditory processing and phonological awareness in groups of 5-year-old children with and without musical experience. Participants were 56 five-year-old children of both genders: 26 in the Study Group, consisting of children with musical experience, and 30 in the Control Group, consisting of children without musical experience. All participants were assessed with the Simplified Auditory Processing Assessment and the Phonological Awareness Test, and the data were analyzed statistically. There was a statistically significant between-group difference on the sequential memory test for verbal and non-verbal sounds with four stimuli and on the phonological awareness tasks of rhyme recognition, phonemic synthesis and phonemic deletion. Multiple binary logistic regression showed that, with the exception of sequential verbal memory with four syllables, the observed differences in performance were associated with the children's musical experience. Musical experience improves auditory and metalinguistic abilities of 5-year-old children.
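
    The multiple binary logistic regression reported above can be sketched in a few lines. The following is a minimal, hypothetical illustration with synthetic scores and invented column names (e.g. rhyme_recognition), not the study's actual analysis; it simply shows how group membership (musical experience) can be modelled from task scores with statsmodels.

      # Hedged sketch: binary logistic regression of group membership on task
      # scores. All data and variable names are invented for illustration.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.DataFrame({
          "musical_experience": [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
          "rhyme_recognition":  [9, 6, 5, 7, 9, 4, 6, 8, 7, 5],
          "phonemic_deletion":  [7, 5, 4, 6, 8, 3, 5, 7, 6, 4],
      })

      model = smf.logit("musical_experience ~ rhyme_recognition + phonemic_deletion", data=df)
      result = model.fit(disp=False)
      print(result.summary())    # coefficients indicate association with musical experience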

  11. The Development of Verbal and Visual Working Memory Processes: A Latent Variable Approach

    Science.gov (United States)

    Koppenol-Gonzalez, Gabriela V.; Bouwmeester, Samantha; Vermunt, Jeroen K.

    2012-01-01

    Working memory (WM) processing in children has been studied with different approaches, focusing on either the organizational structure of WM processing during development (factor analytic) or the influence of different task conditions on WM processing (experimental). The current study combined both approaches, aiming to distinguish verbal and…

  12. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    Science.gov (United States)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
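
    The separation idea described above can be illustrated with a much-simplified, single-frequency equivalent-source model: solve for the strengths of equivalent sources placed on each physical source from the mixed measurement, then reconstruct the field of one source alone. The sketch below is only a toy under stated assumptions (free-field Green's functions, random geometry, one frequency); the actual ITDESM operates in the time domain with interpolation and an iterative solver.

      # Toy single-frequency equivalent-source separation of two sources, A and B.
      import numpy as np

      c, f = 343.0, 500.0                          # speed of sound [m/s], frequency [Hz]
      k = 2 * np.pi * f / c                        # wavenumber

      rng = np.random.default_rng(0)
      mics = rng.uniform(-0.5, 0.5, (16, 3)) + np.array([0.0, 0.0, 0.3])   # measurement points
      src_a = rng.uniform(-0.1, 0.1, (5, 3)) + np.array([-0.3, 0.0, 0.0])  # equiv. sources on A
      src_b = rng.uniform(-0.1, 0.1, (5, 3)) + np.array([0.3, 0.0, 0.0])   # equiv. sources on B
      sources = np.vstack([src_a, src_b])

      def green(xm, xs):
          """Free-field Green's function between source points xs and field points xm."""
          r = np.linalg.norm(xm[:, None, :] - xs[None, :, :], axis=-1)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      G = green(mics, sources)                                   # mics x sources transfer matrix
      q_true = np.concatenate([np.ones(5), 0.5j * np.ones(5)])   # "true" equivalent strengths
      p_mixed = G @ q_true                                       # mixed pressure at the mics

      q_est, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)        # recover all strengths at once
      p_source_a = green(mics, src_a) @ q_est[:5]                # field of source A alone
      ref = green(mics, src_a) @ q_true[:5]
      err = np.linalg.norm(p_source_a - ref) / np.linalg.norm(ref)
      print(f"relative separation error: {err:.2e}")             # small for well-conditioned geometry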

  13. On Verbal Competence

    Directory of Open Access Journals (Sweden)

    Zhongxin Dai

    2014-04-01

    Full Text Available This paper explored a new concept, verbal competence, to present a challenge to Chomsky’s linguistic competence and Hymes’ communicative competence. It is generally acknowledged that Chomsky concerned himself only with the syntactic/grammatical structures, and viewed the speaker’s generation and transformation of syntactic structures as the production of language. Hymes challenged Chomsky’s conception of linguistic competence and argued for an ethnographic or sociolinguistic concept, communicative competence, but his concept is too broad to be adequately grasped and followed in such fields as linguistics and second language acquisition. Communicative competence can include abilities to communicate with nonverbal behaviors, e.g. gestures, postures or even silence. The concept of verbal competence concerns itself with the mental and psychological processes of verbal production in communication. These processes originate from the speaker’s personal experience, in a certain situation of human communication, and with the sudden appearance of the intentional notion, shape up as the meaning images and end up in the verbal expression.

  14. Incipient preoperative reorganization processes of verbal memory functions in patients with left temporal lobe epilepsy.

    Science.gov (United States)

    Milian, Monika; Zeltner, Lena; Erb, Michael; Klose, Uwe; Wagner, Kathrin; Frings, Lars; Veil, Cornelia; Rona, Sabine; Lerche, Holger; Klamer, Silke

    2015-01-01

    We previously reported nonlinear correlations between verbal episodic memory performance and BOLD signal in memory fMRI in healthy subjects. The purpose of the present study was to examine this observation in patients with left mesial temporal lobe epilepsy (mTLE) who often experience memory decline and need reliable prediction tools before epilepsy surgery with hippocampectomy. Fifteen patients with left mTLE (18-57 years, nine females) underwent a verbal memory fMRI paradigm. Correlations between BOLD activity and neuropsychological data were calculated for the i) hippocampus (HC) as well as ii) extrahippocampal mTL structures. Memory performance was systematically associated with activations within the right HC as well as with activations within the left extrahippocampal mTL regions (amygdala and parahippocampal gyrus). As hypothesized, the analyses revealed cubic relationships, with one peak in patients with marginal memory performance and another peak in patients with very good performance. The nonlinear correlations between memory performance and activations might reflect the compensatory recruitment of neural resources to maintain memory performance in patients with ongoing memory deterioration. The present data suggest an already incipient preoperative reorganization process of verbal memory in non-amnesic patients with left mTLE by simultaneously tapping the resources of the right HC and left extrahippocampal mTL regions. Thus, in the preoperative assessment, both neuropsychological performance and memory fMRI should be considered together. Copyright © 2014 Elsevier Inc. All rights reserved.
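
    A cubic relationship of the kind reported above can be tested with a simple third-order polynomial fit. The snippet below uses synthetic numbers purely to show the mechanics; it is not the study's analysis pipeline.

      # Toy sketch: fit a cubic polynomial relating memory scores to a BOLD measure.
      import numpy as np

      rng = np.random.default_rng(1)
      memory = np.linspace(0.0, 1.0, 15)                        # normalised memory performance
      bold = 4*memory**3 - 6*memory**2 + 2.5*memory + 0.2       # synthetic cubic trend (peak, dip, rise)
      bold += rng.normal(0.0, 0.02, memory.size)

      coeffs = np.polyfit(memory, bold, deg=3)                  # cubic fit
      fitted = np.polyval(coeffs, memory)
      r2 = 1.0 - np.sum((bold - fitted)**2) / np.sum((bold - bold.mean())**2)
      print("cubic coefficients:", np.round(coeffs, 2), "R^2:", round(r2, 3))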

  15. Imitation Therapy for Non-Verbal Toddlers

    Science.gov (United States)

    Gill, Cindy; Mehta, Jyutika; Fredenburg, Karen; Bartlett, Karen

    2011-01-01

    When imitation skills are not present in young children, speech and language skills typically fail to emerge. There is little information on practices that foster the emergence of imitation skills in general and verbal imitation skills in particular. The present study attempted to add to our limited evidence base regarding accelerating the…

  16. Dynamic Assessment of Phonological Awareness for Children with Speech Sound Disorders

    Science.gov (United States)

    Gillam, Sandra Laing; Ford, Mikenzi Bentley

    2012-01-01

    The current study was designed to examine the relationships between performance on a nonverbal phoneme deletion task administered in a dynamic assessment format with performance on measures of phoneme deletion, word-level reading, and speech sound production that required verbal responses for school-age children with speech sound disorders (SSDs).…

  17. Dynamics of unstable sound waves in a non-equilibrium medium at the nonlinear stage

    Science.gov (United States)

    Khrapov, Sergey; Khoperskov, Alexander

    2018-03-01

    A new dispersion equation is obtained for a non-equilibrium medium with an exponential relaxation model of a vibrationally excited gas. We have studied how the pump source and the heat removal depend on the thermodynamic parameters of the medium. The boundaries of the stability regions of sound waves in a non-equilibrium gas have been determined. The nonlinear stage of the development of sound-wave instability in a vibrationally excited gas has been investigated with CSPH-TVD and MUSCL numerical schemes using the parallel technologies OpenMP-CUDA. We have obtained good agreement between the numerical simulation results and the dynamics of linear perturbations at the initial stage of the instability-driven growth of sound waves. At the nonlinear stage, the sound wave amplitude reaches a maximum value, which leads to the formation of a system of shock waves.
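
    As a very rough illustration of the MUSCL-type schemes mentioned above, the sketch below applies slope-limited (minmod) MUSCL reconstruction with an upwind flux to 1-D linear advection. This is only a building block under simplifying assumptions; the study itself solves the full gas-dynamic equations of a vibrationally excited gas with CSPH-TVD/MUSCL schemes on OpenMP-CUDA.

      # Minimal MUSCL-type (slope-limited upwind) step for du/dt + a du/dx = 0, a > 0.
      # Periodic domain; purely illustrative of the reconstruction idea.
      import numpy as np

      def minmod(a, b):
          return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

      def muscl_step(u, a, dx, dt):
          slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
          flux = a * (u + 0.5 * slope)                            # upwind flux at right cell faces
          return u - dt / dx * (flux - np.roll(flux, 1))

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u = np.exp(-200.0 * (x - 0.3) ** 2)                         # initial pulse
      dx = x[1] - x[0]
      for _ in range(150):
          u = muscl_step(u, a=1.0, dx=dx, dt=0.4 * dx)            # CFL number 0.4
      print(u.max())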

  18. Argumentation, confrontation et violence verbale fulgurante Argumentative Processes, Confrontation and Acute Verbal Abuse

    Directory of Open Access Journals (Sweden)

    Claudine Moïse

    2012-04-01

    Full Text Available Our research has defined severe verbal abuse as built-up tension characterized by directly threatening acts (such as provocation, threats, insults), and polemical violence as argumentative discourse which mobilizes indirect discursive devices, such as implicit discourse relations and irony. Yet neither type of discourse can be considered impervious to mutual influence. Based on the content of an educational DVD featuring acted-out scenes of daily verbal abuse taking place in public and institutional spaces (i.e., checks, summons, fines), we show how specific argumentative devices, which we describe, are very efficiently used within interactions characterised by severe abuse, with the aim of destabilizing and taking control over somebody.

  19. Letter-sound processing deficits in children with developmental dyslexia: An ERP study.

    Science.gov (United States)

    Moll, Kristina; Hasko, Sandra; Groth, Katharina; Bartling, Jürgen; Schulte-Körne, Gerd

    2016-04-01

    The time course during letter-sound processing was investigated in children with developmental dyslexia (DD) and typically developing (TD) children using electroencephalography. Thirty-eight children with DD and 25 TD children participated in a visual-auditory oddball paradigm. Event-related potentials (ERPs) elicited by standard and deviant stimuli in an early (100-190 ms) and late (560-750 ms) time window were analysed. In the early time window, ERPs elicited by the deviant stimulus were delayed and less left lateralized over fronto-temporal electrodes for children with DD compared to TD children. In the late time window, children with DD showed higher amplitudes extending more over right frontal electrodes. Longer latencies in the early time window and stronger right hemispheric activation in the late time window were associated with slower reading and naming speed. Additionally, stronger right hemispheric activation in the late time window correlated with poorer phonological awareness skills. Deficits in early stages of letter-sound processing influence later more explicit cognitive processes during letter-sound processing. Identifying the neurophysiological correlates of letter-sound processing and their relation to reading related skills provides insight into the degree of automaticity during letter-sound processing beyond behavioural measures of letter-sound-knowledge. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  20. Verbal learning in marijuana users seeking treatment: a comparison between depressed and non-depressed samples.

    Science.gov (United States)

    Roebke, Patrick V; Vadhan, Nehal P; Brooks, Daniel J; Levin, Frances R

    2014-07-01

    Both individuals with marijuana use and individuals with depressive disorders exhibit verbal learning and memory decrements. This study investigated the interaction between marijuana dependence and depression on learning and memory performance. The California Verbal Learning Test-Second Edition (CVLT-II) was administered to depressed (n = 71) and non-depressed (n = 131) near-daily marijuana users. The severity of depressive symptoms was measured by the self-rated Beck Depression Inventory (BDI-II) and the clinician-rated Hamilton Depression Rating Scale (HAM-D). Multivariate analyses of covariance (MANCOVA) were employed to analyze group differences in cognitive performance, and Pearson's correlation coefficients were calculated to examine the relative associations between marijuana use, depression and CVLT-II performance. Findings from each group were compared to published normative data. Although both groups exhibited decreased CVLT-II performance relative to the test's normative sample, marijuana-dependent subjects with a depressive disorder did not perform differently from marijuana-dependent subjects without a depressive disorder (p > 0.05). Further, poorer CVLT-II performance was modestly associated with increased self-reported daily amount of marijuana use, but not with depressive symptoms (corrected p > 0.002). These findings suggest an inverse association between marijuana use and verbal learning function, but not between depression and verbal learning function, in regular marijuana users.

  1. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally, it is only possible to determine distances or sound velocities if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity in media with moving scattering particles simultaneously. Since the focal position also depends on the sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which corresponds to the maximum of the averaged echo-signal amplitude. To move the focal position along the acoustic axis, an annular array is used. This allows the sound velocity to be measured with local resolution, without any previous knowledge of the acoustic medium and without a reference reflector. In previous publications, the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved. Furthermore, first measurements and simulations for non-homogeneous media are introduced. To this end, an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient in sound velocity.
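
    The core of the evaluation described above (locate the focus from the maximum of the averaged echo amplitude, then map that time of flight to a sound velocity via a calibration curve) can be sketched as follows. The calibration values and the echo profile are invented placeholders, not data from the paper.

      # Hedged sketch: map the measured time of flight to the focus onto a local
      # sound velocity using a pre-computed (here invented) calibration curve.
      import numpy as np

      # hypothetical calibration: time of flight to focus [us] -> sound velocity [m/s]
      tof_cal = np.array([38.0, 40.0, 42.0, 44.0, 46.0])
      c_cal   = np.array([1560.0, 1520.0, 1480.0, 1450.0, 1420.0])

      def estimate_velocity(echo_amplitude, dt_us):
          """Locate the focus as the amplitude maximum and interpolate the calibration."""
          t_focus = np.argmax(echo_amplitude) * dt_us
          return np.interp(t_focus, tof_cal, c_cal)

      # toy averaged echo-amplitude profile sampled every 0.5 us, peaking near 41 us
      amp = np.exp(-0.5 * ((np.arange(0, 60, 0.5) - 41.0) / 3.0) ** 2)
      print(round(estimate_velocity(amp, dt_us=0.5), 1), "m/s")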

  2. Evidence for a double dissociation of articulatory rehearsal and non-articulatory maintenance of phonological information in human verbal working memory.

    Science.gov (United States)

    Trost, Sarah; Gruber, Oliver

    2012-01-01

    Recent functional neuroimaging studies have provided evidence that human verbal working memory is represented by two complementary neural systems, a left lateralized premotor-parietal network implementing articulatory rehearsal and a presumably phylogenetically older bilateral anterior-prefrontal/inferior-parietal network subserving non-articulatory maintenance of phonological information. In order to corroborate these findings from functional neuroimaging, we performed a targeted behavioural study in patients with very selective and circumscribed brain lesions to key regions suggested to support these different subcomponents of human verbal working memory. Within a sample of over 500 neurological patients assessed with high-resolution structural magnetic resonance imaging, we identified 2 patients with corresponding brain lesions, one with an isolated lesion to Broca's area and the other with a selective lesion bilaterally to the anterior middle frontal gyrus. These 2 patients as well as groups of age-matched healthy controls performed two circuit-specific verbal working memory tasks. In this way, we systematically assessed the hypothesized selective behavioural effects of these brain lesions on the different subcomponents of verbal working memory in terms of a double dissociation. Confirming prior findings, the lesion to Broca's area led to reduced performance under articulatory rehearsal, whereas the non-articulatory maintenance of phonological information was unimpaired. Conversely, the bifrontopolar brain lesion was associated with impaired non-articulatory phonological working memory, whereas performance under articulatory rehearsal was unaffected. The present experimental neuropsychological study in patients with specific and circumscribed brain lesions confirms the hypothesized double dissociation of two complementary brain systems underlying verbal working memory in humans. In particular, the results demonstrate the functional relevance of the anterior

  3. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
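
    The two envelope features used above, the local average and the local rate of change of sound level, can be computed from a signal's Hilbert envelope. The sketch below does this for a synthetic amplitude-modulated tone; the window length and modulation rate are arbitrary choices, not the study's parameters.

      # Illustrative extraction of local average level and its rate of change.
      import numpy as np
      from scipy.signal import hilbert

      fs = 16000
      t = np.arange(0, 1.0, 1 / fs)
      carrier = np.sin(2 * np.pi * 1000 * t)
      signal = (1 + 0.8 * np.sin(2 * np.pi * 5 * t)) * carrier    # 5 Hz envelope fluctuation

      envelope = np.abs(hilbert(signal))
      level_db = 20 * np.log10(envelope + 1e-12)                  # sound level over time

      win = int(0.050 * fs)                                       # 50-ms analysis window
      kernel = np.ones(win) / win
      local_avg = np.convolve(level_db, kernel, mode="same")      # local average level [dB]
      rate_of_change = np.gradient(local_avg, 1 / fs)             # rate of change [dB/s]

      print(local_avg[fs // 2], rate_of_change[fs // 2])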

  4. An investigation into vocal expressions of emotions: the roles of valence, culture, and acoustic factors

    Science.gov (United States)

    Sauter, Disa

    This PhD is an investigation of vocal expressions of emotions, mainly focusing on non-verbal sounds such as laughter, cries and sighs. The research examines the roles of categorical and dimensional factors, the contributions of a number of acoustic cues, and the influence of culture. A series of studies established that naive listeners can reliably identify non-verbal vocalisations of positive and negative emotions in forced-choice and rating tasks. Some evidence for underlying dimensions of arousal and valence is found, although each emotion had a discrete expression. The role of acoustic characteristics of the sounds is investigated experimentally and analytically. This work shows that the cues used to identify different emotions vary, although pitch and pitch variation play a central role. The cues used to identify emotions in non-verbal vocalisations differ from the cues used when comprehending speech. An additional set of studies using stimuli consisting of emotional speech demonstrates that these sounds can also be reliably identified, and rely on similar acoustic cues. A series of studies with a pre-literate Namibian tribe shows that non-verbal vocalisations can be recognized across cultures. An fMRI study carried out to investigate the neural processing of non-verbal vocalisations of emotions is presented. The results show activation in pre-motor regions arising from passive listening to non-verbal emotional vocalisations, suggesting neural auditory-motor interactions in the perception of these sounds. In sum, this thesis demonstrates that non-verbal vocalisations of emotions are reliably identifiable tokens of information that belong to discrete categories. These vocalisations are recognisable across vastly different cultures and thus seem to, like facial expressions of emotions, comprise human universals. Listeners rely mainly on pitch and pitch variation to identify emotions in non verbal vocalisations, which differs with the cues used to comprehend

  5. Reactive and Pre-Emptive Language-Related Episodes and Verbal ...

    African Journals Online (AJOL)

    Studies have shown that most Nigerian secondary school students exhibit gross deficiency in verbal communication. Scholars have attributed this problem mainly to the non-use of reactive and pre-emptive language-related episodes (LREs) in the instructional process. Hence, this study investigated the impact of reactive ...

  6. Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus.

    Science.gov (United States)

    Bravo, Fernando; Cross, Ian; Hawkins, Sarah; Gonzalez, Nadia; Docampo, Jorge; Bruno, Claudio; Stamatakis, Emmanuel Andreas

    2017-07-28

    We frequently infer others' intentions based on non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance level in terms of positive/negative valence. Whilst consistent with previous studies which propose facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance elicited differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the AQ were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment Comunicación no verbal madre/hijo em la existencia del HIV materna en ambiente experimental Comunicação não-verbal mãe/filho na vigência do HIV materno em ambiente experimental

    Directory of Open Access Journals (Sweden)

    Simone de Sousa Paiva

    2010-02-01

    Full Text Available Non-verbal communication is predominant in the mother-child relationship. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication such as paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained; after statistical analysis, these permitted the inference that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection can be a determining factor in the formation of mothers' strong attachment to their children after birth.

  8. Musicians' and nonmusicians' short-term memory for verbal and musical sequences: comparing phonological similarity and pitch proximity.

    Science.gov (United States)

    Williamson, Victoria J; Baddeley, Alan D; Hitch, Graham J

    2010-03-01

    Language-music comparative studies have highlighted the potential for shared resources or neural overlap in auditory short-term memory. However, there is a lack of behavioral methodologies for comparing verbal and musical serial recall. We developed a visual grid response that allowed both musicians and nonmusicians to perform serial recall of letter and tone sequences. The new method was used to compare the phonological similarity effect with the impact of an operationalized musical equivalent-pitch proximity. Over the course of three experiments, we found that short-term memory for tones had several similarities to verbal memory, including limited capacity and a significant effect of pitch proximity in nonmusicians. Despite being vulnerable to phonological similarity when recalling letters, however, musicians showed no effect of pitch proximity, a result that we suggest might reflect strategy differences. Overall, the findings support a limited degree of correspondence in the way that verbal and musical sounds are processed in auditory short-term memory.

  9. Quality Matters! Differences between Expressive and Receptive Non-Verbal Communication Skills in Adolescents with ASD

    Science.gov (United States)

    Grossman, Ruth B.; Tager-Flusberg, Helen

    2012-01-01

    We analyzed several studies of non-verbal communication (prosody and facial expressions) completed in our lab and conducted a secondary analysis to compare performance on receptive vs. expressive tasks by adolescents with ASD and their typically developing peers. Results show a significant between-group difference for the aggregate score of…

  10. Attitude Patterns and the Production of Original Verbal Images: A Study in Construct Validity

    Science.gov (United States)

    Khatena, Joe; Torrance, E. Paul

    1971-01-01

    The Runner Studies of Attitude Patterns, a personality inventory, was used as the criterion to determine construct validity of Sounds and Images and Onomatopoeia and Images, two tests of verbal originality. (KW)

  11. Domain-Generality of Timing-Based Serial Order Processes in Short-Term Memory: New Insights from Musical and Verbal Domains.

    Directory of Open Access Journals (Sweden)

    Simon Gorin

    Full Text Available Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, this under different conditions of interference (no-interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. For Experiment 1, serial order recognition was not significantly more impacted by interfering tasks than was item recognition, this for both verbal and musical domains. For Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes.

  12. Domain-Generality of Timing-Based Serial Order Processes in Short-Term Memory: New Insights from Musical and Verbal Domains.

    Science.gov (United States)

    Gorin, Simon; Kowialiewski, Benjamin; Majerus, Steve

    2016-01-01

    Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, this under different conditions of interference (no-interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. For Experiment 1, serial order recognition was not significantly more impacted by interfering tasks than was item recognition, this for both verbal and musical domains. For Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes.

  13. Spectro-temporal analysis of complex tones: two cortical processes dependent on retention of sounds in the long auditory store.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L

    2000-09-01

    To examine whether two cortical processes concerned with spectro-temporal analysis of complex tones, a 'C-process' generating CN1 and CP2 potentials at approximately 100 and 180 ms after a sudden change of pitch or timbre, and an 'M-process' generating MN1 and MP2 potentials of similar latency at the sudden cessation of repeated changes, are dependent on accumulation of a sound image in the long auditory store. The durations of steady (440 Hz) and rapidly oscillating (440-494 Hz, 16 changes/s) pitch of a synthesized 'clarinet' tone were reciprocally varied between 0.5 and 4.5 s within a duty cycle of 5 s. Potentials were recorded at the beginning and end of the period of oscillation in 10 non-attending normal subjects. The CN1 at the beginning of pitch oscillation and the MN1 at the end were both strongly influenced by the duration of the immediately preceding stimulus pattern, mean amplitudes being 3-4 times larger after 4.5 s as compared with 0.5 s. The processes responsible for both CN1 and MN1 are influenced by the duration of the preceding sound pattern over a period comparable to that of the 'echoic memory' or long auditory store. The store therefore appears to occupy a key position in spectro-temporal sound analysis. The C-process is concerned with the spectral structure of complex sounds, and may therefore reflect the 'grouping' of frequency components underlying auditory stream segregation. The M-process (mismatch negativity) is concerned with the temporal sound structure, and may play an important role in the extraction of information from sequential sounds.
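
    A stimulus with the pitch oscillation described above (440-494 Hz, about 16 changes per second) can be synthesized with phase-continuous frequency steps. In the sketch a plain sine stands in for the synthesized 'clarinet' timbre, so this is only an approximation of the study's stimuli.

      # Toy synthesis of a tone alternating between 440 and 494 Hz (~16 changes/s).
      import numpy as np

      fs = 44100
      duration = 2.0                                    # seconds of oscillating pitch
      n = int(fs * duration)
      step = fs // 16                                   # samples per pitch segment (approx. 16/s)
      inst_f = np.where((np.arange(n) // step) % 2 == 0, 440.0, 494.0)
      phase = 2.0 * np.pi * np.cumsum(inst_f) / fs      # integrate frequency -> continuous phase
      tone = 0.5 * np.sin(phase)                        # sine stands in for the clarinet timbre
      print(tone.shape, inst_f[:3], inst_f[step:step + 3])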

  14. School effects on non-verbal intelligence and nutritional status in rural Zambia

    OpenAIRE

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2015-01-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and ...
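
    A hierarchical linear model of the kind used above (pupils nested within schools, with a random intercept per school) can be sketched with statsmodels. All variables and values below are synthetic and purely illustrative of the model structure, not the study's data.

      # Hedged sketch of a two-level mixed-effects model: pupils nested in schools.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_schools, n_pupils = 8, 12
      school = np.repeat(np.arange(n_schools), n_pupils)
      grade = rng.integers(3, 8, n_schools * n_pupils)                 # grades 3 to 7
      class_size = np.repeat(rng.integers(20, 60, n_schools), n_pupils)
      school_effect = np.repeat(rng.normal(0.0, 3.0, n_schools), n_pupils)
      ni = (95 + 1.5 * grade - 0.1 * class_size + school_effect
            + rng.normal(0.0, 4.0, n_schools * n_pupils))              # synthetic non-verbal IQ

      df = pd.DataFrame({"school": school, "grade": grade,
                         "class_size": class_size, "NI": ni})
      model = smf.mixedlm("NI ~ grade + class_size", df, groups=df["school"])
      print(model.fit().summary())                                     # fixed effects + school variance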

  15. Impaired verbal memory in Parkinson disease: relationship to prefrontal dysfunction and somatosensory discrimination

    Directory of Open Access Journals (Sweden)

    Weniger Dorothea

    2009-12-01

    Full Text Available Abstract Objective To study the neurocognitive profile and its relationship to prefrontal dysfunction in non-demented Parkinson's disease (PD) with deficient haptic perception. Methods Twelve right-handed patients with PD and 12 healthy control subjects underwent thorough neuropsychological testing including Rey complex figure, Rey auditory verbal and figural learning test, figural and verbal fluency, and Stroop test. Test scores reflecting significant differences between patients and healthy subjects were correlated with the individual expression coefficients of one principal component, obtained in a principal component analysis of an oxygen-15-labeled water PET study exploring somatosensory discrimination that differentiated between the two groups and involved prefrontal cortices. Results We found significantly decreased total scores for the verbal learning trials and verbal delayed free recall in PD patients compared with normal volunteers. Further analysis of these parameters using Spearman's ranking correlation showed a significantly negative correlation of deficient verbal recall with expression coefficients of the principal component whose image showed a subcortical-cortical network, including right dorsolateral-prefrontal cortex, in PD patients. Conclusion PD patients with disrupted right dorsolateral prefrontal cortex function and associated diminished somatosensory discrimination are impaired also in verbal memory functions. A negative correlation between delayed verbal free recall and PET activation in a network including the prefrontal cortices suggests that verbal cues and accordingly declarative memory processes may be operative in PD during activities that demand sustained attention such as somatosensory discrimination. Verbal cues may be compensatory in nature and help to non-specifically enhance focused attention in the presence of a functionally disrupted prefrontal cortex.

  16. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    Energy Technology Data Exchange (ETDEWEB)

    Perelomova, Anna [Gdansk University of Technology, Faculty of Applied Physics and Mathematics, ul. Narutowicza 11/12, 80-952 Gdansk (Poland)]. E-mail: anpe@mif.pg.gda.pl

    2006-08-28

    The equation of energy balance is subdivided into two dynamics equations, one describing evolution of the dominative sound, and the second one responsible for acoustic heating. The first one is the famous KZK equation, and the second one is a novel equation governing acoustic heating. The novel dynamic equation considers both periodic and non-periodic sound. Quasi-plane geometry of flow is supposed. Subdividing is provided on the base of specific links of every mode. Media with arbitrary thermic T(p,ρ) and caloric e(p,ρ) equations of state are considered. Individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound in the ideal gases and media different from ideal gases are discussed.

  17. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    Science.gov (United States)

    Perelomova, Anna

    2006-08-01

    The equation of energy balance is subdivided into two dynamics equations, one describing evolution of the dominative sound, and the second one responsible for acoustic heating. The first one is the famous KZK equation, and the second one is a novel equation governing acoustic heating. The novel dynamic equation considers both periodic and non-periodic sound. Quasi-plane geometry of flow is supposed. Subdividing is provided on the base of specific links of every mode. Media with arbitrary thermic T(p,ρ) and caloric e(p,ρ) equations of state are considered. Individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound in the ideal gases and media different from ideal gases are discussed.

  18. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    International Nuclear Information System (INIS)

    Perelomova, Anna

    2006-01-01

    The equation of energy balance is subdivided into two dynamics equations, one describing evolution of the dominative sound, and the second one responsible for acoustic heating. The first one is the famous KZK equation, and the second one is a novel equation governing acoustic heating. The novel dynamic equation considers both periodic and non-periodic sound. Quasi-plane geometry of flow is supposed. Subdividing is provided on the base of specific links of every mode. Media with arbitrary thermic T(p,ρ) and caloric e(p,ρ) equations of state are considered. Individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound in the ideal gases and media different from ideal gases are discussed.

  19. Referential Interactions of Turkish-Learning Children with Their Caregivers about Non-Absent Objects: Integration of Non-Verbal Devices and Prior Discourse

    Science.gov (United States)

    Ates, Beyza S.; Küntay, Aylin C.

    2018-01-01

    This paper examines the way children younger than two use non-verbal devices (i.e., deictic gestures and communicative functional acts) and pay attention to discourse status (i.e., prior mention vs. newness) of referents in interactions with caregivers. Data based on semi-naturalistic interactions with caregivers of four children, at ages 1;00,…

  20. Respiratory Constraints in Verbal and Non-verbal Communication.

    Science.gov (United States)

    Włodarczak, Marcin; Heldner, Mattias

    2017-01-01

    In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFE's) as well as longer vocalizations in terms of parameters of the respiratory cycle and find little evidence for respiratory planning in feedback production. We also investigate timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.

  1. LEXICAL CREATION PROCESSES IN PERUVIAN ARGOT: THE CASE OF VERBAL 'FLOREO'

    Directory of Open Access Journals (Sweden)

    Thayssa Taranto Ramírez

    2013-12-01

    Full Text Available This article aims to conceptualize and define the so-called verbal Floreo, an argot phenomenon (i.e., a phenomenon related to argot) used by speakers of jeringa, the Peruvian youth argot. The validity of the existing nomenclatures is discussed, along with why speakers resort to this process of lexical creation, and, on the basis of the corpus, it is demonstrated how the phenomenon works in Spanish.

  2. Treating depressive symptoms in psychosis : A Network Meta-Analysis on the Effects of Non-Verbal Therapies

    NARCIS (Netherlands)

    Steenhuis, L. A.; Nauta, M. H.; Bockting, C. L. H.; Pijnenborg, G. H. M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  3. Treating depressive symptoms in psychosis : A network meta-analysis on the effects of non-verbal therapies

    NARCIS (Netherlands)

    Steenhuis, Laura A.; Nauta, Maaike H.; Bocking, Claudi L.H.; Pijnenborg, Gerdina H.M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  4. Literal/non literal and the processing of verbal irony

    Directory of Open Access Journals (Sweden)

    Francisco Yus Ramos

    2011-04-01

    Full Text Available This article proposes a terminological distinction, from a cognitive perspective (above all that of relevance theory), between the 'expressed proposition' that is avoided and the 'expressed proposition' that is entertained. Underlying this terminological proposal is the claim that fast, slow or non-existent identification of irony depends on the number of incompatibilities detected by the addressee across multiple mental activations of the available contextual sources. This view of irony comprehension aims to shed light on debates that remain unresolved, such as the one centred on the role of literal meaning in the processing of verbal irony, or on whether processing irony necessarily demands more processing effort than the processing of explicit utterances.

  5. 50 CFR Figure 4 to Subpart E of... - Prince William Sound Rural and Non-Rural Areas

    Science.gov (United States)

    2010-10-01

    50 Wildlife and Fisheries, 2010-10-01 edition: Figure 4 to Subpart E of Part 300 (International Fishing and Related Activities), "Prince William Sound Rural and Non-Rural Areas"; published as graphic ER04NO09.010 [74 FR 57110...

  6. Extraction of auditory features and elicitation of attributes for the assessment of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2005-01-01

    … subjects were asked to directly assign verbal labels to the features when encountering them and to subsequently rate the sounds on the scales thus obtained. The second method requires the subjects to consistently identify the perceptually relevant features before assigning them a verbal label. Under…

  7. Extraction of auditory features and elicitation of attributes for the assessment of multi-channel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian

    2005-01-01

    … subjects were asked to directly assign verbal labels to the features when encountering them, and to subsequently rate the sounds on the scales thus obtained. The second method requires the subjects to consistently identify the perceptually relevant features before assigning them a verbal label. Under…

  8. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  9. The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank

    Science.gov (United States)

    Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing

    2018-03-01

    In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
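
    The measurement principle described above (derive a correction from a reference source whose free-field sound power is known, then apply it to the tank measurement of the unknown source) amounts to simple level arithmetic per frequency band. The numbers below are invented; only the bookkeeping is illustrated, not the paper's normal-mode derivation.

      # Toy band-wise correction from a reference source with known free-field power.
      import numpy as np

      freqs = np.array([125.0, 160.0, 200.0, 250.0, 315.0])          # 1/3-octave bands [Hz]
      Lw_ref_free = np.array([120.0, 121.0, 122.5, 123.0, 124.0])    # known free-field power [dB]
      Lw_ref_tank = np.array([126.5, 124.0, 127.0, 125.5, 128.0])    # apparent power in the tank [dB]

      correction = Lw_ref_free - Lw_ref_tank                         # transmission correction per band

      Lw_unknown_tank = np.array([118.0, 119.5, 121.0, 120.0, 122.5])
      Lw_unknown_free = Lw_unknown_tank + correction                 # estimated free-field power
      for f, lw in zip(freqs, Lw_unknown_free):
          print(f"{f:5.0f} Hz : {lw:6.1f} dB re 1 pW")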

  10. Non-contact test of coating by means of laser-induced ultrasonic excitation and holographic sound representation. Beruehrungslose Pruefung von Beschichtungen mittels laserinduzierter Ultraschallanregung und holographischer Schallabbildung

    Energy Technology Data Exchange (ETDEWEB)

    Crostack, H A; Pohl, K Y [QZ-DO Qualitaetszentrum Dortmund GmbH und Co. KG (Germany); Radtke, U [Dortmund Univ. (Germany). Fachgebiet Qualitaetskontrolle

    1991-01-01

    In order to circumvent the problems of introducing and picking off sound, which occur in conventional ultrasonic testing, a completely non-contact test process was developed. The ultrasonic surface wave required for the test is generated without contact by absorption of laser beams. The recording of the ultrasound also occurs by a non-contact holographic interferometry technique, which permits a large scale representation of the sound. Using the example of MCrAlY and ZrO2 layers, the suitability of the process for testing thermally sprayed coatings on metal substrates is identified. The possibilities and limits of the process for the detection and description of delamination and cracks are shown. (orig.).

  11. A Common Neural Substrate for Language Production and Verbal Working Memory

    Science.gov (United States)

    Acheson, Daniel J.; Hamidi, Massihullah; Binder, Jeffrey R.; Postle, Bradley R.

    2011-01-01

    Verbal working memory (VWM), the ability to maintain and manipulate representations of speech sounds over short periods, is held by some influential models to be independent from the systems responsible for language production and comprehension [e.g., Baddeley, A. D. "Working memory, thought, and action." New York, NY: Oxford University Press,…

  12. Respiratory Constraints in Verbal and Non-verbal Communication

    Directory of Open Access Journals (Sweden)

    Marcin Włodarczak

    2017-05-01

    Full Text Available In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFEs) as well as longer vocalizations in terms of parameters of the respiratory cycle and find little evidence for respiratory planning in feedback production. We also investigate timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.

  13. Effects of musical expertise on oscillatory brain activity in response to emotional sounds.

    Science.gov (United States)

    Nolden, Sophie; Rigoulot, Simon; Jolicoeur, Pierre; Armony, Jorge L

    2017-08-01

    Emotions can be conveyed through a variety of channels in the auditory domain, be it via music, non-linguistic vocalizations, or speech prosody. Moreover, recent studies suggest that expertise in one sound category can impact the processing of emotional sounds in other sound categories, as they found that musicians process emotional musical and vocal sounds more efficiently than non-musicians do. However, the neural correlates of these modulations, especially their time course, are not very well understood. Consequently, we focused here on how the neural processing of emotional information varies as a function of sound category and expertise of participants. Electroencephalogram (EEG) of 20 non-musicians and 17 musicians was recorded while they listened to vocal (speech and vocalizations) and musical sounds. The amplitude of EEG-oscillatory activity in the theta, alpha, beta, and gamma bands was quantified and Independent Component Analysis (ICA) was used to identify underlying components of brain activity in each band. Category differences were found in theta and alpha bands, due to larger responses to music and speech than to vocalizations, and in posterior beta, mainly due to differential processing of speech. In addition, we observed greater activation in frontal theta and alpha for musicians than for non-musicians, as well as an interaction between expertise and emotional content of sounds in frontal alpha. The results reflect musicians' expertise in recognition of emotion-conveying music, which seems to generalize to emotional expressions conveyed by the human voice, in line with previous accounts of effects of expertise on musical and vocal sounds processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
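
    Band-limited oscillatory power of the kind analysed above (theta, alpha, beta, gamma) can be estimated for a single channel from a Welch power spectral density. The sketch uses white noise as a stand-in for EEG and fixed band edges; the study itself used time-frequency decompositions and ICA across many channels.

      # Toy estimate of EEG band power from one channel via Welch's method.
      import numpy as np
      from scipy.signal import welch

      fs = 250                                               # sampling rate [Hz]
      eeg = np.random.default_rng(0).normal(size=10 * fs)    # stand-in for 10 s of one channel

      f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
      df = f[1] - f[0]                                       # frequency resolution [Hz]

      bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}
      for name, (lo, hi) in bands.items():
          band_power = psd[(f >= lo) & (f < hi)].sum() * df  # integrate PSD over the band
          print(f"{name:>5}: {band_power:.3e}")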

  14. Intelligent Systems Approaches to Product Sound Quality Analysis

    Science.gov (United States)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach
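
    As context for the Bradley-Terry model mentioned above, here is a minimal sketch of its standard maximum-likelihood fit to paired-comparison jury data. The win matrix and the plain iterative update are illustrative only and do not reproduce the adaptive jury framework proposed in the dissertation.

    ```python
    # Classic Bradley-Terry maximum-likelihood update for paired comparisons.
    # wins[i, j] = how often sound i was preferred to sound j (invented data).
    import numpy as np

    def bradley_terry(wins, n_iter=200, tol=1e-9):
        n = wins.shape[0]
        comparisons = wins + wins.T          # total comparisons per pair
        merit = np.ones(n)                   # latent merit scores, initialised flat
        for _ in range(n_iter):
            new = np.empty(n)
            for i in range(n):
                denom = sum(comparisons[i, j] / (merit[i] + merit[j])
                            for j in range(n) if j != i)
                new[i] = wins[i].sum() / denom
            new /= new.sum()                 # fix the scale (merits sum to 1)
            if np.max(np.abs(new - merit)) < tol:
                return new
            merit = new
        return merit

    # Example: three product sounds, sound 0 usually preferred by the jury.
    wins = np.array([[0, 8, 9],
                     [2, 0, 6],
                     [1, 4, 0]], dtype=float)
    print(bradley_terry(wins))  # highest merit for sound 0
    ```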

  15. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This points to the need to consider both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.
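
    To make the mathematical analysis of similarity judgments concrete, here is a hedged sketch of one common pipeline: converting free-sorting data into a co-occurrence similarity matrix and embedding it with multidimensional scaling. The data, group labels, and scikit-learn usage are illustrative assumptions rather than the authors' actual analysis.

    ```python
    # Illustrative sketch only: deriving a perceptual map from free-sorting data.
    # Each participant partitions recordings into groups; co-sorting frequency is
    # treated as similarity and embedded with metric MDS.
    import numpy as np
    from sklearn.manifold import MDS

    def cooccurrence_similarity(sortings, n_items):
        """sortings: list of dicts item -> group label, one dict per participant."""
        sim = np.zeros((n_items, n_items))
        for groups in sortings:
            for i in range(n_items):
                for j in range(n_items):
                    if groups[i] == groups[j]:
                        sim[i, j] += 1
        return sim / len(sortings)        # proportion of participants co-sorting i and j

    sortings = [
        {0: "traffic", 1: "traffic", 2: "voices", 3: "voices"},
        {0: "mechanical", 1: "mechanical", 2: "human", 3: "human"},
        {0: "noise", 1: "noise", 2: "noise", 3: "market"},
    ]
    sim = cooccurrence_similarity(sortings, n_items=4)
    dissim = 1.0 - sim                    # zero diagonal, symmetric
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)
    print(coords)                         # 2-D perceptual map of the four recordings
    ```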

  16. Precision of working memory for speech sounds.

    Science.gov (United States)

    Joseph, Sabine; Iverson, Paul; Manohar, Sanjay; Fox, Zoe; Scott, Sophie K; Husain, Masud

    2015-01-01

    Memory for speech sounds is a key component of models of verbal working memory (WM). But how good is verbal WM? Most investigations assess this using binary report measures to derive a fixed number of items that can be stored. However, recent findings in visual WM have challenged such "quantized" views by employing measures of recall precision with an analogue response scale. WM for speech sounds might rely on both continuous and categorical storage mechanisms. Using a novel speech matching paradigm, we measured WM recall precision for phonemes. Vowel qualities were sampled from a formant space continuum. A probe vowel had to be adjusted to match the vowel quality of a target on a continuous, analogue response scale. Crucially, this provided an index of the variability of a memory representation around its true value and thus allowed us to estimate how memories were distorted from the original sounds. Memory load affected the quality of speech sound recall in two ways. First, there was a gradual decline in recall precision with increasing number of items, consistent with the view that WM representations of speech sounds become noisier with an increase in the number of items held in memory, just as for vision. Based on multidimensional scaling (MDS), the level of noise appeared to be reflected in distortions of the formant space. Second, as memory load increased, there was evidence of greater clustering of participants' responses around particular vowels. A mixture model captured both continuous and categorical responses, demonstrating a shift from continuous to categorical memory with increasing WM load. This suggests that direct acoustic storage can be used for single items, but when more items must be stored, categorical representations must be used.
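
    A schematic illustration of the kind of mixture model described, with one component centred on the true target (continuous storage) and one on the nearest vowel prototype (categorical storage), is sketched below. The parameterisation, prototype locations, and simulated data are assumptions and do not reproduce the authors' model.

    ```python
    # Schematic mixture of continuous and categorical recall, fitted by maximum
    # likelihood on simulated data (all values invented).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_log_lik(params, responses, targets, prototypes):
        w, sigma_c, sigma_k = params      # mixture weight and component SDs
        nearest = prototypes[np.argmin(np.abs(targets[:, None] - prototypes), axis=1)]
        lik = (w * norm.pdf(responses, loc=targets, scale=sigma_c)
               + (1 - w) * norm.pdf(responses, loc=nearest, scale=sigma_k))
        return -np.sum(np.log(lik + 1e-12))

    rng = np.random.default_rng(1)
    prototypes = np.array([0.2, 0.5, 0.8])          # hypothetical vowel centres
    targets = rng.uniform(0.1, 0.9, size=200)
    categorical = rng.random(200) < 0.4             # 40% of responses snap to a category
    nearest = prototypes[np.argmin(np.abs(targets[:, None] - prototypes), axis=1)]
    responses = np.where(categorical,
                         rng.normal(nearest, 0.05),
                         rng.normal(targets, 0.05))

    fit = minimize(neg_log_lik, x0=[0.5, 0.1, 0.1],
                   args=(responses, targets, prototypes),
                   bounds=[(0.01, 0.99), (0.01, 1.0), (0.01, 1.0)])
    w_hat = fit.x[0]
    print(f"estimated continuous weight: {w_hat:.2f}")   # should be near 0.6
    ```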

  17. SoundScapes: non-formal learning potentials from interactive VEs

    DEFF Research Database (Denmark)

    Brooks, Tony; Petersson, Eva

    2007-01-01

    Non-formal learning is evident from an inhabited information space that is created from non-invasive multi-dimensional sensor technologies that source human gesture. Libraries of intuitive interfaces empower natural interaction where the gesture is mapped to the multisensory content. Large screen...... and international bodies have consistently recognized SoundScapes which, as a research body of work, is directly responsible for numerous patents.

  18. Deficits in visual short-term memory binding in children at risk of non-verbal learning disabilities.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Pancera, Arianna; Galera, Cesar; Cornoldi, Cesare

    2015-01-01

    It has been hypothesized that children with learning disabilities meet short-term memory (STM) problems especially when they must bind different types of information; however, this hypothesis has not been systematically tested. This study assessed visual STM for shapes and colors and the binding of shapes and colors, comparing a group of children (aged between 8 and 10 years) at risk of non-verbal learning disabilities (NLD) with a control group of children matched for general verbal abilities, age, gender, and socioeconomic level. Results revealed that the groups did not differ in retention of either shapes or colors, but children at risk of NLD were poorer than controls in memory for shape-color bindings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Verbal learning changes in older adults across 18 months.

    Science.gov (United States)

    Zimprich, Daniel; Rast, Philippe

    2009-07-01

    The major aim of this study was to investigate individual changes in verbal learning across a period of 18 months. Individual differences in verbal learning have largely been neglected in recent years and, even more so, individual differences in change in verbal learning. The sample for this study comes from the Zurich Longitudinal Study on Cognitive Aging (ZULU; Zimprich et al., 2008a) and comprised 336 older adults in the age range of 65-80 years at the first measurement occasion. In order to address change in verbal learning we used a latent change model of structured latent growth curves to account for the non-linearity of the verbal learning data. The individual learning trajectories were captured by a hyperbolic function which yielded three psychologically distinct parameters: initial performance, learning rate, and asymptotic performance. We found that average performance increased with respect to initial performance, but not in learning rate or in asymptotic performance. Further, variances and covariances remained stable across both measurement occasions, indicating that the amount of individual differences in the three parameters remained stable, as did the relationships among them. Moreover, older adults differed reliably in their amount of change in initial performance and asymptotic performance. Finally, changes in asymptotic performance and learning rate were strongly negatively correlated. It thus appears as if change in verbal learning in old age is a constrained process: an increase in total learning capacity implies that it takes longer to learn. Together, these results point to the significance of individual differences in change of verbal learning in the elderly.
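
    The abstract names the three parameters of the hyperbolic learning curve (initial performance, learning rate, asymptotic performance) but not its exact form, so the sketch below assumes one common hyperbolic parameterisation and fits it to invented word-list scores.

    ```python
    # Sketch under an assumed parameterisation of a hyperbolic learning curve;
    # the study's latent growth model is not reproduced here.
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(t, initial, rate, asymptote):
        # At t = 1 performance equals `initial`; it approaches `asymptote`,
        # closing half of the remaining gain after `rate` further trials.
        return initial + (asymptote - initial) * (t - 1) / ((t - 1) + rate)

    trials = np.arange(1, 6)
    recalled = np.array([4.0, 7.5, 9.0, 10.0, 10.5])   # invented recall scores

    params, _ = curve_fit(hyperbolic, trials, recalled, p0=[4.0, 1.0, 11.0])
    initial, rate, asymptote = params
    print(f"initial={initial:.1f}, rate={rate:.2f}, asymptote={asymptote:.1f}")
    ```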

  20. Development and inter-rater reliability of a standardized verbal instruction manual for the Chinese Geriatric Depression Scale-short form.

    Science.gov (United States)

    Wong, M T P; Ho, T P; Ho, M Y; Yu, C S; Wong, Y H; Lee, S Y

    2002-05-01

    The Geriatric Depression Scale (GDS) is a common screening tool for elderly depression in Hong Kong. This study aimed at (1) developing a standardized manual for the verbal administration and scoring of the GDS-SF, and (2) comparing the inter-rater reliability between the standardized and non-standardized verbal administration of GDS-SF. Two studies were reported. In Study 1, the process of developing the manual was described. In Study 2, we compared the inter-rater reliabilities of GDS-SF scores using the standardized verbal instructions and the traditional non-standardized administration. Results of Study 2 indicated that the standardized procedure in verbal administration and scoring improved the inter-rater reliabilities of GDS-SF. Copyright 2002 John Wiley & Sons, Ltd.

  1. Noise control, sound, and the vehicle design process

    Science.gov (United States)

    Donavan, Paul

    2005-09-01

    For many products, noise and sound are viewed as necessary evils that need to be dealt with in order to bring the product successfully to market. They are generally not product "exciters," although some vehicle manufacturers do tune and advertise specific sounds to enhance the perception of their products. In this paper, influencing the design process for the "evils," such as wind noise and road noise, is considered in more detail. There are three ingredients to successfully dealing with the evils in the design process. The first of these is knowing how excess noise affects the end customer in a tangible manner and how that affects customer satisfaction and, ultimately, sales. The second is having and delivering the knowledge of what is required of the design to achieve a satisfactory or even better level of noise performance. The third ingredient is having the commitment of the designers to incorporate the knowledge into their part, subsystem, or system. In this paper, the elements of each of these ingredients are discussed in some detail and the attributes of a successful design process are enumerated.

  2. Non-word repetition in children with specific language impairment: a deficit in phonological working memory or in long-term verbal knowledge?

    Science.gov (United States)

    Casalini, Claudia; Brizzolara, Daniela; Chilosi, Anna; Cipriani, Paola; Marcolini, Stefania; Pecini, Chiara; Roncoli, Silvia; Burani, Cristina

    2007-08-01

    In this study we investigated the effects of long-term memory (LTM) verbal knowledge on short-term memory (STM) verbal recall in a sample of Italian children affected by different subtypes of specific language impairment (SLI). The aim of the study was to evaluate if phonological working memory (PWM) abilities of SLI children can be supported by LTM linguistic representations and if PWM performances can be differently affected in the various subtypes of SLI. We tested a sample of 54 children affected by Mixed Receptive-Expressive (RE), Expressive (Ex) and Phonological (Ph) SLI (DSM-IV - American Psychiatric Association, 1994) by means of a repetition task of words (W) and non-words (NW) differing in morphemic structure [morphological non-words (MNW), consisting of combinations of roots and affixes - and simple non-words - with no morphological constituency]. We evaluated the effects of lexical and morpho-lexical LTM representations on STM recall by comparing the repetition accuracy across the three types of stimuli. Results indicated that although SLI children, as a group, showed lower repetition scores than controls, their performance was affected similarly to controls by the type of stimulus and the experimental manipulation of the non-words (better repetition of W than MNW and NW, and of MNW than NW), confirming the recourse to LTM verbal representations to support STM recall. The influence of LTM verbal knowledge on STM recall in SLI improved with age and did not differ among the three types of SLI. However, the three types of SLI differed in the accuracy of their repetition performances (PMW abilities), with the Phonological group showing the best scores. The implications for SLI theory and practice are discussed.

  3. Nonlinear generation of non-acoustic modes by low-frequency sound in a vibrationally relaxing gas

    International Nuclear Information System (INIS)

    Perelomova, A.

    2010-01-01

    Two dynamic equations referring to a weakly nonlinear and weakly dispersive flow of a gas in which molecular vibrational relaxation takes place, are derived. The first one governs an excess temperature associated with the thermal mode, and the second one describes variations in vibrational energy. Both quantities refer to non-wave types of gas motion. These variations are caused by the nonlinear transfer of acoustic energy into thermal mode and internal vibrational degrees of freedom of a relaxing gas. The final dynamic equations are instantaneous; they include a quadratic nonlinear acoustic source, reflecting the nonlinear character of interaction of low-frequency acoustic and non-acoustic motions of the fluid. All types of sound, periodic or aperiodic, may serve as an acoustic source of both phenomena. The low-frequency sound is considered in this study. Some conclusions about temporal behavior of non-acoustic modes caused by periodic and aperiodic sound are made. Under certain conditions, acoustic cooling takes place instead of heating. (author)

  4. Children's Verbal Working Memory: Role of Processing Complexity in Predicting Spoken Sentence Comprehension

    Science.gov (United States)

    Magimairaj, Beula M.; Montgomery, James W.

    2012-01-01

    Purpose: This study investigated the role of processing complexity of verbal working memory tasks in predicting spoken sentence comprehension in typically developing children. Of interest was whether simple and more complex working memory tasks have similar or different power in predicting sentence comprehension. Method: Sixty-five children (6- to…

  5. Non-verbal communication: aspects observed during nursing consultations with blind patients

    Directory of Open Access Journals (Sweden)

    Cristiana Brasil de Almeida Rebouças

    2007-03-01

    Full Text Available Exploratory-descriptive study on non-verbal communication between nurses and blind patients during nursing consultations with diabetes patients, based on Hall's theoretical reference framework. Data were collected by recording the consultations. The recordings were analyzed every fifteen seconds, totaling 1,131 non-verbal communication moments. The analysis shows intimate distance (91.0%) and seated position (98.3%); no contact occurred in 83.3% of the interactions. Emblematic gestures were present, including hand movements (67.4%); gaze was directed away from the interlocutor in 52.8% of the moments and centered on the interlocutor in 44.4%. In all recordings, considerable interference occurred at the moment of nurse-patient interaction. Nurses need to know about and deepen their study of non-verbal communication and adapt its use to the type of patient attended during consultations.

  6. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  7. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2009-09-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  8. The absoluteness of semantic processing: lessons from the analysis of temporal clusters in phonemic verbal fluency.

    Directory of Open Access Journals (Sweden)

    Isabelle Vonberg

    Full Text Available For word production, we may consciously pursue semantic or phonological search strategies, but it is uncertain whether we can retrieve the different aspects of lexical information independently from each other. We therefore studied the spread of semantic information into words produced under exclusively phonemic task demands. 42 subjects participated in a letter verbal fluency task, demanding the production of as many s-words as possible in two minutes. Based on curve fittings for the time courses of word production, output spurts (temporal clusters), considered to reflect rapid lexical retrieval based on automatic activation spread, were identified. Semantic and phonemic word relatedness within versus between these clusters was assessed by respective scores (0 meaning no relation, 4 maximum relation). Subjects produced 27.5 (±9.4) words belonging to 6.7 (±2.4) clusters. Both phonemically and semantically, words were more related within clusters than between clusters (phon: 0.33±0.22 vs. 0.19±0.17, p<.01; sem: 0.65±0.29 vs. 0.37±0.29, p<.01). Whereas the extent of phonemic relatedness correlated with high task performance, the contrary was the case for the extent of semantic relatedness. The results indicate that semantic information spread occurs, even if the consciously pursued word search strategy is purely phonological. This, together with the negative correlation between semantic relatedness and verbal output, suits the idea of a semantic default mode of lexical search, acting against rapid task performance in the given scenario of phonemic verbal fluency. The simultaneity of enhanced semantic and phonemic word relatedness within the same temporal cluster boundaries suggests an interaction between content and sound-related information whenever a new semantic field has been opened.
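
    As a rough illustration of temporal clustering (though not the curve-fitting criterion used in the study), the sketch below groups word onsets into clusters whenever successive inter-response times stay under a fixed threshold. Onset times and the threshold are invented.

    ```python
    # Simplified inter-response-time (IRT) clustering of fluency output.
    onsets = [1.2, 2.0, 2.6, 9.5, 10.3, 22.0, 23.1, 23.9, 24.8, 55.0]  # seconds
    THRESHOLD = 4.0   # assumed gap (s) separating temporal clusters

    clusters, current = [], [onsets[0]]
    for prev, now in zip(onsets, onsets[1:]):
        if now - prev <= THRESHOLD:
            current.append(now)        # same output spurt
        else:
            clusters.append(current)   # gap too long: close the cluster
            current = [now]
    clusters.append(current)

    print(len(clusters), [len(c) for c in clusters])   # 4 clusters of sizes 3, 2, 4, 1
    ```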

  9. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    Science.gov (United States)

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  10. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were of 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.

  11. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  12. Role of Auditory Non-Verbal Working Memory in Sentence Repetition for Bilingual Children with Primary Language Impairment

    Science.gov (United States)

    Ebert, Kerry Danahy

    2014-01-01

    Background: Sentence repetition performance is attracting increasing interest as a valuable clinical marker for primary (or specific) language impairment (LI) in both monolingual and bilingual populations. Multiple aspects of memory appear to contribute to sentence repetition performance, but non-verbal memory has not yet been considered. Aims: To…

  13. Fish protection at water intakes using a new signal development process and sound system

    International Nuclear Information System (INIS)

    Loeffelman, P.H.; Klinect, D.A.; Van Hassel, J.H.

    1991-01-01

    American Electric Power Company, Inc., is exploring the feasibility of using a patented signal development process and sound system to guide aquatic animals with underwater sound. Sounds from animals such as chinook salmon, steelhead trout, striped bass, freshwater drum, largemouth bass, and gizzard shad can be used to synthesize a new signal to stimulate the animal in the most sensitive portion of its hearing range. AEP's field tests during its research demonstrate that adult chinook salmon, steelhead trout and warmwater fish, and steelhead trout and chinook salmon smolts can be repelled with a properly-tuned system. The signal development process and sound system are designed to be transportable and to use animals at the site to incorporate site-specific factors known to affect underwater sound, e.g., bottom shape and type, water current, and temperature. Because the overall goal of this research was to determine the feasibility of using sound to divert fish, it was essential that the approach use a signal development process which could be customized to animals and site conditions at any hydropower plant site

  14. Verbal fluency in male and female schizophrenia patients: Different patterns of association with processing speed, working memory span, and clinical symptoms.

    Science.gov (United States)

    Brébion, Gildas; Stephan-Otto, Christian; Ochoa, Susana; Nieto, Lourdes; Contel, Montserrat; Usall, Judith

    2018-01-01

    Decreased processing speed in schizophrenia patients has been identified as a major impairment factor in various neuropsychological domains. Working memory span has been found to be involved in several deep or effortful cognitive processes. We investigated the impact that these 2 cognitive functions may have on phonological and semantic fluency in schizophrenia patients and healthy participants. Fifty-five patients with schizophrenia and 60 healthy participants were administered a neuropsychological battery including phonological and semantic fluency, working memory, and cognitive and motor speed. Regression analyses revealed that motor speed was related to phonological fluency in female patients, whereas cognitive speed was related to semantic fluency in male patients. In addition, working memory span was related to verbal fluency in women from both the patient and the healthy control groups. Decreased processing speed, but not decreased working memory span, accounted for the verbal fluency deficit in patients. Verbal fluency was inversely related to attention deficit in female patients and to negative symptoms in male patients. Decreased processing speed may be the main factor in verbal fluency impairment of patients. Further, the cognitive and clinical predictors of verbal fluency efficiency are different in men and women. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Language representation of the emotional state of the personage in non-verbal speech behavior (on the material of Russian and German languages)

    Directory of Open Access Journals (Sweden)

    Scherbakova Irina Vladimirovna

    2016-06-01

    Full Text Available The article examines how emotions are actualized in the non-verbal speech behavior of a character in a literary text. Emotions are considered the basic and most actively used means by which a literary character reacts to an object, an action, or a communicative situation. Non-verbal ways of expressing emotions give the reader a fuller picture of the character's emotional state. In identifying non-verbal means of communication in fiction, the main focus is on the description of kinetic, proxemic, and prosodic components. The material of the study consists of microdialogue fragments extracted by continuous sampling from literary texts of Russian-language and German-language classical and modern literature of the 19th and 20th centuries. Fragments of dialogue were analyzed in which the character's non-verbal behavior conveyed different emotional content (surprise, joy, fear, anger, rage, excitement, etc.). It was found that the means of verbalizing and describing the emotions expressed in a character's non-verbal behavior are primarily indirect nominations, expressed by verbal vocabulary, adjectives, and adverbs. The lexical level is thus the most significant in presenting the emotional state of the character.

  16. Individual Differences in Verbal and Spatial Stroop Tasks: Interactive Role of Handedness and Domain

    Directory of Open Access Journals (Sweden)

    Mariagrazia Capizzi

    2017-11-01

    Full Text Available A longstanding debate in psychology concerns the relation between handedness and cognitive functioning. The present study aimed to contribute to this debate by comparing performance of right- and non-right-handers on verbal and spatial Stroop tasks. Previous studies have shown that non-right-handers have better inter-hemispheric interaction and greater access to right hemisphere processes. On this ground, we expected performance of right- and non-right-handers to differ on verbal and spatial Stroop tasks. Specifically, relative to right-handers, non-right-handers should have greater Stroop effect in the color-word Stroop task, for which inter-hemispheric interaction does not seem to be advantageous to performance. By contrast, non-right-handers should be better able to overcome interference in the spatial Stroop task. This is for their preferential access to the right hemisphere dealing with spatial material and their greater inter-hemispheric interaction with the left hemisphere hosting Stroop task processes. Our results confirmed these predictions, showing that handedness and the underlying brain asymmetries may be a useful variable to partly explain individual differences in executive functions.

  17. Using a Process Dissociation Approach to Assess Verbal Short-Term Memory for Item and Order Information in a Sample of Individuals with a Self-Reported Diagnosis of Dyslexia.

    Science.gov (United States)

    Wang, Xiaoli; Xuan, Yifu; Jarrold, Christopher

    2016-01-01

    Previous studies have examined whether difficulties in short-term memory for verbal information, that might be associated with dyslexia, are driven by problems in retaining either information about to-be-remembered items or the order in which these items were presented. However, such studies have not used process-pure measures of short-term memory for item or order information. In this work we adapt a process dissociation procedure to properly distinguish the contributions of item and order processes to verbal short-term memory in a group of 28 adults with a self-reported diagnosis of dyslexia and a comparison sample of 29 adults without a dyslexia diagnosis. In contrast to previous work that has suggested that individuals with dyslexia experience item deficits resulting from inefficient phonological representation and language-independent order memory deficits, the results showed no evidence of specific problems in short-term retention of either item or order information among the individuals with a self-reported diagnosis of dyslexia, despite this group showing expected difficulties on separate measures of word and non-word reading. However, there was some suggestive evidence of a link between order memory for verbal material and individual differences in non-word reading, consistent with other claims for a role of order memory in phonologically mediated reading. The data from the current study therefore provide empirical evidence to question the extent to which item and order short-term memory are necessarily impaired in dyslexia.
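
    For readers unfamiliar with the process dissociation logic, the standard Jacoby-style estimating equations are sketched below. How the authors adapted them to separate item from order memory is not given in the abstract, and the probabilities used here are invented.

    ```python
    # Standard process-dissociation estimates from inclusion/exclusion performance,
    # shown only to make the logic concrete (numbers are hypothetical).
    def process_dissociation(p_inclusion, p_exclusion):
        controlled = p_inclusion - p_exclusion              # R: controlled contribution
        automatic = (p_exclusion / (1 - controlled)         # A: automatic contribution
                     if controlled < 1 else float("nan"))
        return controlled, automatic

    R, A = process_dissociation(p_inclusion=0.80, p_exclusion=0.30)
    print(f"controlled = {R:.2f}, automatic = {A:.2f}")     # controlled = 0.50, automatic = 0.60
    ```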

  18. Acoustic holography for piston sound radiation with non-uniform velocity profiles

    NARCIS (Netherlands)

    Aarts, R.M.; Janssen, A.J.E.M.

    2010-01-01

    The theory of orthogonal (Zernike) expansions of functions on a disk, as used in the diffraction theory of optical aberrations, is applied to obtain (semi-) analytical results for the radiation of sound due to a non-uniformly moving, baffled, circular piston. For this particular case, a scheme for

  19. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    Science.gov (United States)

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  20. What characterizes changing-state speech in affecting short-term memory? An EEG study on the irrelevant sound effect.

    Science.gov (United States)

    Schlittmeier, Sabine J; Weisz, Nathan; Bertrand, Olivier

    2011-12-01

    The irrelevant sound effect (ISE) describes reduced verbal short-term memory during irrelevant changing-state sounds which consist of different and distinct auditory tokens. Steady-state sounds lack such changing-state features and do not impair performance. An EEG experiment (N=16) explored the distinguishing neurophysiological aspects of detrimental changing-state speech (3-token sequence) compared to ineffective steady-state speech (1-token sequence) on serial recall performance. We analyzed evoked and induced activity related to the memory items as well as spectral activity during the retention phase. The main finding is that the behavioral sound effect was exclusively reflected by attenuated token-induced gamma activation most pronounced between 50-60 Hz and 50-100 ms post-stimulus onset. Changing-state speech seems to disrupt a behaviorally relevant ongoing process during target presentation (e.g., the serial binding of the items). Copyright © 2011 Society for Psychophysiological Research.

  1. Auditory verbal hallucinations and cognitive functioning in healthy individuals.

    Science.gov (United States)

    Daalman, Kirstin; van Zandvoort, Martine; Bootsman, Florian; Boks, Marco; Kahn, René; Sommer, Iris

    2011-11-01

    Auditory verbal hallucinations (AVH) are a characteristic symptom in schizophrenia, and also occur in the general, non-clinical population. In schizophrenia patients, several specific cognitive deficits, such as in speech processing, working memory, source memory, attention, inhibition, episodic memory and self-monitoring have been associated with auditory verbal hallucinations. Such associations are interesting, as they may identify specific cognitive traits that constitute a predisposition for AVH. However, it is difficult to disentangle a specific relation with AVH in patients with schizophrenia, as so many other factors can affect the performance on cognitive tests. Examining the cognitive profile of healthy individuals experiencing AVH may reveal a more direct association between AVH and aberrant cognitive functioning in a specific domain. For the current study, performance in executive functioning, memory (both short- and long-term), processing speed, spatial ability, lexical access, abstract reasoning, language and intelligence performance was compared between 101 healthy individuals with AVH and 101 healthy controls, matched for gender, age, handedness and education. Although performance of both groups was within the normal range, not clinically impaired, significant differences between the groups were found in the verbal domain as well as in executive functioning. Performance on all other cognitive domains was similar in both groups. The predisposition to experience AVH is associated with lower performance in executive functioning and aberrant language performance. This association might be related to difficulties in the inhibition of irrelevant verbal information. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Using Decision Trees to Characterize Verbal Communication During Change and Stuck Episodes in the Therapeutic Process

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Masías

    2015-04-01

    Full Text Available Methods are needed for creating models to characterize verbal communication between therapists and their patients that are suitable for teaching purposes without losing analytical potential. A technique meeting these twin requirements is proposed that uses decision trees to identify both change and stuck episodes in therapist-patient communication. Three decision tree algorithms (C4.5, NBtree, and REPtree are applied to the problem of characterizing verbal responses into change and stuck episodes in the therapeutic process. The data for the problem is derived from a corpus of 8 successful individual therapy sessions with 1,760 speaking turns in a psychodynamic context. The decision tree model that performed best was generated by the C4.5 algorithm. It delivered 15 rules characterizing the verbal communication in the two types of episodes. Decision trees are a promising technique for analyzing verbal communication during significant therapy events and have much potential for use in teaching practice on changes in therapeutic communication. The development of pedagogical methods using decision trees can support the transmission of academic knowledge to therapeutic practice.

  3. Using decision trees to characterize verbal communication during change and stuck episodes in the therapeutic process.

    Science.gov (United States)

    Masías, Víctor H; Krause, Mariane; Valdés, Nelson; Pérez, J C; Laengle, Sigifredo

    2015-01-01

    Methods are needed for creating models to characterize verbal communication between therapists and their patients that are suitable for teaching purposes without losing analytical potential. A technique meeting these twin requirements is proposed that uses decision trees to identify both change and stuck episodes in therapist-patient communication. Three decision tree algorithms (C4.5, NBTree, and REPTree) are applied to the problem of characterizing verbal responses into change and stuck episodes in the therapeutic process. The data for the problem is derived from a corpus of 8 successful individual therapy sessions with 1760 speaking turns in a psychodynamic context. The decision tree model that performed best was generated by the C4.5 algorithm. It delivered 15 rules characterizing the verbal communication in the two types of episodes. Decision trees are a promising technique for analyzing verbal communication during significant therapy events and have much potential for use in teaching practice on changes in therapeutic communication. The development of pedagogical methods using decision trees can support the transmission of academic knowledge to therapeutic practice.
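
    The sketch below is a rough analogue only: scikit-learn's CART (with entropy splitting) stands in for C4.5, which scikit-learn does not implement, and the per-turn features and labels are made up rather than taken from the study's corpus or coding scheme.

    ```python
    # Toy decision tree separating "change" (1) from "stuck" (0) episodes from
    # hypothetical speaking-turn features.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical features: [turn length (words), therapist question (0/1),
    # emotional-word count].
    X = [[35, 1, 4], [12, 0, 1], [50, 1, 6], [8, 0, 0],
         [40, 1, 5], [15, 0, 2], [60, 1, 7], [10, 0, 1]]
    y = [1, 0, 1, 0, 1, 0, 1, 0]

    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
    tree.fit(X, y)
    print(export_text(tree, feature_names=["turn_len", "question", "emotion_words"]))
    print(tree.predict([[45, 1, 5]]))   # -> [1], classified as a change episode
    ```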

  4. An investigation of the use of co-verbal gestures in oral discourse among Chinese speakers with fluent versus non-fluent aphasia and healthy adults

    Directory of Open Access Journals (Sweden)

    Anthony Pak Hin Kong

    2015-04-01

    Full Text Available Introduction Co-verbal gestures can facilitate word production among persons with aphasia (PWA) (Rose, Douglas, & Matyas, 2002) and play a communicative role for PWA to convey ideas (Sekine & Rose, 2013). Kong, Law, Kwan, Lai, and Lam (2015) recently reported a systematic approach to independently analyze gesture forms and functions in spontaneous oral discourse produced. When this annotation framework was used to compare speech-accompanying gestures used by PWA and unimpaired speakers, Kong, Law, Wat, and Lai (2013) found a significantly higher gesture-to-word ratio among PWAs. Speakers who were more severe in aphasia or produced a lower percentage of complete sentences or simple sentences in their narratives tended to use more gestures. Moreover, verbal-semantic processing impairment, but not the degree of hemiplegia, was found to affect PWAs' employment of gestures. The current study aims to (1) investigate whether the frequency of gestural employment varied across speakers with non-fluent aphasia, fluent aphasia, and their controls, (2) examine how the distribution of gesture forms and functions differed across the three speaker groups, and (3) determine how well factors of complexity of linguistic output, aphasia severity, semantic processing integrity, and hemiplegia would predict the frequency of gesture use among PWAs. Method The participants included 23 Cantonese-speaking individuals with fluent aphasia, 21 with non-fluent aphasia, and 23 age- and education-matched controls. Three sets of language samples and video files were collected through the narrative tasks of recounting a personally important event, sequential description, and story-telling, using the Cantonese AphasiaBank protocol (Kong, Law, & Lee, 2009). While the language samples were linguistically quantified to reflect word- and sentential-level performance as well as discourse-level characteristics, the videos were annotated on the form and function of each gesture. All PWAs were

  5. Severity and Co-occurrence of Oral and Verbal Apraxias in Left Brain Damaged Adults

    Directory of Open Access Journals (Sweden)

    Fariba Yadegari

    2012-04-01

    Full Text Available Objective: Oral and verbal apraxias represent motor programming deficits of nonverbal and verbal movements respectively. Studying their properties may shed light on speech motor control processes. This study was focused on identifying cases with oral or verbal apraxia, their co-occurrence, and their severities. Materials & Methods: In this non-experimental study, 55 adult subjects with left brain lesions, including 22 women and 33 men with an age range of 23 to 84 years, were examined and videotaped using oral apraxia and verbal apraxia tasks. Three speech and language pathologists independently scored apraxia severities. Data were analyzed by independent t test, Pearson, Phi and Contingency coefficients using SPSS 12. Results: Mean scores of oral and verbal apraxias in patients with and without oral and verbal apraxias were significantly different (P<0.001). Forty-two patients had simultaneous oral and verbal apraxias, with a significant correlation between their oral and verbal apraxia scores (r=0.75, P<0.001). Six patients showed no oral or verbal apraxia and 7 had just one type of apraxia. The co-occurrence of the two disorders (Phi=0.59) and the association between oral and verbal apraxia severities (C=0.68) were relatively high (P<0.001). Conclusion: The present research revealed co-occurrence of oral and verbal apraxias to a great extent. It appears that speech motor control is influenced by a more general verbal and nonverbal motor control.

  6. The effects of limited bandwidth and noise on verbal processing time and word recall in normal-hearing children.

    Science.gov (United States)

    McCreery, Ryan W; Stelmachowicz, Patricia G

    2013-09-01

    Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children who are developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal to noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter condition, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected

  7. The Efficiency of Peer Teaching of Developing Non Verbal Communication to Children with Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    Alshurman, Wael; Alsreaa, Ihsani

    2015-01-01

    This study aimed at identifying the efficiency of peer teaching in developing non-verbal communication in children with autism spectrum disorder (ASD). The study was carried out on a sample of 10 children with autism spectrum disorder (ASD), diagnosed according to the basics and criteria adopted at the Al-taif qualification center in (2013) in The…

  8. A study of verbal and spatial information processing using event-related potentials and positron emission tomography

    International Nuclear Information System (INIS)

    Ninomiya, Hideaki; Ichimiya, Atsushi; Chen, Chung-Ho; Onitsuka, Toshiaki; Kuwabara, Yasuo; Otsuka, Makoto; Ichiya, Yuichi

    1997-01-01

    The activated cerebral regions and the timing of information processing in the hemispheres were investigated using event-related potentials (ERP) and regional cerebral blood flow (rCBF) as the neurophysiological indicators. Seven men and one woman (age 19-27 years) were asked to categorize two-syllable Japanese nouns (verbal condition) and to judge the difference between pairs of rectangles (spatial condition), both tests presented on a monochrome display. In the electroencephalogram (EEG) session, EEGs were recorded from 16 electrode sites, with linked earlobe electrodes as reference. In the positron emission tomography (PET) session, rCBF was measured by the ¹⁵O-labeled H₂O bolus injection method. Regions of interest were the frontal, temporal, parietal, occipital and central lobes, and the entire cerebral hemispheres. When the subtracted voltages of the ERP at homologous scalp sites were compared for the verbal and spatial conditions, significant differences were found at F7·F8 and T5·T6 (the 10-20 system). The latencies of the differences at T5·T6 were around 200, 250 and 320 ms. A significant difference in rCBF between the verbal and spatial conditions was found only in the temporal region. It was concluded that early processing of information, that is, registration and simple recognition, may be performed mainly in the left temporal lobe for verbal information and in the right for spatial information. (author)

  9. Neural correlates of the spacing effect in explicit verbal semantic encoding support the deficient-processing theory.

    Science.gov (United States)

    Callan, Daniel E; Schweighofer, Nicolas

    2010-04-01

    Spaced presentations of to-be-learned items during encoding leads to superior long-term retention over massed presentations. Despite over a century of research, the psychological and neural basis of this spacing effect however is still under investigation. To test the hypotheses that the spacing effect results either from reduction in encoding-related verbal maintenance rehearsal in massed relative to spaced presentations (deficient processing hypothesis) or from greater encoding-related elaborative rehearsal of relational information in spaced relative to massed presentations (encoding variability hypothesis), we designed a vocabulary learning experiment in which subjects encoded paired-associates, each composed of a known word paired with a novel word, in both spaced and massed conditions during functional magnetic resonance imaging. As expected, recall performance in delayed cued-recall tests was significantly better for spaced over massed conditions. Analysis of brain activity during encoding revealed that the left frontal operculum, known to be involved in encoding via verbal maintenance rehearsal, was associated with greater performance-related increased activity in the spaced relative to massed condition. Consistent with the deficient processing hypothesis, a significant decrease in activity with subsequent episodes of presentation was found in the frontal operculum for the massed but not the spaced condition. Our results suggest that the spacing effect is mediated by activity in the frontal operculum, presumably by encoding-related increased verbal maintenance rehearsal, which facilitates binding of phonological and word level verbal information for transfer into long-term memory. Copyright 2009 Wiley-Liss, Inc.

  10. Speed of sound in hadronic matter using non-extensive statistics

    International Nuclear Information System (INIS)

    Khuntia, Arvind; Sahoo, Pragati; Garg, Prakhar; Sahoo, Raghunath; Jean Cleymans

    2015-01-01

    The evolution of the dense matter formed in high energy hadronic and nuclear collisions is controlled by the initial energy density and temperature. The expansion of the system is due to the very high initial pressure with lowering of temperature and energy density. The pressure (P) and energy density (ϵ) are related through the speed of sound (c_s^2) under the condition of local thermal equilibrium. The speed of sound plays a crucial role in hydrodynamical expansion of the dense matter created and the critical behaviour of the system evolving from the deconfined Quark Gluon Phase (QGP) to the confined hadronic phase. There have been several experimental and theoretical studies in this direction. The non-extensive Tsallis statistics gives better description of the transverse momentum spectra of the produced particles created in high energy p + p (p̄) and e⁺ + e⁻ collisions
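
    The relation between pressure and energy density referred to above is the usual thermodynamic definition of the squared speed of sound; written out (with the common constant-entropy convention, which the abstract does not spell out):

    ```latex
    % Thermodynamic relation implied by the abstract (constant-entropy convention assumed)
    c_s^2 \;=\; \left(\frac{\partial P}{\partial \epsilon}\right)_{s}
    ```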

  11. Cognitive correlates of verbal memory and verbal fluency in schizophrenia, and differential effects of various clinical symptoms between male and female patients.

    Science.gov (United States)

    Brébion, Gildas; Villalta-Gil, Victoria; Autonell, Jaume; Cervilla, Jorge; Dolz, Montserrat; Foix, Alexandrina; Haro, Josep Maria; Usall, Judith; Vilaplana, Miriam; Ochoa, Susana

    2013-06-01

    Impairment of higher cognitive functions in patients with schizophrenia might stem from perturbation of more basic functions, such as processing speed. Various clinical symptoms might affect cognitive efficiency as well. Notably, previous research has revealed the role of affective symptoms on memory performance in this population, and suggested sex-specific effects. We conducted a post-hoc analysis of an extensive neuropsychological study of 88 patients with schizophrenia. Regression analyses were conducted on verbal memory and verbal fluency data to investigate the contribution of semantic organisation and processing speed to performance. The role of negative and affective symptoms and of attention disorders in verbal memory and verbal fluency was investigated separately in male and female patients. Semantic clustering contributed to verbal recall, and a measure of reading speed contributed to verbal recall as well as to phonological and semantic fluency. Negative symptoms affected verbal recall and verbal fluency in the male patients, whereas attention disorders affected these abilities in the female patients. Furthermore, depression affected verbal recall in women, whereas anxiety affected it in men. These results confirm the association of processing speed with cognitive efficiency in patients with schizophrenia. They also confirm the previously observed sex-specific associations of depression and anxiety with memory performance in these patients, and suggest that negative symptoms and attention disorders likewise are related to cognitive efficiency differently in men and women. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Children with speech sound disorder: Comparing a non-linguistic auditory approach with a phonological intervention approach to improve phonological skills

    Directory of Open Access Journals (Sweden)

    Cristina Murphy

    2015-02-01

    Full Text Available This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder. A total of 17 children, aged 7-12 years, with speech sound disorder were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g., backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g., phonological organisation and awareness). Both interventions consisted of twelve 45-minute sessions delivered twice per week, for a total of nine hours. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement in phonological skills was observed in either group. Inter-group analysis demonstrated significant differences between the improvement following training for both groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and in both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the most effective intervention approach, it was not sufficient to promote the enhancement of phonological skills.

  13. Verbal aptitude and the use of grammar information in Serbian language

    Directory of Open Access Journals (Sweden)

    Lalović Dejan

    2006-01-01

    Full Text Available The research presented in this paper was an attempt to find differences in the use of grammatical information carried by function words in Serbian. The aim was to determine the level of word processing at which grammatical information shows its differential effects in groups of subjects who themselves differ in verbal ability. For this purpose, the psycholinguistic tasks applied were grammatically primed reading aloud and grammatically primed grammatical classification, with appropriate control of extra-linguistic factors that may have affected the aforementioned tasks. Verbal aptitude was assessed in a psychometric manner, and the subjects were divided into "high verbal" and "low verbal" groups. Taking into account statistical control of extra-linguistic factors, the results indicate that groups of high verbal and low verbal subjects cannot be differentiated based on reading aloud performance. The high verbal subjects, however, were more efficient in grammatical classification than low verbal subjects. The results also indicated that the presence of grammatical information embedded in function words-primes had a stronger effect on word processing in the low verbal group. Such a pattern of results testifies to the advantage of high verbal subjects in lexical and post-lexical processing, while no differences were established in word recognition processes. The implications of these findings were considered in terms of test construction for the assessment of verbal ability in the Serbian language.

  14. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  15. Patterns of non-verbal social interactions within intensive mathematics intervention contexts

    Science.gov (United States)

    Thomas, Jonathan Norris; Harkness, Shelly Sheats

    2016-06-01

    This study examined the non-verbal patterns of interaction within an intensive mathematics intervention context. Specifically, the authors draw on a social constructivist worldview to examine a teacher's use of gesture in this setting. The teacher conducted a series of longitudinal teaching experiments with a small number of young, school-age children in the context of early arithmetic development. From these experiments, the authors gathered extensive video records of teaching practice and, from an inductive analysis of these records, identified three distinct patterns of teacher gesture: behavior eliciting, behavior suggesting, and behavior replicating. Awareness of their potential to influence students via gesture may prompt teachers to more closely attend to their own interactions with mathematical tools and take these teacher interactions into consideration when forming interpretations of students' cognition.

  16. Verbal short-term memory as an articulatory system: evidence from an alternative paradigm.

    Science.gov (United States)

    Cheung, Him; Wooltorton, Lana

    2002-01-01

    In a series of experiments, the role of articulatory rehearsal in verbal short-term memory was examined via a shadowing-plus-recall paradigm. In this paradigm, subjects shadowed a word target presented closely after an auditory memory list before they recalled the list. The phonological relationship between the shadowing target and the final item on the memory list was manipulated. Experiments 1 and 2 demonstrated that targets sounding similar to the list-final memory item generally took longer to shadow than unrelated targets. This inhibitory effect of phonological relatedness was more pronounced with tense- than lax-vowel pseudoword recall lists. The interaction between vowel tenseness and phonological relatedness was replicated in Experiment 3 using shorter lists of real words. In Experiment 4, concurrent articulation was applied during list learning to block rehearsal; consequently, neither the phonological relatedness effect nor its interaction with vowel tenseness emerged. Experiments 5 and 6 manipulated the occurrence frequencies and lexicality of the recall items, respectively, instead of vowel tenseness. Unlike vowel tenseness, these non-articulatory memory factors failed to interact with the phonological relatedness effect. Experiment 7 orthogonally manipulated the vowel tenseness and frequencies of the recall items; slowing in shadowing times due to phonological relatedness was modulated by vowel tenseness but not frequency. Taken together, these results suggest that under the present paradigm, the modifying effect of vowel tenseness on the magnitude of slowing in shadowing due to phonological relatedness is indicative of a prominent articulatory component in verbal short-term retention. The shadowing-plus-recall approach avoids confounding overt recall with internal memory processing, which is an inherent problem of the traditional immediate serial recall and span tasks.

  17. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can grow from them. Using a reference event database, infrasound data were processed with these new techniques and the results were compared with existing techniques to assess whether there was any improvement in detection capability for the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation via satellite more difficult, but local monitoring networks and telemetry have in some cases been destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and in calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.
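    Audification of infrasound, as described above, amounts to playing the recorded pressure trace back fast enough that its sub-audio frequencies move into the audible range, after which familiar audio clean-up tools can be applied. The following Python sketch is only an illustration of that idea under assumed parameters (sampling rates, speed-up factor, filter corners, synthetic input); it is not the authors' processing chain.

```python
# Illustrative sketch only: "audification" of an infrasound trace by
# time-compressing it into the audible range, plus a simple band-pass
# clean-up loosely analogous to film-sound noise reduction. The input
# array and all rates/corner frequencies below are hypothetical.
import numpy as np
from scipy import signal
from scipy.io import wavfile

def audify(trace, native_rate_hz, speedup=400, out_rate_hz=44100):
    """Play an infrasound record back `speedup` times faster.

    For example, a 0.1 Hz signal recorded at 20 Hz becomes a 40 Hz
    audible tone when sped up by a factor of 400.
    """
    # Resample so that, at out_rate_hz, playback runs `speedup` times faster.
    n_out = int(len(trace) * out_rate_hz / (native_rate_hz * speedup))
    audio = signal.resample(trace, n_out)
    # Band-pass to suppress rumble and hiss outside the band of interest.
    sos = signal.butter(4, [30.0, 8000.0], btype="bandpass",
                        fs=out_rate_hz, output="sos")
    audio = signal.sosfiltfilt(sos, audio)
    audio /= np.max(np.abs(audio)) + 1e-12   # normalise to [-1, 1]
    return audio.astype(np.float32)

# Hypothetical usage with a synthetic trace standing in for real array data.
rng = np.random.default_rng(0)
trace = rng.standard_normal(20 * 3600)       # 1 h of 20 Hz infrasound samples
wavfile.write("audified.wav", 44100, audify(trace, native_rate_hz=20))
```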

  18. EEG correlates of verbal and nonverbal working memory

    Directory of Open Access Journals (Sweden)

    Danker Jared

    2005-11-01

    Background: Distinct cognitive processes support verbal and nonverbal working memory, with verbal memory depending specifically on the subvocal rehearsal of items. Methods: We recorded scalp EEG while subjects performed a Sternberg task. In each trial, subjects judged whether a probe item was one of the three items in a study list. Lists were composed of stimuli from one of five pools whose items either were verbally rehearsable (letters, words, pictures of common objects) or resistant to verbal rehearsal (sinusoidal grating patterns, single dot locations). Results: We found oscillatory correlates unique to verbal stimuli in the θ (4–8 Hz), α (9–12 Hz), β (14–28 Hz), and γ (30–50 Hz) frequency bands. Verbal stimuli generally elicited greater power than did nonverbal stimuli. Enhanced verbal power was found bilaterally in the θ band, over frontal and occipital areas in the α and β bands, and centrally in the γ band. When we looked specifically for cases where oscillatory power in the interval between item presentations was greater than oscillatory power during item presentation, we found enhanced β activity in the frontal and occipital regions. Conclusion: These results implicate stimulus-induced oscillatory activity in verbal working memory and β activity in the process of subvocal rehearsal.
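    For readers unfamiliar with how band-limited oscillatory power of the kind reported above is typically estimated, the sketch below computes mean power in the θ, α, β and γ bands from placeholder EEG epochs using Welch's method. It is a generic illustration, not the authors' analysis pipeline; the sampling rate, epoch shapes and data are assumptions.

```python
# Hedged sketch (not the study's pipeline): estimating oscillatory power in
# the theta/alpha/beta/gamma bands from one EEG channel with Welch's method,
# so that verbal vs. non-verbal list conditions could be compared.
# Data, sampling rate, and epoch counts are made-up placeholders.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (9, 12), "beta": (14, 28), "gamma": (30, 50)}

def band_power(eeg, fs, band):
    """Mean power spectral density within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Placeholder epochs: (n_trials, n_samples) arrays for the two list types.
fs = 250
rng = np.random.default_rng(1)
verbal_epochs = rng.standard_normal((30, 3 * fs))
nonverbal_epochs = rng.standard_normal((30, 3 * fs))

for name, band in BANDS.items():
    v = np.mean([band_power(ep, fs, band) for ep in verbal_epochs])
    n = np.mean([band_power(ep, fs, band) for ep in nonverbal_epochs])
    print(f"{name}: verbal={v:.3f}  nonverbal={n:.3f}")
```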

  19. Toward a functional analysis of private verbal self-regulation.

    OpenAIRE

    Taylor, I; O'Reilly, M F

    1997-01-01

    We developed a methodology, derived from the theoretical literatures on rule-governed behavior and private events, to experimentally investigate the relationship between covert verbal self-regulation and nonverbal behavior. The methodology was designed to assess whether (a) nonverbal behavior was under the control of covert rules and (b) verbal reports of these rules were functionally equivalent to the covert rules that control non-verbal behavior. The research was conducted in the context of...

  20. IRI-2012 MODEL ADAPTABILITY ESTIMATION FOR AUTOMATED PROCESSING OF VERTICAL SOUNDING IONOGRAMS

    Directory of Open Access Journals (Sweden)

    V. D. Nikolaeva

    2014-01-01

    The paper examines the possibility of applying the IRI-2012 global empirical model to the semi-automatic processing of vertical ionospheric sounding data. The main ionospheric characteristics derived from vertical sounding data at the IZMIRAN Voeikovo station in February 2013 were compared with IRI-2012 model calculations. A total of 2688 model values and 1866 measured values of f0F2, f0E, hmF2 and hmE were processed. The critical frequencies of the E and F2 layers (f0E, f0F2) and the heights of their maxima (hmE, hmF2) were determined from the ionograms. Vertical electron-density profiles were reconstructed with the IRI-2012 model from the measured frequencies and heights. Model calculations were also made without including the real vertical sounding data. Monthly averages and standard deviations (σ) of the parameters f0F2, f0E, hmF2 and hmE for each hour of the day were calculated from both the vertical sounding and the model values. Conditions under which the model can be applied in the automated processing of subauroral ionograms were determined. The initial IRI-2012 model can be applied to the processing of subauroral ionograms in the daytime under undisturbed conditions in the absence of sporadic ionization; in this case the model calculations can be adjusted using near-real-time vertical sounding data. IRI-2012 model values for f0E (in the daytime) and hmF2 can be used to reduce computational costs in automatic parameter-search systems and for preliminary determination of the search range for the main parameters. The IRI-2012 model can also be used for a more accurate approximation of the real data series when real values are missing. To account for sporadic ionization, high-latitude ionospheric models must be applied together with a module describing ionization by corpuscular (particle) precipitation.
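    The comparison step described above, hourly monthly means and standard deviations of observed versus modelled parameters, can be sketched as follows. The code is a hypothetical illustration: the CSV file, its column names and the usability criterion are assumptions, and the IRI-2012 values themselves would have to come from an external model run.

```python
# Illustrative sketch of the comparison described in the abstract above:
# hourly monthly means and standard deviations of f0F2 from vertical-sounding
# ionograms vs. IRI-2012 model values. File and column names are assumptions.
import pandas as pd

# Expected columns: time (UTC), f0F2_obs (MHz), f0F2_iri (MHz)
df = pd.read_csv("voeikovo_feb2013.csv", parse_dates=["time"])
df["hour"] = df["time"].dt.hour

stats = df.groupby("hour").agg(
    obs_mean=("f0F2_obs", "mean"), obs_std=("f0F2_obs", "std"),
    iri_mean=("f0F2_iri", "mean"), iri_std=("f0F2_iri", "std"),
)
# Flag hours where the model stays within one observed standard deviation,
# i.e. where it could seed an automated search for the real trace.
stats["model_usable"] = (stats["iri_mean"] - stats["obs_mean"]).abs() <= stats["obs_std"]
print(stats.round(2))
```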

  1. Adults with Asperger Syndrome with and without a Cognitive Profile Associated with "Non-Verbal Learning Disability." A Brief Report

    Science.gov (United States)

    Nyden, Agneta; Niklasson, Lena; Stahlberg, Ola; Anckarsater, Henrik; Dahlgren-Sandberg, Annika; Wentz, Elisabet; Rastam, Maria

    2010-01-01

    Asperger syndrome (AS) and non-verbal learning disability (NLD) are both characterized by impairments in motor coordination, visuo-perceptual abilities, pragmatics and comprehension of language and social understanding. NLD is also defined as a learning disorder affecting functions in the right cerebral hemisphere. The present study investigates…

  2. The effect of visual information on verbal communication process in remote conversation

    OpenAIRE

    國田, 祥子; 中條, 和光

    2005-01-01

    This article examined how visual information affects the verbal communication process in remote conversation. In the experiment, twenty pairs of subjects performed a collaborative task remotely via video and audio links or via an audio link only. During the task, one member of each pair (the instruction-giver) gave directions using a map to the other member (the instruction-receiver). We recorded and analyzed the content of the utterances. Consequently, the existence of visual information did not...

  3. NEGOSIASI PENERJEMAHAN VERBAL - VISUAL DESAIN GRAFIS

    Directory of Open Access Journals (Sweden)

    Moeljadi Pranata

    2000-01-01

    Design is commonly regarded as an act of individual creation to which both verbalization and logical analysis are only peripherally relevant. This article reviews a research study about talking design by Tomes et al. (1998) involving graphic designers and their clients. The conclusion is that talking design -- verbal and visual -- is the design itself. Comments from a design-major student shed further light on the research's outputs. Abstract in Bahasa Indonesia (translated): Design is generally viewed as a work of self-expression; logical analysis and verbal translation are considered only superficially relevant. This article reviews the research study by Tomes et al. (1998) on design talk involving a graphic design team and their clients. The conclusion is that design talk -- verbal and visual -- is the design itself. The article includes design students' responses to the findings of that research. Keywords: graphic design, design process, verbal/visual communication

  4. MIXED SIGNS IN THE SEMIOTICS OF ENGLISH EDUCATIONAL DISCOURSE

    Directory of Open Access Journals (Sweden)

    Goncharova Darya Anatolyevna

    2014-11-01

    The article deals with the linguosemiotic explication of the nomination process in educational discourse through signs of different types – verbal, non-verbal and mixed. Verbal signs are represented by lexical units nominating agents, clients, material and non-material resources, artifacts, processes, incentives and forms of pedagogical influence. The non-verbal signs include paralinguistic signs (gestures, facial expressions, postures of the participants in the educational process); color-semiotic signs (coloremas), in which the information-impacting vector is directed to a color coding of messages that is important for successful educational communication; visual elements representing traditional British values and concepts; and sound and topographic signs that add meaning to the overall significance of a mixed sign. In the linguosemiotic system of educational discourse, mixed signs form the most numerous group and are represented mainly by the emblems, anthems and school songs of secondary schools. The author verifies the hypothesis that the semiotics of the educational process in British secondary schools includes an extensive and complex system of mixed signs, which consist of two non-homogeneous parts – verbal and non-verbal – belonging to sign systems other than natural language and expressed via graphics, colors, music, etc. Linguistic analysis is applied to the study of the semiotic space of educational discourse. The article finds that, in the context of educational communication, verbal, non-verbal and mixed signs form a unity of linguistic and extralinguistic parameters, standing in different relationships and presenting a multilayer intersection of lexical groups, graphic description, color schemes and musical accompaniment.

  5. Bi-directional effects of depressed mood in the postnatal period on mother-infant non-verbal engagement with picture books.

    Science.gov (United States)

    Reissland, Nadja; Burt, Mike

    2010-12-01

    The purpose of the present study is to examine the bi-directional effects of maternal depressed mood in the postnatal period on maternal and infant non-verbal behaviors while looking at a picture book. Although it is acknowledged that non-verbal engagement with picture books in infancy plays an important role, the effect of maternal depressed mood on stimulating infants' interest in books is not known. Sixty-one mothers and their infants, 38 boys and 23 girls, were observed twice approximately 3 months apart (first observation: mean age 6.8 months, range 3-11 months, 32 mothers with depressed mood; second observation: mean age 10.2 months, range 6-16 months, 17 mothers with depressed mood). There was a significant effect of depressed mood on negative behaviors: infants of mothers with depressed mood tended to push away and close books more often. Negative behaviors (pushing the book away or closing it on the part of the infant, and withholding the book and restraining the infant on the part of the mother) that were expressed during the first visit were more likely to be expressed during the second visit. Levels of negative behaviors by mother and infant were strongly related during each visit. Additionally, the pattern between visits suggests that maternal negative behavior may be the cause of infant negative behavior. These results are discussed in terms of the effects of maternal depressed mood on the bi-directional relation of non-verbal engagement between mother and child.

  6. Computerized training of non-verbal reasoning and working memory in children with intellectual disability

    Directory of Open Access Journals (Sweden)

    Stina Söderqvist

    2012-10-01

    Children with intellectual disabilities show deficits in both reasoning ability and working memory (WM) that impact everyday functioning and academic achievement. In this study we investigated the feasibility of cognitive training for improving WM and non-verbal reasoning (NVR) ability in children with intellectual disability. Participants were randomized to a 5-week adaptive training program (intervention group) or a non-adaptive version of the program (active control group). Cognitive assessments were conducted prior to and directly after training, and one year later, to examine effects of the training. Improvements during training varied widely, and the amount of progress during training predicted transfer to WM and comprehension of instructions, with higher training progress being associated with greater transfer effects. The strongest predictors of training progress were found to be gender, co-morbidity and baseline capacity on verbal WM. In particular, females without an additional diagnosis and with higher baseline performance showed greater progress. No significant effects of training were observed at the one-year follow-up, suggesting that training should be more intense or repeated in order for effects to persist in children with intellectual disabilities. A major finding of this study is that cognitive training is feasible in children with intellectual disabilities and can help improve their cognitive capacities. However, a minimum cognitive capacity or training ability seems necessary for the training to be beneficial, with some individuals showing little improvement in performance. Future studies of cognitive training should take into consideration how inter-individual differences in training progress influence transfer effects and further investigate how baseline capacities predict training outcome.

  7. Generation of Complex Verbal Morphology in First and Second Language Acquisition: Evidence from Russian

    Directory of Open Access Journals (Sweden)

    Kira Gor

    2004-07-01

    This study explores the structure of the mental lexicon and the processing of Russian verbal morphology by two groups of speakers, adult American learners of Russian and Russian children aged 4-6, and reports the results of two matching experiments conducted at the University of Maryland, USA and St. Petersburg State University, Russia. The theoretical framework for this study comes from research on the structure of the mental lexicon and modularity in morphological processing. So far, there are very few studies investigating the processing of complex verbal morphology, with most of the work done on Icelandic, Norwegian, Italian, and Russian. The current views are shaped predominantly by research on English regular and irregular past-tense inflection, which has been conducted within two competing approaches. This study investigates the processing of verbal morphology in Russian, a language with numerous verb classes differing in size and the number and complexity of conjugation rules. It assumes that instead of a sharp opposition of regular and irregular verb processing, a gradual parameter of regularity may be more appropriate for Russian. Therefore, the issue of symbolic rule application versus associative patterning can take on a new meaning for Russian, possibly with the distinction between default and non-default processing replacing the regular-irregular distinction.

  8. Working memory still needs verbal rehearsal.

    Science.gov (United States)

    Lucidi, Annalisa; Langerock, Naomi; Hoareau, Violette; Lemaire, Benoît; Camos, Valérie; Barrouillet, Pierre

    2016-02-01

    The causal role of verbal rehearsal in working memory has recently been called into question. For example, the SOB-CS (Serial Order in a Box-Complex Span) model assumes that there is no maintenance process for the strengthening of items in working memory, but instead a process of removal of distractors that are involuntarily encoded and create interference with memory items. In the present study, we tested the idea that verbal working memory performance can be accounted for without assuming a causal role of the verbal rehearsal process. We demonstrate in two experiments using a complex span task and a Brown-Peterson paradigm that increasing the number of repetitions of the same distractor (the syllable ba that was read aloud at each of its occurrences on screen) has a detrimental effect on the concurrent maintenance of consonants whereas the maintenance of spatial locations remains unaffected. A detailed analysis of the tasks demonstrates that accounting for this effect within the SOB-CS model requires a series of unwarranted assumptions leading to undesirable further predictions contradicted by available experimental evidence. We argue that the hypothesis of a maintenance mechanism based on verbal rehearsal that is impeded by concurrent articulation still provides the simplest and most compelling account of our results.

  9. Contextual analysis of human non-verbal guide behaviors to inform the development of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    This paper reports the first step in a series of studies to design the interaction behaviors of an outdoor robotic guide. We describe and report the use case development carried out to identify effective human tour guide behaviors. In this paper we focus on non-verbal communication cues in gaze,

  10. The nature of the verbal self-monitor

    NARCIS (Netherlands)

    Ganushchak, Aleksandra (Lesya) Yurievna

    2008-01-01

    This thesis investigated the correlates of verbal self-monitoring in healthy adults. The central questions addressed in the thesis are: Does verbal monitoring work in a similar way as action monitoring? If the Error-Related Negativity (ERN) is associated with error processing in action monitoring,

  11. Preverbal and verbal counting and computation.

    Science.gov (United States)

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.

  12. Auditory Verbal Cues Alter the Perceived Flavor of Beverages and Ease of Swallowing: A Psychometric and Electrophysiological Analysis

    Directory of Open Access Journals (Sweden)

    Aya Nakamura

    2013-01-01

    We investigated the possible effects of auditory verbal cues on flavor perception and swallow physiology for younger and older participants. Apple juice, aojiru (grass juice), and water were ingested with or without auditory verbal cues. Flavor perception and ease of swallowing were measured using a visual analog scale, and swallow physiology by surface electromyography and cervical auscultation. The auditory verbal cues had significant positive effects on flavor and ease of swallowing as well as on swallow physiology. The taste score and the ease of swallowing score significantly increased when the participant's anticipation was primed by accurate auditory verbal cues. There was no significant effect of auditory verbal cues on the distaste score. Regardless of age, the maximum suprahyoid muscle activity significantly decreased when a beverage was ingested without auditory verbal cues. The interval between the onset of swallowing sounds and the peak timing point of the infrahyoid muscle activity was significantly shortened in the older participant group when the anticipation induced by the cue was contradicted. These results suggest that auditory verbal cues can improve the perceived flavor of beverages and swallow physiology.
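    One of the timing measures reported above, the interval between the onset of the swallowing sound and the peak of the infrahyoid muscle activity, can be approximated from the auscultation and sEMG channels as sketched below. This is a rough, generic illustration rather than the authors' method; the envelope filter, onset threshold, sampling rate and placeholder signals are all assumptions.

```python
# Rough sketch, not the authors' pipeline: estimating the interval between
# the onset of the swallowing sound (cervical auscultation) and the peak of
# the infrahyoid surface-EMG envelope. Thresholds and rates are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope(x, fs, lowpass_hz=10.0):
    """Rectified, low-pass-filtered amplitude envelope."""
    sos = butter(4, lowpass_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, np.abs(hilbert(x)))

def onset_to_peak_interval(sound, emg, fs, k=3.0):
    snd_env = envelope(sound, fs)
    emg_env = envelope(emg, fs)
    # Onset: first sample exceeding the baseline mean + k standard deviations
    # (baseline taken as the first 0.5 s; returns index 0 if never exceeded).
    base = snd_env[: fs // 2]
    onset_idx = np.argmax(snd_env > base.mean() + k * base.std())
    peak_idx = np.argmax(emg_env)
    return (peak_idx - onset_idx) / fs   # seconds (negative if peak precedes onset)

fs = 2000
rng = np.random.default_rng(2)
sound = rng.standard_normal(5 * fs)      # placeholder auscultation channel
emg = rng.standard_normal(5 * fs)        # placeholder infrahyoid sEMG channel
print(f"onset-to-peak interval: {onset_to_peak_interval(sound, emg, fs):.3f} s")
```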

  13. Non-verbal Persuasion and Communication in an Affective Agent

    NARCIS (Netherlands)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk K.J.; Niewiadomski, Radoslaw; Pelachaud, Catherine; Peters, Christopher; Poggi, Isabella; Rehm, Matthias; Cowie, Roddy; Pelachaud, Catherine; Petta, Paolo

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, "it's not what you say, it's how you say it". While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining

  14. From tacit to verbalized knowledge. Towards a culturally informed musical analysis of Central Javanese karawitan

    Directory of Open Access Journals (Sweden)

    Gerd Grupe

    2015-12-01

    In the music cultures of the world we encounter both tacit and verbalized musical knowledge to various degrees. In order to reconstruct emic views on musical concepts and practices, which are a prerequisite of any seriously culturally informed musical analysis, we need to disclose local knowledge by appropriate means, even if it is not directly open to verbal discourse. Current computer technology enables us to set up interactive experiments in which local experts can verbally address relevant musical features while discussing audio examples that have been prepared by the researcher. The performance of virtual musicians can be evaluated by the local experts, and various relevant parameters may be investigated individually if suitable versions of customary pieces are available. Thus aspects which seem to be tacit knowledge, because they usually elude verbal discourse, can be made accessible and transformed into verbalized, declarative knowledge. The paper presents preliminary results of a case study on Central Javanese gamelan music (karawitan), in which renowned Javanese musicians commented on computer-generated versions of traditional compositions regarding the idiomatically appropriate performance practice of the virtual ensemble as well as the tuning and sound of various virtual gamelan sets emulated by the computer.

  15. Manipulating stored phonological input during verbal working memory

    Science.gov (United States)

    Cogan, Gregory B.; Iyer, Asha; Melloni, Lucia; Thesen, Thomas; Friedman, Daniel; Doyle, Werner; Devinsky, Orrin; Pesaran, Bijan

    2016-01-01

    Verbal working memory (vWM) involves storing and manipulating information in phonological sensory input. An influential theory of vWM proposes that manipulation is carried out by a central executive while storage is performed by two interacting systems: a phonological input buffer that captures sound-based information and an articulatory rehearsal system that controls speech motor output. Whether, when, and how neural activity in the brain encodes these components remains unknown. Here, we read out the contents of vWM from neural activity in human subjects as they manipulate stored speech sounds. As predicted, we identify storage systems that contain both phonological sensory and articulatory motor representations. Surprisingly, however, we find that manipulation does not involve a single central executive but rather involves two systems with distinct contributions to successful manipulation. We propose, therefore, that multiple subsystems comprise the central executive needed to manipulate stored phonological input for articulatory motor output in vWM. PMID:27941789

  16. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the everyday understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  17. Análise da comunicação verbal e não-verbal de crianças com deficiencia visual durante interação com a mãe Analysis of the verbal and non-verbal communication of children with visual impairment during interaction with their mothers

    Directory of Open Access Journals (Sweden)

    Jáima Pinheiro de Oliveira

    2005-12-01

    The importance of language for children with visual impairment is unquestionable, since it is the main means of promoting their social interaction and is fundamental in mediating their entire learning process. The objective of this study was therefore to describe the pragmatic performance of the language of blind children, children with low vision and children with normal vision, analysing the particularities of mother-child interaction in free and structured contexts. Six children took part in the study: two blind, two with low vision and two with normal vision, selected according to specific criteria. Each dyad was filmed twice in the home environment, once in a free-play situation and once in a structured situation. The analysis covered the children's verbal and non-verbal communication and was carried out by functional categorisation, using a protocol previously developed and validated by judges, which listed the communicative means as well as the pragmatic functions produced by the participants. The data revealed a predominance of the verbal communicative mode in both the free and the structured situations. Overall, the results indicated that, although there were particularities in its use, the language of the visually impaired children was not deficient in comparison with that of children with normal vision. In addition, the mothers of the blind and low-vision children used strategies that favoured this performance, such as descriptions of the environment and indications and locations of objects during the interaction, in both the free and the structured contexts.

  18. Cognitive Predictors of Verbal Memory in a Mixed Clinical Pediatric Sample

    Directory of Open Access Journals (Sweden)

    Shelley C. Heaton

    2013-08-01

    Verbal memory problems, along with other cognitive difficulties, are common in children diagnosed with neurological and/or psychological disorders. Historically, these "memory problems" have been poorly characterized and often present with a heterogeneous pattern of performance across memory processes, even within a specific diagnostic group. The current study examined archival neuropsychological data from a large mixed clinical pediatric sample in order to understand whether functioning in other cognitive areas (i.e., verbal knowledge, attention, working memory, executive functioning) may explain some of the performance variability seen across verbal memory tasks of the Children's Memory Scale (CMS). Multivariate analyses revealed that among the cognitive functions examined, only verbal knowledge explained a significant amount of variance in overall verbal memory performance. Further univariate analyses examining the component processes of verbal memory indicated that verbal knowledge is specifically related to encoding, but not to the retention or retrieval stages. Future research is needed to replicate these findings in other clinical samples, to examine whether verbal knowledge predicts performance on other verbal memory tasks, and to explore whether these findings also hold true for visual memory tasks. Successful replication of the current study findings would indicate that interventions targeting verbal encoding deficits should include efforts to improve verbal knowledge.

  19. Non-intentional but not automatic: reduction of word- and arrow-based compatibility effects by sound distractors in the same categorical domain.

    Science.gov (United States)

    Miles, James D; Proctor, Robert W

    2009-10-01

    In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.

  20. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
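    The stimulus preparation described above, convolving each recorded sound with measured non-individual HRTFs so that it appears to come from a given direction over headphones, can be sketched as follows. This is an illustrative example only; the file names, HRIR archive and response data are hypothetical, and the study's own HRTF measurements are not reproduced here.

```python
# Minimal sketch of the stimulus-preparation step described above: convolving
# a mono recording with a left/right head-related impulse response (HRIR)
# pair to place it at a given direction, then scoring localization accuracy.
# File names, HRIR source, and the scoring data are all hypothetical.
import numpy as np
from scipy.signal import fftconvolve
from scipy.io import wavfile

def spatialize(mono, hrir_left, hrir_right):
    """Return a (n_samples, 2) binaural signal for headphone playback.

    Assumes the two HRIRs have the same length so the channels align.
    """
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    out = np.stack([left, right], axis=1)
    return (out / (np.max(np.abs(out)) + 1e-12)).astype(np.float32)

fs, wood = wavfile.read("wood_hit.wav")          # hypothetical mono recording
hrirs = np.load("hrir_nonindividual.npz")        # hypothetical measured HRIRs
binaural = spatialize(wood.astype(float), hrirs["left_30deg"], hrirs["right_30deg"])
wavfile.write("wood_hit_30deg.wav", fs, binaural)

# Localization accuracy = proportion of trials answered at the true azimuth.
responses = np.array([30, 30, 0, 30, -30, 30])   # placeholder subject answers (deg)
true_azimuth = 30
print(f"accuracy: {100 * np.mean(responses == true_azimuth):.1f}%")
```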

  1. [Verbal patient information through nurses--a case of stroke patients].

    Science.gov (United States)

    Christmann, Elli; Holle, Regina; Schüssler, Dörte; Beier, Jutta; Dassen, Theo

    2004-06-01

    The article presents the results of theoretical work in the field of nursing education on the topic: Verbal Patient Information through Nurses--A Case of Stroke Patients. The literature review and analysis show that there is a general shortage of (stroke) patient information and a lack of successful concepts and strategies for verbal (stroke) patient information provided by nurses in hospitals. The authors have developed a theoretical basis for health information as a nursing intervention and present a model of health information as a "communicational teach-and-learn process", which is applicable to all patients. Health information takes place as a separate nursing intervention within a non-public, face-to-face communication situation and within the steps of the nursing process. Health information is seen as a learning process for patients and for nurses as well. We consider learning as information production (constructivism) and information processing (cognitivism). Both processes are influenced by different factors, including the patient's illness situation, personality, the information content and the environment. For successful health information, it is necessary to take care of these aspects, and this can be realized through a constructivist understanding of didactics. There is a need for an evaluation study to test our concept of health information.

  2. Music and Sound in Time Processing of Children with ADHD.

    Science.gov (United States)

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects, with impairments in many areas of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from typically developing children in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participant's performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Two main results emerged: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups judged tracks as longer when the musical notes had longer durations, whereas in the control group the judged duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  3. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes.

    Science.gov (United States)

    Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M

    2016-01-01

    Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce divergent speech that eventually spreads across the community and contributes to sound change.

  5. Non-Gaussianity in multi-sound-speed disformally coupled inflation

    Energy Technology Data Exchange (ETDEWEB)

    De Bruck, Carsten van; Longden, Chris [Consortium for Fundamental Physics, School of Mathematics and Statistics, University of Sheffield, Hounsfield Road, Sheffield S3 7RH (United Kingdom); Koivisto, Tomi, E-mail: C.vandeBruck@sheffield.ac.uk, E-mail: tomi.koivisto@nordita.org, E-mail: cjlongden1@sheffield.ac.uk [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-10691 Stockholm (Sweden)

    2017-02-01

    Most, if not all, scalar-tensor theories are equivalent to General Relativity with a disformally coupled matter sector. In extra-dimensional theories such a coupling can be understood as a result of induction of the metric on a brane that matter is confined to. This article presents a first look at the non-Gaussianities in disformally coupled inflation, a simple two-field model that features a novel kinetic interaction. Cases with both canonical and Dirac-Born-Infeld (DBI) kinetic terms are taken into account, the latter motivated by the possible extra-dimensional origin of the disformality. The computations are carried out for the equilateral configuration in the slow-roll regime, wherein it is found that the non-Gaussianity is typically rather small and negative. This is despite the fact that the new kinetic interaction causes the perturbation modes to propagate with different sound speeds, which may both significantly deviate from unity during inflation.

  6. Kreative metoder i verbal supervision

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2013-01-01

    , movements in the room, etc.) and 4) that communication takes place primarily through verbal-linguistic exchanges. After a discussion of the relationship between creativity and creative methods, the focus is on the relevance of, and ways of gaining access to, unconscious manifestations. The non- and paraverbal significance of language is taken into account. A central...

  7. Young children's coding and storage of visual and verbal material.

    Science.gov (United States)

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  8. Visuospatial working memory for locations, colours, and binding in typically developing children and in children with dyslexia and non-verbal learning disability.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Tripodi, Doriana; Cornoldi, Cesare

    2014-03-01

    This study examined forward and backward recall of locations and colours and the binding of locations and colours, comparing typically developing children - aged between 8 and 10 years - with two different groups of children of the same age with learning disabilities (dyslexia in one group, non-verbal learning disability [NLD] in the other). Results showed that the groups with learning disabilities had different visuospatial working memory problems and that children with NLD had particular difficulties in the backward recall of locations. The differences between the groups disappeared, however, when locations and colours were bound together. It was concluded that specific processes may be involved in children in the binding and backward recall of different types of information, as these are not simply the result of combining the single processes needed to recall single features.

  9. Processamento auditivo em indivíduos com epilepsia de lobo temporal Auditory processing in patients with temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Juliana Meneguello

    2006-08-01

    and nonverbal sounds. METHOD: Eight individuals with temporal lobe epilepsy were assessed, after excluding those with an unconfirmed diagnosis or with the focus of discharges not limited to this lobe. The evaluation was carried out with special auditory tests: the Sound Localization Test, Duration Pattern Test, Dichotic Digits Test and Non-Verbal Dichotic Test. Their performances were compared to the performances of individuals without neurological diseases (case-control study). RESULTS: Similar performances were observed between patients with temporal lobe epilepsy and the control group regarding the auditory mechanism of sound source direction discrimination. For the other auditory mechanisms assessed, the patients with temporal lobe epilepsy presented worse results. CONCLUSION: Individuals with temporal lobe epilepsy had more deficits in auditory processing than those without cortical damage.

  10. Development and psychometric validation of the verbal affective memory test

    DEFF Research Database (Denmark)

    Jensen, Christian Gaden; Hjordt, Liv V; Stenbæk, Dea S

    2015-01-01

    We here present the development and validation of the Verbal Affective Memory Test-24 (VAMT-24). First, we ensured face validity by selecting 24 words reliably perceived as positive, negative or neutral, respectively, according to healthy Danish adults' valence ratings of 210 common and non-taboo words. Furthermore, larger seasonal decreases in positive recall significantly predicted larger increases in depressive symptoms. Retest reliability was satisfactory, rs ≥ .77. In conclusion, VAMT-24 is more thoroughly developed and validated than existing verbal affective memory tests and showed satisfactory psychometric properties. VAMT-24 seems especially sensitive to measuring positive verbal recall bias, perhaps due to the application of common, non-taboo words. Based on the psychometric and clinical results, we recommend VAMT-24 for international translations and studies of affective memory.

  11. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine if particular sound types are more favorable for memory performance. Experiment 1 suggests memory performance with vocalization sound types (particularly monkey), are significantly better than when using non-vocalization sound types, and male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting the biological, and/or ethological meaning of these sounds are more effective for auditory memory. 2009 Elsevier B.V.

  12. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  13. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    NARCIS (Netherlands)

    Colizoli, O.; Murre, J.M.J.; Rouw, R.

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the

  14. Verbal memory retrieval engages visual cortex in musicians.

    Science.gov (United States)

    Huang, Z; Zhang, J X; Yang, Z; Dong, G; Wu, J; Chan, A S; Weng, X

    2010-06-16

    As one major line of research on brain plasticity, many imaging studies have been conducted to identify the functional and structural reorganization associated with musical expertise. Based on previous behavioral research, the present study used functional magnetic resonance imaging to identify the neural correlates of superior verbal memory performance in musicians. Participants with and without musical training performed a verbal memory task in which they first encoded an auditorily delivered word list and then silently recalled as many words as possible. In separate blocks they performed a control task involving pure-tone pitch judgment. A post-scan recognition test showed better memory performance in musicians than in non-musicians. During memory retrieval, the musicians showed significantly greater activations in bilateral, though left-lateralized, visual cortex relative to the pitch-judgment baseline. In comparison, no such visual cortical activations were found in the non-musicians. No group differences were observed during the encoding stage. The results echo a previous report of visual cortical activation during verbal memory retrieval, in the absence of any visual sensory stimulation, in the blind population, who are also known to possess superior verbal memory. It is suggested that the visual cortex can be recruited to serve as an extra memory resource and contributes to superior verbal memory in special situations. Whereas in the blind population such cross-modal functional reorganization may be induced by sensory deprivation, in musicians it may be induced by the long-term and demanding nature of musical training, which pushes them to use as many available neural resources as possible.

  15. Adverse Life Events and Emotional and Behavioral Problems in Adolescence: The Role of Non-Verbal Cognitive Ability and Negative Cognitive Errors

    Science.gov (United States)

    Flouri, Eirini; Panourgia, Constantina

    2011-01-01

    The aim of this study was to test whether negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the moderator effect of non-verbal cognitive ability on the association between adverse life events (life stress) and emotional and behavioral problems in adolescence. The sample consisted of 430…

  16. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  17. Learning foreign sounds in an alien world: videogame training improves non-native speech categorization.

    Science.gov (United States)

    Lim, Sung-joo; Holt, Lori L

    2011-01-01

    Although speech categories are defined by multiple acoustic dimensions, some are perceptually weighted more than others and there are residual effects of native-language weightings in non-native speech perception. Recent research on nonlinguistic sound category learning suggests that the distribution characteristics of experienced sounds influence perceptual cue weights: Increasing variability across a dimension leads listeners to rely upon it less in subsequent category learning (Holt & Lotto, 2006). The present experiment investigated the implications of this among native Japanese learning English /r/-/l/ categories. Training was accomplished using a videogame paradigm that emphasizes associations among sound categories, visual information, and players' responses to videogame characters rather than overt categorization or explicit feedback. Subjects who played the game for 2.5h across 5 days exhibited improvements in /r/-/l/ perception on par with 2-4 weeks of explicit categorization training in previous research and exhibited a shift toward more native-like perceptual cue weights. Copyright © 2011 Cognitive Science Society, Inc.

  18. Direct observation of mother-child communication in pediatric cancer: assessment of verbal and non-verbal behavior and emotion.

    Science.gov (United States)

    Dunn, Madeleine J; Rodriguez, Erin M; Miller, Kimberly S; Gerhardt, Cynthia A; Vannatta, Kathryn; Saylor, Megan; Scheule, C Melanie; Compas, Bruce E

    2011-06-01

    To examine the acceptability and feasibility of coding observed verbal and nonverbal behavioral and emotional components of mother-child communication among families of children with cancer. Mother-child dyads (N=33, children ages 5-17 years) were asked to engage in a videotaped 15-min conversation about the child's cancer. Coding was done using the Iowa Family Interaction Rating Scale (IFIRS). Acceptability and feasibility of direct observation in this population were partially supported: 58% consented and 81% of those (47% of all eligible dyads) completed the task; trained raters achieved 78% agreement in ratings across codes. The construct validity of the IFIRS was demonstrated by expected associations within and between positive and negative behavioral/emotional code ratings and between mothers' and children's corresponding code ratings. Direct observation of mother-child communication about childhood cancer has the potential to be an acceptable and feasible method of assessing verbal and nonverbal behavior and emotion in this population.

  19. Verbal behavior

    OpenAIRE

    Michael, Jack

    1984-01-01

    The recent history and current status of the area of verbal behavior are considered in terms of three major thematic lines: the operant conditioning of adult verbal behavior, learning to be an effective speaker and listener, and developments directly related to Skinner's Verbal Behavior. Other topics not directly related to the main themes are also considered: the work of Kurt Salzinger, ape-language research, and human operant research related to rule-governed behavior.

  20. The Influence of verbalization on the pattern of cortical activation during mental arithmetic

    Directory of Open Access Journals (Sweden)

    Zarnhofer Sabrina

    2012-03-01

    Full Text Available Abstract Background The aim of the present functional magnetic resonance imaging (fMRI) study at 3 T was to investigate the influence of the verbal-visual cognitive style on cerebral activation patterns during mental arithmetic. In the domain of arithmetic, a visual style might for example mean to visualize numbers and (intermediate) results, and a verbal style might mean that numbers and (intermediate) results are verbally repeated. In this study, we investigated, first, whether verbalizers show activations in areas for language processing, and whether visualizers show activations in areas for visual processing during mental arithmetic. Some researchers have proposed that the left and right intraparietal sulcus (IPS) and the left angular gyrus (AG), two areas involved in number processing, show some domain or modality specificity. That is, verbal for the left AG, and visual for the left and right IPS. We investigated, second, whether the activation in these areas implicated in number processing depended on an individual's cognitive style. Methods 42 young healthy adults participated in the fMRI study. The study comprised two functional sessions. In the first session, subtraction and multiplication problems were presented in an event-related design, and in the second functional session, multiplications were presented in two formats, as Arabic numerals and as written number words, in an event-related design. The individual's habitual use of visualization and verbalization during mental arithmetic was assessed by a short self-report assessment. Results We observed in both functional sessions that the use of verbalization predicts activation in brain areas associated with language (supramarginal gyrus) and auditory processing (Heschl's gyrus, Rolandic operculum). However, we found no modulation of activation in the left AG as a function of verbalization. Conclusions Our results confirm that strong verbalizers use mental speech as a form of mental

  1. "He says, she says": a comparison of fathers' and mothers' verbal behavior during child cold pressor pain.

    Science.gov (United States)

    Moon, Erin C; Chambers, Christine T; McGrath, Patrick J

    2011-11-01

    Mothers' behavior has a powerful impact on child pain. Maternal attending talk (talk focused on child pain) is associated with increased child pain whereas maternal non-attending talk (talk not focused on child pain) is associated with decreased child pain. The present study compared mothers' and fathers' verbal behavior during child pain. Forty healthy 8- to 12-year-old children completed the cold pressor task (CPT), once with their mothers present and once with their fathers present in a counterbalanced order. Parent verbalizations were coded as Attending Talk or Non-Attending Talk. Results indicated that child symptom complaints were positively correlated with parent Attending Talk and negatively correlated with parent Non-Attending Talk. Furthermore, child pain tolerance was negatively correlated with parent Attending Talk and positively correlated with parent Non-Attending Talk. Mothers and fathers did not use different proportions of Attending or Non-Attending Talk. Exploratory analyses of parent verbalization subcodes indicated that mothers used more nonsymptom-focused verbalizations whereas fathers used more criticism (a low-frequency occurrence). The findings indicate that for both mothers and fathers, verbal attention is associated with higher child pain and verbal non-attention is associated with lower child pain. The results also suggest that mothers' and fathers' verbal behavior during child pain generally does not differ. To date, studies of the effects of parental behavior on child pain have focused almost exclusively on mothers. The present study compared mothers' and fathers' verbal behavior during child pain. The results can be used to inform clinical recommendations for mothers and fathers to help their children cope with pain. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  2. A common neural substrate for language production and verbal working memory.

    Science.gov (United States)

    Acheson, Daniel J; Hamidi, Massihullah; Binder, Jeffrey R; Postle, Bradley R

    2011-06-01

    Verbal working memory (VWM), the ability to maintain and manipulate representations of speech sounds over short periods, is held by some influential models to be independent from the systems responsible for language production and comprehension [e.g., Baddeley, A. D. Working memory, thought, and action. New York, NY: Oxford University Press, 2007]. We explore the alternative hypothesis that maintenance in VWM is subserved by temporary activation of the language production system [Acheson, D. J., & MacDonald, M. C. Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychological Bulletin, 135, 50-68, 2009b]. Specifically, we hypothesized that for stimuli lacking a semantic representation (e.g., nonwords such as mun), maintenance in VWM can be achieved by cycling information back and forth between the stages of phonological encoding and articulatory planning. First, fMRI was used to identify regions associated with two different stages of language production planning: the posterior superior temporal gyrus (pSTG) for phonological encoding (critical for VWM of nonwords) and the middle temporal gyrus (MTG) for lexical-semantic retrieval (not critical for VWM of nonwords). Next, in the same subjects, these regions were targeted with repetitive transcranial magnetic stimulation (rTMS) during language production and VWM task performance. Results showed that rTMS to the pSTG, but not the MTG, increased error rates on paced reading (a language production task) and on delayed serial recall of nonwords (a test of VWM). Performance on a lexical-semantic retrieval task (picture naming), in contrast, was significantly sensitive to rTMS of the MTG. Because rTMS was guided by language production-related activity, these results provide the first causal evidence that maintenance in VWM directly depends on the long-term representations and processes used in speech production.

  3. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  4. Sound improves diminished visual temporal sensitivity in schizophrenia

    NARCIS (Netherlands)

    de Boer-Schellekens, L.; Stekelenburg, J.J.; Maes, J.P.; van Gool, A.R.; Vroomen, J.

    2014-01-01

    Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive judging the temporal order

  5. Brain regions for sound processing and song release in a small grasshopper.

    Science.gov (United States)

    Balvantray Bhavsar, Mit; Stumpner, Andreas; Heinrich, Ralf

    2017-05-01

    We investigated brain regions - mostly neuropils - that process auditory information relevant for the initiation of response songs of female grasshoppers Chorthippus biguttulus during bidirectional intraspecific acoustic communication. Male-female acoustic duets in the species Ch. biguttulus require the perception of sounds, their recognition as a species- and gender-specific signal and the initiation of commands that activate thoracic pattern generating circuits to drive the sound-producing stridulatory movements of the hind legs. To study sensory-to-motor processing during acoustic communication we used multielectrodes that allowed simultaneous recordings of acoustically stimulated electrical activity from several ascending auditory interneurons or local brain neurons and subsequent electrical stimulation of the recording site. Auditory activity was detected in the lateral protocerebrum (where most of the described ascending auditory interneurons terminate), in the superior medial protocerebrum and in the central complex, that has previously been implicated in the control of sound production. Neural responses to behaviorally attractive sound stimuli showed no or only poor correlation with behavioral responses. Current injections into the lateral protocerebrum, the central complex and the deuto-/tritocerebrum (close to the cerebro-cervical fascicles), but not into the superior medial protocerebrum, elicited species-typical stridulation with high success rate. Latencies and numbers of phrases produced by electrical stimulation were different between these brain regions. Our results indicate three brain regions (likely neuropils) where auditory activity can be detected with two of these regions being potentially involved in song initiation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    Science.gov (United States)

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age of 24 years) and 19 older adults (average age of 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental imagery. The variance analysis indicates a larger age-related decrease in the concreteness effect. The major contribution of our study rests on the fact that the decline with age of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  7. Tetrahydrocannabinol (THC) impairs encoding but not retrieval of verbal information.

    Science.gov (United States)

    Ranganathan, Mohini; Radhakrishnan, Rajiv; Addy, Peter H; Schnakenberg-Martin, Ashley M; Williams, Ashley H; Carbuto, Michelle; Elander, Jacqueline; Pittman, Brian; Andrew Sewell, R; Skosnik, Patrick D; D'Souza, Deepak Cyril

    2017-10-03

    Cannabis and agonists of the brain cannabinoid receptor (CB1R) produce acute memory impairments in humans. However, the extent to which cannabinoids impair the component processes of encoding and retrieval has not been established in humans. The objective of this analysis was to determine whether the administration of Δ9-tetrahydrocannabinol (THC), the principal psychoactive constituent of cannabis, impairs encoding and/or retrieval of verbal information. Healthy subjects were recruited from the community. Subjects were administered the Rey Auditory Verbal Learning Test (RAVLT) either before administration of THC (experiment #1) (n=38) or while under the influence of THC (experiment #2) (n=57). Immediate and delayed recall on the RAVLT was compared. Subjects received intravenous THC, in a placebo-controlled, double-blind, randomized manner at doses known to produce behavioral and subjective effects consistent with cannabis intoxication. Total immediate recall, short delayed recall, and long delayed recall were reduced in a statistically significant manner only when the RAVLT was administered to subjects while they were under the influence of THC (experiment #2) and not when the RAVLT was administered prior. THC acutely interferes with encoding of verbal memory without interfering with retrieval. These data suggest that learning information prior to the use of cannabis or cannabinoids is not likely to disrupt recall of that information. Future studies will be necessary to determine whether THC impairs encoding of non-verbal information, to what extent THC impairs memory consolidation, and the role of other cannabinoids in the memory-impairing effects of cannabis. Cannabinoids, Neural Synchrony, and Information Processing (THC-Gamma) http://clinicaltrials.gov/ct2/show/study/NCT00708994 NCT00708994 Pharmacogenetics of Cannabinoid Response http://clinicaltrials.gov/ct2/show/NCT00678730 NCT00678730. Copyright © 2017. Published by Elsevier Inc.

  8. Mood As Cumulative Expectation Mismatch: A Test of Theory Based on Data from Non-verbal Cognitive Bias Tests

    Directory of Open Access Journals (Sweden)

    Camille M. C. Raoult

    2017-12-01

    Full Text Available Affective states are known to influence behavior and cognitive processes. To assess mood (moderately long-term affective states), the cognitive judgment bias test was developed and has been widely used in various animal species. However, little is known about how mood changes, how mood can be experimentally manipulated, and how mood then feeds back into cognitive judgment. A recent theory argues that mood reflects the cumulative impact of differences between obtained outcomes and expectations. Here expectations refer to an established context. Situations in which an established context fails to match an outcome are then perceived as mismatches of expectation and outcome. We take advantage of the large number of studies published on non-verbal cognitive bias tests in recent years (95 studies with a total of 162 independent tests) to test whether cumulative mismatch could indeed have led to the observed mood changes. Based on a criteria list, we assessed whether mismatch had occurred with the experimental procedure used to induce mood (mood induction mismatch), or in the context of the non-verbal cognitive bias procedure (testing mismatch). For the mood induction mismatch, we scored the mismatch between the subjects' potential expectations and the manipulations conducted for inducing mood whereas, for the testing mismatch, we scored mismatches that may have occurred during the actual testing. We then investigated whether these two types of mismatch can predict the actual outcome of the cognitive bias study. The present evaluation shows that mood induction mismatch cannot well predict the success of a cognitive bias test. On the other hand, testing mismatch can modulate or even invert the expected outcome. We think cognitive bias studies should more specifically aim at creating expectation mismatch while inducing mood states to test the cumulative mismatch theory more properly. Furthermore, testing mismatch should be avoided as much as possible
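
    A minimal sketch of the cumulative-mismatch idea summarized above, under the assumption that mood can be tracked as a running sum of differences between obtained outcomes and a slowly adapting expectation; the function name, adaptation rate, and toy outcome values are illustrative and are not the scoring procedure used in the review:

        # Toy model: mood accumulates outcome-minus-expectation mismatches.
        def cumulative_mismatch(outcomes, expectation=1.0, adaptation_rate=0.1):
            mood = 0.0  # cumulative impact of expectation mismatches
            for outcome in outcomes:
                mismatch = outcome - expectation           # positive: better than expected
                mood += mismatch                           # mood integrates the mismatches
                expectation += adaptation_rate * mismatch  # the established context adapts slowly
            return mood

        # With an established expectation of 1.0, a run of omitted rewards
        # (outcomes of 0.0) drives the toy "mood" negative.
        print(cumulative_mismatch([1.0, 1.0, 0.0, 0.0, 0.0]))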

  9. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    Sounds in the natural environment form an important class of biologically relevant nonstationary signals. We propose a dynamic spectral measure to characterize the spectral dynamics of such non-stationary sound signals and classify them based on rate of change of spectral dynamics. We categorize sounds with slowly ...
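
    One generic way to quantify the rate of change of a sound's spectrum is the frame-to-frame spectral flux sketched below; this plain-NumPy example with arbitrary frame parameters is only an illustrative stand-in, not the dynamic spectral measure proposed in the paper:

        import numpy as np

        def spectral_flux(signal, frame_len=1024, hop=512):
            """Mean frame-to-frame change of the magnitude spectrum: a crude
            proxy for how fast a sound's spectral content evolves over time."""
            window = np.hanning(frame_len)
            frames = [signal[i:i + frame_len] * window
                      for i in range(0, len(signal) - frame_len, hop)]
            spectra = [np.abs(np.fft.rfft(f)) for f in frames]
            flux = [np.linalg.norm(spectra[i] - spectra[i - 1])
                    for i in range(1, len(spectra))]
            return float(np.mean(flux))

        sr = 16000
        t = np.arange(sr) / sr
        steady = np.sin(2 * np.pi * 440 * t)             # stationary tone
        chirp = np.sin(2 * np.pi * (200 + 400 * t) * t)  # spectrum drifts upward
        print(spectral_flux(steady), "<", spectral_flux(chirp))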

  10. THE INTONATION AND SOUND CHARACTERISTICS OF ADVERTISING PRONUNCIATION STYLE

    Directory of Open Access Journals (Sweden)

    Chernyavskaya Elena Sergeevna

    2014-06-01

    Full Text Available The article aims at describing the intonation and sound characteristics of the advertising phonetic style. On the basis of an acoustic analysis of transcripts of radio advertising tape recordings broadcast on different radio stations, as well as of the processing of a representative set of phrases with special computer programs, the author determines the parameters of superfix means. The article argues that the stylistic parameters of the advertising phonetic style are oriented toward modern orthoepy, and that the distinctive sound of radio advertising is determined by two tendencies – a reduction of stressed-vowel duration in terminal and non-terminal words and an increase of pre-tonic and post-tonic vowel duration in non-terminal words within a phrase. The article also shows that the characteristic rhythmic structure of terminal and non-terminal words in radio advertising is formed by levelling the length of stressed and unstressed sounds. The specificity of the intonational structure of an advertising text consists in the following features: the matching of syntactic and syntagmatic division, which marks out the blocks of semantic models forming the text of radio advertising; the allocation of keywords to separate syntagmas; the design of the informative parts of the advertising text through a symmetric length correlation of minimal speech segments; and the combination of inter-style prosodic elements within the sounding text. The analysis thus leads to the conclusion that the texts of sounded advertising are produced in a special pronunciation style marked by sound duration.

  11. Robust segmentation and retrieval of environmental sounds

    Science.gov (United States)

    Wichern, Gordon

    The proliferation of mobile computing has provided much of the world with the ability to record any sound of interest, or possibly every sound heard in a lifetime. The technology to continuously record the auditory world has applications in surveillance, biological monitoring of non-human animal sounds, and urban planning. Unfortunately, the ability to record anything has led to an audio data deluge, where there are more recordings than time to listen. Thus, access to these archives depends on efficient techniques for segmentation (determining where sound events begin and end), indexing (storing sufficient information with each event to distinguish it from other events), and retrieval (searching for and finding desired events). While many such techniques have been developed for speech and music sounds, the environmental and natural sounds that compose the majority of our aural world are often overlooked. The process of analyzing audio signals typically begins with the process of acoustic feature extraction where a frame of raw audio (e.g., 50 milliseconds) is converted into a feature vector summarizing the audio content. In this dissertation, a dynamic Bayesian network (DBN) is used to monitor changes in acoustic features in order to determine the segmentation of continuously recorded audio signals. Experiments demonstrate effective segmentation performance on test sets of environmental sounds recorded in both indoor and outdoor environments. Once segmented, every sound event is indexed with a probabilistic model, summarizing the evolution of acoustic features over the course of the event. Indexed sound events are then retrieved from the database using different query modalities. Two important query types are sound queries (query-by-example) and semantic queries (query-by-text). By treating each sound event and semantic concept in the database as a node in an undirected graph, a hybrid (content/semantic) network structure is developed. This hybrid network can
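
    A minimal sketch of the frame-based feature extraction step described above, in which a short frame of raw audio (e.g., about 50 milliseconds) is summarized as a feature vector; the specific features chosen here (RMS energy, spectral centroid, zero-crossing rate) are illustrative placeholders rather than the dissertation's actual feature set:

        import numpy as np

        def frame_features(frame, sr):
            """Summarize one short audio frame (e.g., ~50 ms) as a feature vector."""
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
            rms = np.sqrt(np.mean(frame ** 2))                                 # overall energy
            centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)   # spectral "brightness"
            zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2                 # zero crossings per sample
            return np.array([rms, centroid, zcr])

        sr = 16000
        frame = np.sin(2 * np.pi * 440 * np.arange(int(0.05 * sr)) / sr)  # one 50 ms frame
        print(frame_features(frame, sr))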

  12. Ion-sound oscillations in strongly non-isotherm weakly ionized nonuniform hydrogen plasma

    International Nuclear Information System (INIS)

    Leleko, Ya.F.; Stepanov, K.N.

    2010-01-01

    A stationary distribution of the parameters of a strongly non-isothermal, weakly ionized hydrogen plasma is obtained in the hydrodynamic approximation in the quasi-neutrality region of the transient layer between the plasma and a dielectric, taking into account ionization, charge exchange, diffusion, viscosity, and the self-consistent distribution of the field potential. The ion-sound oscillation frequency and the collisional damping decrement as functions of the wave vector in a plasma with the obtained parameters are found in the local approximation.
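
    For orientation only: in a strongly non-isothermal plasma (electron temperature much larger than ion temperature), the textbook long-wavelength ion-sound dispersion is approximately omega = k * c_s with c_s = sqrt(k_B * T_e / m_i). The sketch below evaluates this for hydrogen; the electron temperature and wave number are illustrative values, and the collisional damping computed in the paper is not included:

        import math

        k_B = 1.380649e-23   # Boltzmann constant, J/K
        m_p = 1.6726e-27     # proton (hydrogen ion) mass, kg

        def ion_sound_speed(T_e_eV):
            """Textbook ion-acoustic speed c_s = sqrt(k_B*T_e/m_i) for hydrogen,
            valid for T_e >> T_i; collisional effects are ignored here."""
            T_e_kelvin = T_e_eV * 11604.5   # 1 eV corresponds to ~11604.5 K
            return math.sqrt(k_B * T_e_kelvin / m_p)

        c_s = ion_sound_speed(3.0)   # illustrative electron temperature of 3 eV
        k = 1.0e3                    # illustrative wave number, 1/m
        print(c_s, "m/s ion-sound speed;", k * c_s, "rad/s oscillation frequency")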

  13. The Development of Verbal Relations in Analogical Reasonings.

    Science.gov (United States)

    Sternberg, Robert J.; Nigro, Georgia

    A six-process theory of analogical reasoning was tested by administering verbal analogy items to students in grades 3 through college. The items were classified according to five verbal relations: synonyms, antonyms, functional, linear ordering, and class membership. A new method of componential analysis that does not require precueing was used to…

  14. Using neuroplasticity-based auditory training to improve verbal memory in schizophrenia.

    Science.gov (United States)

    Fisher, Melissa; Holland, Christine; Merzenich, Michael M; Vinogradov, Sophia

    2009-07-01

    Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach.

  15. Aberrant connectivity of areas for decoding degraded speech in patients with auditory verbal hallucinations.

    Science.gov (United States)

    Clos, Mareike; Diederen, Kelly M J; Meijering, Anne Lotte; Sommer, Iris E; Eickhoff, Simon B

    2014-03-01

    Auditory verbal hallucinations (AVH) are a hallmark of psychotic experience. Various mechanisms including misattribution of inner speech and imbalance between bottom-up and top-down factors in auditory perception potentially due to aberrant connectivity between frontal and temporo-parietal areas have been suggested to underlie AVH. Experimental evidence for disturbed connectivity of networks sustaining auditory-verbal processing is, however, sparse. We compared functional resting-state connectivity in 49 psychotic patients with frequent AVH and 49 matched controls. The analysis was seeded from the left middle temporal gyrus (MTG), thalamus, angular gyrus (AG) and inferior frontal gyrus (IFG) as these regions are implicated in extracting meaning from impoverished speech-like sounds. Aberrant connectivity was found for all seeds. Decreased connectivity was observed between the left MTG and its right homotope, between the left AG and the surrounding inferior parietal cortex (IPC) and the left inferior temporal gyrus, between the left thalamus and the right cerebellum, as well as between the left IFG and left IPC, and dorsolateral and ventrolateral prefrontal cortex (DLPFC/VLPFC). Increased connectivity was observed between the left IFG and the supplementary motor area (SMA) and the left insula and between the left thalamus and the left fusiform gyrus/hippocampus. The predisposition to experience AVH might result from decoupling between the speech production system (IFG, insula and SMA) and the self-monitoring system (DLPFC, VLPFC, IPC) leading to misattribution of inner speech. Furthermore, decreased connectivity between nodes involved in speech processing (AG, MTG) and other regions implicated in auditory processing might reflect aberrant top-down influences in AVH.

  16. The influence of (central) auditory processing disorder on the severity of speech-sound disorders in children.

    Science.gov (United States)

    Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede

    2016-02-01

    To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 years to 10 years and 11 months who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
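
    The severity index referred to above is a simple percentage; a minimal sketch of computing it and applying a screening cutoff follows. The scoring rules that distinguish the revised index (PCC-R) from the original PCC, and the actual cutoff value identified in the study, are not reproduced here, so the threshold and counts used below are placeholders:

        def percentage_consonants_correct(correct_consonants, attempted_consonants):
            """Percentage of Consonants Correct: correct / attempted * 100."""
            return 100.0 * correct_consonants / attempted_consonants

        CUTOFF = 75.0   # placeholder threshold, not the value reported by the authors

        score = percentage_consonants_correct(63, 90)
        if score < CUTOFF:
            print(round(score, 1), "- consider referral for (central) auditory processing evaluation")
        else:
            print(round(score, 1), "- no additional referral suggested by this screen")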

  17. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  18. Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task

    Science.gov (United States)

    López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa

    2013-01-01

    In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalogram (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining resting-state post-task activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants who attained higher task accuracy. These findings support a dynamical view of cognitive processes where patterns of ongoing brain activity can facilitate – or interfere – with optimal task performance. PMID:23785436

  19. Verbal behavior: The other reviews

    Science.gov (United States)

    Knapp, Terry J.

    1992-01-01

    The extensive attention devoted to Noam Chomsky's review of Verbal Behavior by B.F. Skinner has resulted in a neglect of more than a dozen other reviews of the work. These are surveyed and found to be positive and congenial in tone, with many of the reviewers advancing their own analysis of speech and language. The dominant criticism of the book was its disregard of central or implicit processes and its lack of experimental data. An examination of the receptive history of Verbal Behavior offers a more balanced historical account than those which rely excessively on Chomsky's commentary. PMID:22477049

  20. The heterogeneity of verbal short-term memory impairment in aphasia.

    Science.gov (United States)

    Majerus, Steve; Attout, Lucie; Artielle, Marie-Amélie; Van der Kaa, Marie-Anne

    2015-10-01

    Verbal short-term memory (STM) impairment represents a frequent and long-lasting deficit in aphasia, and it will prevent patients from recovering fully functional language abilities. The aim of this study was to obtain a more precise understanding of the nature of verbal STM impairment in aphasia, by determining whether verbal STM impairment is merely a consequence of underlying language impairment, as suggested by linguistic accounts of verbal STM, or whether verbal STM impairment reflects an additional, specific deficit. We investigated this question by contrasting item-based STM measures, supposed to depend strongly upon language activation, and order-based STM measures, supposed to reflect the operation of specific, serial order maintenance mechanisms, in a sample of patients with single-word processing deficits at the phonological and/or lexical level. A group-level analysis showed robust impairment for both item and serial order STM aspects in the aphasic group relative to an age-matched control group. An analysis of individual profiles revealed an important heterogeneity of verbal STM profiles, with patients presenting either selective item STM deficits, selective order STM deficits, generalized item and serial order STM deficits or no significant STM impairment. Item but not serial order STM impairment correlated with the severity of phonological impairment. These results disconfirm a strong version of the linguistic account of verbal STM impairment in aphasia, by showing variable impairment to both item and serial order processing aspects of verbal STM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Memory and comprehension deficits in spatial descriptions of children with non-verbal and reading disabilities.

    Science.gov (United States)

    Mammarella, Irene C; Meneghetti, Chiara; Pazzaglia, Francesca; Cornoldi, Cesare

    2014-01-01

    The present study investigated the difficulties encountered by children with non-verbal learning disability (NLD) and reading disability (RD) when processing spatial information derived from descriptions, based on the assumption that both groups should find it more difficult than matched controls, but for different reasons, i.e., due to a memory encoding difficulty in cases of RD and to spatial information comprehension problems in cases of NLD. Spatial descriptions from both survey and route perspectives were presented to 9-12-year-old children divided into three groups: NLD (N = 12); RD (N = 12), and typically developing controls (TD; N = 15); then participants completed a sentence verification task and a memory for locations task. The sentence verification task was presented in two conditions: in one the children could refer to the text while answering the questions (i.e., text present condition), and in the other the text was withdrawn (i.e., text absent condition). Results showed that the RD group benefited from the text present condition, but was impaired to the same extent as the NLD group in the text absent condition, suggesting that the NLD children's difficulty is due mainly to their poor comprehension of spatial descriptions, while the RD children's difficulty is due more to a memory encoding problem. These results are discussed in terms of their implications in the neuropsychological profiles of children with NLD or RD, and the processes involved in spatial descriptions.

  2. Cleft palate children: performance in auditory processing tests

    Directory of Open Access Journals (Sweden)

    Mirela Boscariol

    2009-04-01

    Full Text Available Many children with auditory processing disorder have a high prevalence of otitis media, a middle-ear alteration that is very common in the population with cleft lip and palate. AIM: To check the performance of children with isolated cleft palate (CP) in auditory processing tests. Prospective study. MATERIALS AND METHODS: Twenty children (7 to 11 years) with CP were submitted to tests of sound localization (SL), memory for verbal sounds (MSSV) and non-verbal sounds in sequence (MSSNV), the Auditory Fusion Test-Revised (AFT-R), the Pediatric test of speech intelligibility/synthetic sentences (PSI/SSI), alternating dissyllables (SSW) and dichotic digits (DD). The children's performance on the tests was classified as poor or good. RESULTS: There was no statistical difference between genders or between ears. The mean values obtained were 2.16, 2.42, 4.37, 60.50 ms, 40.71 to 67.33%, 96.25 to 99.38%, 73.55 to 73.88% and 58.38 to 65.47%, respectively, for the MSSNV, MSSV, SL, AFT-R, PSI/SSI with ipsilateral (PSI/SSI-MCI) and contralateral competing message (PSI/SSI-MCC), DD and SSW tests. CONCLUSION: A high percentage of the children showed their worst performance on the AFT-R, DD, SSW and PSI/SSI-MCI tests. The best performances occurred on the tests of sound localization, sequential memory for non-verbal and verbal sounds, and PSI/SSI-MCC.

  3. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    Science.gov (United States)

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant
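
    A rough illustration of estimating the fundamental frequency (the correlate of pitch height) from the autocorrelation peak of a signal; the study itself used the de Cheveigné and Kawahara (2002) pitch algorithm, which is considerably more elaborate, so this NumPy sketch with illustrative parameter values is only a stand-in:

        import numpy as np

        def autocorr_f0(signal, sr, fmin=60.0, fmax=500.0):
            """Crude fundamental-frequency estimate: the lag of the largest
            autocorrelation peak inside the allowed pitch range."""
            signal = signal - np.mean(signal)
            ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
            lag_min, lag_max = int(sr / fmax), int(sr / fmin)
            best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
            return sr / best_lag

        sr = 16000
        t = np.arange(4000) / sr                                           # 250 ms of signal
        harmonic = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
        print(autocorr_f0(harmonic, sr))                                   # close to 220 Hz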

  4. Extraction of auditory features and elicitation of attributes for the assessment of multichannel reproduced sound

    DEFF Research Database (Denmark)

    Choisel, Sylvain; Wickelmaier, Florian Maria

    2006-01-01

    …subjects were asked to directly assign verbal labels to the features when encountering them, and to subsequently rate the sounds on the scales thus obtained. The second method required the subjects to consistently use the perceptually relevant features in triadic comparisons, without having to assign them…

  5. Music to My Eyes: Cross-Modal Interactions in the Perception of Emotions in Musical Performance

    Science.gov (United States)

    Vines, Bradley W.; Krumhansl, Carol L.; Wanderley, Marcelo M.; Dalca, Ioana M.; Levitin, Daniel J.

    2011-01-01

    We investigate non-verbal communication through expressive body movement and musical sound, to reveal higher cognitive processes involved in the integration of emotion from multiple sensory modalities. Participants heard, saw, or both heard and saw recordings of a Stravinsky solo clarinet piece, performed with three distinct expressive styles:…

  6. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    Science.gov (United States)

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients revealed a decrease in quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communicational systems and looks inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. It underlines the importance of the role of gestures in maintaining interpersonal communication.

  7. Unique and shared validity of the "Wechsler logical memory test", the "California verbal learning test", and the "verbal learning and memory test" in patients with epilepsy.

    Science.gov (United States)

    Helmstaedter, Christoph; Wietzke, Jennifer; Lutz, Martin T

    2009-12-01

    This study was set up to evaluate the construct validity of three verbal memory tests in epilepsy patients. Sixty-one consecutively evaluated patients with temporal lobe epilepsy (TLE) or extra-temporal epilepsy (E-TLE) underwent testing with the verbal learning and memory test (VLMT, the German equivalent of the Rey auditory verbal learning test, RAVLT); the California verbal learning test (CVLT); the logical memory and digit span subtests of the Wechsler memory scale, revised (WMS-R); and testing of intelligence, attention, speech and executive functions. Factor analysis of the memory tests resulted in test-specific rather than test-spanning factors. Parameters of the CVLT and WMS-R, and to a much lesser degree of the VLMT, were highly correlated with attention, language function and vocabulary. Delayed recall measures of logical memory and the VLMT differentiated TLE from E-TLE. Learning and memory scores of all three tests differentiated mesial temporal sclerosis from other pathologies. A lateralization of the epilepsy was possible only for a subsample of 15 patients with mesial TLE. Although the three tests provide overlapping indicators for a temporal lobe epilepsy or a mesial pathology, they can hardly be used interchangeably. The tests have different demands on semantic processing and memory organization, and they appear differentially sensitive to performance in non-memory domains. The tests' capability to lateralize appears to be poor. The findings encourage the further discussion of the dependency of memory outcomes on test selection.

  8. Verbal learning and memory in adolescent cannabis users, alcohol users and non-users.

    Science.gov (United States)

    Solowij, Nadia; Jones, Katy A; Rozman, Megan E; Davis, Sasha M; Ciarrochi, Joseph; Heaven, Patrick C L; Lubman, Dan I; Yücel, Murat

    2011-07-01

    Long-term heavy cannabis use can result in memory impairment. Adolescent users may be especially vulnerable to the adverse neurocognitive effects of cannabis. In a cross-sectional and prospective neuropsychological study of 181 adolescents aged 16-20 (mean 18.3 years), we compared performance indices from one of the most widely used measures of learning and memory--the Rey Auditory Verbal Learning Test--between cannabis users (n=52; mean 2.4 years of use, 14 days/month, median abstinence 20.3 h), alcohol users (n=67) and non-user controls (n=62) matched for age, education and premorbid intellectual ability (assessed prospectively), and alcohol consumption for cannabis and alcohol users. Cannabis users performed significantly worse than alcohol users and non-users on all performance indices. They recalled significantly fewer words overall (pmemory performance after controlling for extent of exposure to cannabis. Despite relatively brief exposure, adolescent cannabis users relative to their age-matched counterparts demonstrated similar memory deficits to those reported in adult long-term heavy users. The results indicate that cannabis adversely affects the developing brain and reinforce concerns regarding the impact of early exposure.

  9. Belief attribution despite verbal interference.

    Science.gov (United States)

    Forgeot d'Arc, Baudouin; Ramus, Franck

    2011-05-01

    False-belief (FB) tasks have been widely used to study the ability of individuals to represent the content of their conspecifics' mental states (theory of mind). However, the cognitive processes involved are still poorly understood, and it remains particularly debated whether language and inner speech are necessary for the attribution of beliefs to other agents. We present a completely nonverbal paradigm consisting of silent animated cartoons in five closely related conditions, systematically teasing apart different aspects of scene analysis and allowing the assessment of the attribution of beliefs, goals, and physical causation. In order to test the role of language in belief attribution, we used verbal shadowing as a dual task to inhibit inner speech. Data on 58 healthy adults indicate that verbal interference decreases overall performance, but has no specific effect on belief attribution. Participants remained able to attribute beliefs despite heavy concurrent demands on their verbal abilities. Our results are most consistent with the hypothesis that belief attribution is independent from inner speech.

  10. Effects of Classroom Bilingualism on Task Shifting, Verbal Memory, and Word Learning in Children

    Science.gov (United States)

    Kaushanskaya, Margarita; Gross, Megan; Buac, Milijana

    2014-01-01

    We examined the effects of classroom bilingual experience in children on an array of cognitive skills. Monolingual English-speaking children were compared with children who spoke English as the native language and who had been exposed to Spanish in the context of dual-immersion schooling for an average of two years. The groups were compared on a measure of non-linguistic task-shifting; measures of verbal short-term and working memory; and measures of word-learning. The two groups of children did not differ on measures of non-linguistic task-shifting and verbal short-term memory. However, the classroom-exposure bilingual group outperformed the monolingual group on the measure of verbal working memory and a measure of word-learning. Together, these findings indicate that while exposure to a second language in a classroom setting may not be sufficient to engender changes in cognitive control, it can facilitate verbal memory and verbal learning. PMID:24576079

  11. The present status of the study on the validity of concurrent verbalization

    International Nuclear Information System (INIS)

    Watanabe, Megumi; Takahashi, Hideaki.

    1993-09-01

    We reviewed studies on the validity of the method of verbal reports. The method of verbal reports gives detailed information about human cognitive processes compared with observing a sequence of actions, but it has been criticized regarding its validity as data. Ericsson and Simon proposed a model of verbalization and investigated conditions to keep verbal reports valid. Although many studies cite their model as a basis for adopting the method of verbal reports, verification of the validity of verbal reports remains incomplete because the effects of verbalization are not clear. We point out that it is necessary to take into consideration the kinds of task strategies, the effects of trial repetition, and the effects of task difficulty in order to examine the effects of verbalization precisely. (author)

  12. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

    We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension, it exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  13. Studies of Verbal Problem Solving. 1. Two Performance-Aiding Programs

    Science.gov (United States)

    1977-09-01

    REFERENCES: Craik, F.I.M., & Lockhart, R.S. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 1972. ... the overworked "depth of processing" of Craik and Lockhart (1972), which they defined as the deployment of a flexible processor over any of several ... restrict their studies to simple narrative forms; (2) its potential as a rich source of information about higher-level cognitive processes, and (3

  14. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
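
    A minimal sketch of the excess-sound-level bookkeeping described above (the level at a receiver minus the free-field level at the same distance). The free-field reference here assumes simple spherical spreading from a level specified at 1 m, which is an illustrative choice rather than the article's setup, and the numbers in the example are made up:

        import math

        def free_field_level_db(level_at_1m_db, distance_m):
            """Free-field level assuming spherical spreading (6 dB drop per doubling of distance)."""
            return level_at_1m_db - 20.0 * math.log10(distance_m)

        def excess_level_db(level_db, level_at_1m_db, distance_m):
            """Excess sound level: the (simulated or measured) level minus the
            free-field level at the same distance; per the article, this quantity
            is much less affected by the LBM's exaggerated dissipation."""
            return level_db - free_field_level_db(level_at_1m_db, distance_m)

        # Illustrative numbers: a receiver at 100 m shows 52 dB where spherical
        # spreading from 100 dB at 1 m would give 60 dB, so the excess is -8 dB.
        print(excess_level_db(52.0, level_at_1m_db=100.0, distance_m=100.0))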

  15. Verbal Processing Reaction Times in "Normal" and "Poor" Readers.

    Science.gov (United States)

    Culbertson, Jack; And Others

    After it had been determined that reaction time (RT) was a sensitive measure of hemispheric dominance in a verbal task performed by normal adult readers, the reaction times of three groups of subjects (20 normal reading college students, 12 normal reading third graders and 11 poor reading grade school students) were compared. Ss were exposed to…

  16. Long-term exposure to noise impairs cortical sound processing and attention control.

    Science.gov (United States)

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  17. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia

    Directory of Open Access Journals (Sweden)

    Andreas Widmann

    2012-03-01

    Full Text Available Dyslexic and control first-grade school children were compared in a Symbol-to-Sound matching test based on a nonlinguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude significantly reduced over the left hemisphere, whereas P3a was absent. Moreover, N2b amplitudes significantly correlated with reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds that are congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR) reflecting synchronization of brain activity in normal-reading children, as previously observed in healthy adults. However, dyslexic children showed no GBR. This indicates that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in these children. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups. This desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, remarkable group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information, contributing to the reading impairment in dyslexia.

  18. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  19. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. That is, perceiving the deviants as two separate sound events (the top-down effect) did not change their initial neural representation as one event (indexed by the MMN); the stimulus-driven sound organization remained unaltered.

  20. Enhanced Excitatory Connectivity and Disturbed Sound Processing in the Auditory Brainstem of Fragile X Mice.

    Science.gov (United States)

    Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula

    2017-08-02

    Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS ( Fmr1 KO ), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changing individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X Syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds. This is one reason why individuals with FXS tend to avoid social

  1. Language Experience Affects Grouping of Musical Instrument Sounds

    Science.gov (United States)

    Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry

    2016-01-01

    Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…

  2. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
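
    The analysis-parameterization-synthesis loop described above can be illustrated with a deliberately minimal, single-level Haar wavelet sketch in Python (purely illustrative; the dissertation's actual wavelet models and parameters are not specified here):

```python
import numpy as np

def haar_analysis(x):
    """One level of a Haar wavelet analysis: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad to even length
        x = np.append(x, 0.0)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse of haar_analysis: interleave the reconstructed samples."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# Hypothetical "parameterization": keep the approximation band and rescale
# the detail band to alter the stochastic (non-pitched) character of a sound.
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.default_rng(0).standard_normal(fs)
a, d = haar_analysis(signal)
resynth = haar_synthesis(a, 0.5 * d)     # dampen the stochastic component
```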

  3. Verbal communication skills in typical language development: a case series.

    Science.gov (United States)

    Abe, Camila Mayumi; Bretanha, Andreza Carolina; Bozza, Amanda; Ferraro, Gyovanna Junya Klinke; Lopes-Herrera, Simone Aparecida

    2013-01-01

    The aim of the current study was to investigate verbal communication skills in children with typical language development and ages between 6 and 8 years. Participants were 10 children of both genders in this age range without language alterations. A 30-minute video of each child's interaction with an adult (father and/or mother) was recorded, fully transcribed, and analyzed by two trained researchers in order to determine reliability. The recordings were analyzed according to a protocol that categorizes verbal communicative abilities, including dialogic, regulatory, narrative-discursive, and non-interactive skills. The frequency of use of each category of verbal communicative ability was analyzed (in percentage) for each subject. All subjects used more dialogical and regulatory skills, followed by narrative-discursive and non-interactive skills. This suggests that children in this age range are committed to continue dialog, which shows that children with typical language development have more dialogic interactions during spontaneous interactions with a familiar adult.

  4. Verbal fluency in idiopathic Parkinson's disease

    International Nuclear Information System (INIS)

    Thut, G.; Antonini, A.; Roelcke, U.; Missimer, J.; Maguire, R.P.; Leenders, K.L.; Regard, M.

    1997-01-01

    In the present study, the relationship between resting metabolism and verbal fluency, a correlate of frontal lobe cognition, was examined in 33 PD patients. We aimed to determine brain structures involved in frontal lobe cognitive impairment with special emphasis on differences between demented and non-demented PD patients. (author) 3 figs., 2 refs

  5. The role of high-level processes for oscillatory phase entrainment to speech sound

    Directory of Open Access Journals (Sweden)

    Benedikt Zoefel

    2015-12-01

    Full Text Available Constantly bombarded with input, the brain needs to filter out relevant information while ignoring the irrelevant rest. A powerful tool may be represented by neural oscillations, which entrain their high-excitability phase to important input while their low-excitability phase attenuates irrelevant information. Indeed, the alignment between brain oscillations and speech improves intelligibility and helps dissociate speakers during a cocktail party. Although well investigated, the contribution of low- and high-level processes to phase entrainment to speech sound has only recently begun to be understood. Here, we review those findings and concentrate on three main results: (1) Phase entrainment to speech sound is modulated by attention or predictions, likely supported by top-down signals and indicating higher-level processes involved in the brain's adjustment to speech. (2) As phase entrainment to speech can be observed without systematic fluctuations in sound amplitude or spectral content, it does not only reflect a passive steady-state ringing of the cochlea, but entails a higher-level process. (3) The role of intelligibility for phase entrainment is debated. Recent results suggest that intelligibility modulates the behavioral consequences of entrainment, rather than directly affecting the strength of entrainment in auditory regions. We conclude that phase entrainment to speech reflects a sophisticated mechanism: Several high-level processes interact to optimally align neural oscillations with predicted events of high relevance, even when they are hidden in a continuous stream of background noise.

  6. Processing Complex Sounds Passing through the Rostral Brainstem: The New Early Filter Model

    Science.gov (United States)

    Marsh, John E.; Campbell, Tom A.

    2016-01-01

    The rostral brainstem receives both “bottom-up” input from the ascending auditory system and “top-down” descending corticofugal connections. Speech information passing through the inferior colliculus of elderly listeners reflects the periodicity envelope of a speech syllable. This information arguably also reflects a composite of temporal-fine-structure (TFS) information from the higher frequency vowel harmonics of that repeated syllable. The amplitude of those higher frequency harmonics, bearing even higher frequency TFS information, correlates positively with the word recognition ability of elderly listeners under reverberatory conditions. Also relevant is that working memory capacity (WMC), which is subject to age-related decline, constrains the processing of sounds at the level of the brainstem. Turning to the effects of a visually presented sensory or memory load on auditory processes, there is a load-dependent reduction of that processing, as manifest in the auditory brainstem responses (ABR) evoked by to-be-ignored clicks. Wave V decreases in amplitude with increases in the visually presented memory load. A visually presented sensory load also produces a load-dependent reduction of a slightly different sort: The sensory load of visually presented information limits the disruptive effects of background sound upon working memory performance. A new early filter model is thus advanced whereby systems within the frontal lobe (affected by sensory or memory load) cholinergically influence top-down corticofugal connections. Those corticofugal connections constrain the processing of complex sounds such as speech at the level of the brainstem. Selective attention thereby limits the distracting effects of background sound entering the higher auditory system via the inferior colliculus. Processing TFS in the brainstem relates to perception of speech under adverse conditions. Attentional selectivity is crucial when the signal heard is degraded or masked: e

  7. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  8. Verbal and nonverbal communication of a blind mother with limited dexterity during infant feeding

    Directory of Open Access Journals (Sweden)

    Giselly Oseni Laurentino Barbosa

    2011-01-01

    OBJECTIVE: To analyze the verbal and non-verbal communication of a blind mother with limited dexterity with her son and a nurse during infant feeding. METHODS: This exploratory, descriptive case study used a quantitative approach and was completed in 2009. The interviews were recorded, videotaped, and analyzed by three evaluators. RESULTS: The results of verbal communication demonstrated the predominance of the mother as a recipient and the use of the emotional function in the verbalizations with the child, and the non-verbal communication showed the prevalence of intimate distance between mother/son, personal space between mother/nurse, and the sitting posture. There was little face-to-face contact, and physical contact with the child stood out. CONCLUSION: The mother suffered no losses in the establishment of the verbal communication process. The distance facilitated maternal interaction with the baby and with the professional.

  9. Sound topology, duality, coherence and wave-mixing an introduction to the emerging new science of sound

    CERN Document Server

    Deymier, Pierre

    2017-01-01

    This book offers an essential introduction to the notions of sound wave topology, duality, coherence and wave-mixing, which constitute the emerging new science of sound. It includes general principles and specific examples that illuminate new non-conventional forms of sound (sound topology), unconventional quantum-like behavior of phonons (duality), radical linear and nonlinear phenomena associated with loss and its control (coherence), and exquisite effects that emerge from the interaction of sound with other physical and biological waves (wave mixing).  The book provides the reader with the foundations needed to master these complex notions through simple yet meaningful examples. General principles for unraveling and describing the topology of acoustic wave functions in the space of their eigenvalues are presented. These principles are then applied to uncover intrinsic and extrinsic approaches to achieving non-conventional topologies by breaking the time-reversal symmetry of acoustic waves. Symmetry brea...

  10. Verbal Working Memory Is Related to the Acquisition of Cross-Linguistic Phonological Regularities.

    Science.gov (United States)

    Bosma, Evelyn; Heeringa, Wilbert; Hoekstra, Eric; Versloot, Arjen; Blom, Elma

    2017-01-01

    Closely related languages share cross-linguistic phonological regularities, such as Frisian -âld [ͻ:t] and Dutch -oud [ʱut], as in the cognate pairs kâld [kͻ:t] - koud [kʱut] 'cold' and wâld [wͻ:t] - woud [wʱut] 'forest'. Within Bybee's (1995, 2001, 2008, 2010) network model, these regularities are, just like grammatical rules within a language, generalizations that emerge from schemas of phonologically and semantically related words. Previous research has shown that verbal working memory is related to the acquisition of grammar, but not vocabulary. This suggests that verbal working memory supports the acquisition of linguistic regularities. In order to test this hypothesis we investigated whether verbal working memory is also related to the acquisition of cross-linguistic phonological regularities. For three consecutive years, 5- to 8-year-old Frisian-Dutch bilingual children ( n = 120) were tested annually on verbal working memory and a Frisian receptive vocabulary task that comprised four cognate categories: (1) identical cognates, (2) non-identical cognates that either do or (3) do not exhibit a phonological regularity between Frisian and Dutch, and (4) non-cognates. The results showed that verbal working memory had a significantly stronger effect on cognate category (2) than on the other three cognate categories. This suggests that verbal working memory is related to the acquisition of cross-linguistic phonological regularities. More generally, it confirms the hypothesis that verbal working memory plays a role in the acquisition of linguistic regularities.

  11. Propofol disrupts functional interactions between sensory and high-order processing of auditory verbal memory.

    Science.gov (United States)

    Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G

    2012-10-01

    Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process of low-order neural representation of sensory stimuli becoming integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information and integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.

  12. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and a longer-term pattern. The local pattern was defined by a regularly repeating pure tone occasionally interrupted by a rare deviating tone (p = 0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory evoked potentials revealed long-lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely
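
    A rough sketch of how a two-timescale sequence of this kind might be generated is shown below (illustrative only; apart from the deviant probability p = 0.125 and the example tone frequencies, parameters such as block length are assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_block(n_tones, common, rare, p_rare=0.125):
    """Local pattern: a stream of 'common' tones with rare deviants."""
    return np.where(rng.random(n_tones) < p_rare, rare, common)

def make_sequence(n_blocks, tones_per_block, tone_a, tone_b):
    """Longer-term pattern: the two tones swap common/rare roles each block."""
    blocks = []
    for i in range(n_blocks):
        common, rare = (tone_a, tone_b) if i % 2 == 0 else (tone_b, tone_a)
        blocks.append(make_block(tones_per_block, common, rare))
    return np.concatenate(blocks)

# Hypothetical block structure; tone values are the frequencies (Hz) from the
# frequency-difference condition described in the abstract.
sequence = make_sequence(n_blocks=8, tones_per_block=480,
                         tone_a=1000, tone_b=1500)
```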

  13. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, JoseJ; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
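
    The following highly simplified Python sketch illustrates the general idea of sparseness-based binaural synthesis in the time-frequency domain. It is not the authors' algorithm: the per-frame azimuth trajectory, interaural parameters, and input signal are all invented for illustration, whereas a real implementation would derive the spatial estimates per time-frequency bin from the tetrahedral array analysis described in the paper.

```python
import numpy as np
from scipy.signal import stft, istft

# Hypothetical mono source signal (a real system would use array recordings).
fs = 16000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t)

f, frames, Z = stft(mono, fs=fs, nperseg=512)

# Assumed per-frame azimuth trajectory (radians, 0 = front, +pi/2 = right).
azimuth = np.linspace(-np.pi / 2, np.pi / 2, len(frames))

# Sparseness assumption: each time-frequency bin is dominated by one source,
# so a single spatial estimate per bin/frame is enough to place it.
itd = 6.6e-4 * np.sin(azimuth)              # interaural time difference (s)
ild = 1.0 + 0.5 * np.sin(azimuth)           # crude interaural level factor

phase = np.exp(-1j * np.pi * f[:, None] * itd[None, :])   # delay of itd/2
Z_left = Z * phase * (2.0 - ild)[None, :]   # far ear: delayed, quieter
Z_right = Z / phase * ild[None, :]          # near ear: advanced, louder

_, left = istft(Z_left, fs=fs, nperseg=512)
_, right = istft(Z_right, fs=fs, nperseg=512)
binaural = np.stack([left, right])          # 2-channel headphone signal
```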

  14. Annual Report: 2010-2011 Storm Season Sampling For NON-DRY DOCK STORMWATER MONITORING FOR PUGET SOUND NAVAL SHIPYARD, BREMERTON, WA

    Energy Technology Data Exchange (ETDEWEB)

    Brandenberger, Jill M.; Metallo, David; Johnston, Robert K.; Gebhardt, Christine; Hsu, Larry

    2012-09-01

    This interim report summarizes the stormwater monitoring conducted for non-dry dock outfalls in both the confined industrial area and the residential areas of Naval Base Kitsap within the Puget Sound Naval Shipyard (referred to as the Shipyard). This includes the collection, analyses, and descriptive statistics for stormwater sampling conducted from November 2010 through April 2011. Seven stormwater basins within the Shipyard were sampled during at least three storm events to characterize non-dry dock stormwater discharges at selected stormwater drains located within the facility. This serves as the Phase I component of the project and Phase II is planned for the 2011-2012 storm season. These data will assist the Navy, USEPA, Ecology and other stakeholders in understanding the nature and condition of stormwater discharges from the Shipyard and inform the permitting process for new outfall discharges. The data from Phase I was compiled with current stormwater data available from the Shipyard, Sinclair/Dyes Inlet watershed, and Puget Sound in order to support technical investigations for the Draft NPDES permit. The permit would require storm event sampling at selected stormwater drains located within the Shipyard. However, the data must be considered on multiple scales to truly understand potential impairments to beneficial uses within Sinclair and Dyes Inlets.

  15. A new signal development process and sound system for diverting fish from water intakes

    International Nuclear Information System (INIS)

    Klinet, D.A.; Loeffelman, P.H.; van Hassel, J.H.

    1992-01-01

    This paper reports that American Electric Power Service Corporation has explored the feasibility of using a patented signal development process and underwater sound system to divert fish away from water intake areas. The effect of water intakes on fish is being closely scrutinized as hydropower projects are re-licensed. The overall goal of this four-year research project was to develop an underwater guidance system which is biologically effective, reliable and cost-effective compared to other proposed methods of diversion, such as physical screens. Because different fish species have various listening ranges, it was essential to the success of this experiment that the sound system have a great amount of flexibility. Assuming a fish's sounds are heard by the same kind of fish, it was necessary to develop a procedure and acquire instrumentation to properly analyze the sounds that the target fish species create to communicate and any artificial signals being generated for diversion

  16. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    Directory of Open Access Journals (Sweden)

    Olympia Colizoli

    2013-10-01

    Full Text Available Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of nonlinguistic sounds induce the experience of taste, smell and physical sensations for SC. SC's lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC's performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC's synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (tasty vs. tasteless words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC's synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC's synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity.

  17. An auditory analog of the picture superiority effect.

    Science.gov (United States)

    Crutcher, Robert J; Beer, Jenay M

    2011-01-01

    Previous research has found that pictures (e.g., a picture of an elephant) are remembered better than words (e.g., the word "elephant"), an empirical finding called the picture superiority effect (Paivio & Csapo. Cognitive Psychology 5(2):176-206, 1973). However, very little research has investigated such memory differences for other types of sensory stimuli (e.g. sounds or odors) and their verbal labels. Four experiments compared recall of environmental sounds (e.g., ringing) and spoken verbal labels of those sounds (e.g., "ringing"). In contrast to earlier studies that have shown no difference in recall of sounds and spoken verbal labels (Philipchalk & Rowe. Journal of Experimental Psychology 91(2):341-343, 1971; Paivio, Philipchalk, & Rowe. Memory & Cognition 3(6):586-590, 1975), the experiments reported here yielded clear evidence for an auditory analog of the picture superiority effect. Experiments 1 and 2 showed that sounds were recalled better than the verbal labels of those sounds. Experiment 2 also showed that verbal labels are recalled as well as sounds when participants imagine the sound that the word labels. Experiments 3 and 4 extended these findings to incidental-processing task paradigms and showed that the advantage of sounds over words is enhanced when participants are induced to label the sounds.

  18. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli.

    Science.gov (United States)

    Balfour, P B; Hawkins, D B

    1992-10-01

    Fifteen adults with bilaterally symmetrical mild and/or moderate sensorineural hearing loss completed a paired-comparison task designed to elicit sound quality preference judgments for monaural/binaural hearing aid processed signals. Three stimuli (speech-in-quiet, speech-in-noise, and music) were recorded separately in three listening environments (audiometric test booth, living room, and a music/lecture hall) through hearing aids placed on a Knowles Electronics Manikin for Acoustics Research. Judgments were made on eight separate sound quality dimensions (brightness, clarity, fullness, loudness, nearness, overall impression, smoothness, and spaciousness) for each of the three stimuli in three listening environments. Results revealed a distinct binaural preference for all eight sound quality dimensions independent of listening environment. Binaural preferences were strongest for overall impression, fullness, and spaciousness. Stimulus type effect was significant only for fullness and spaciousness, where binaural preferences were strongest for speech-in-quiet. After binaural preference data were obtained, subjects ranked each sound quality dimension with respect to its importance for binaural listening relative to monaural. Clarity was ranked highest in importance and brightness was ranked least important. The key to demonstration of improved binaural hearing aid sound quality may be the use of a paired-comparison format.

  19. Analysis of speech sounds is left-hemisphere predominant at 100-150ms after sound onset.

    Science.gov (United States)

    Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R

    1999-04-06

    Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.

  20. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    Science.gov (United States)

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory was related to cortical thickness within the right-temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Deficits in Letter-Speech Sound Associations but Intact Visual Conflict Processing in Dyslexia: Results from a Novel ERP-Paradigm

    OpenAIRE

    Bakos, Sarolta; Landerl, Karin; Bartling, Jürgen; Schulte-Körne, Gerd; Moll, Kristina

    2017-01-01

    The reading and spelling deficits characteristic of developmental dyslexia (dyslexia) have been related to problems in phonological processing and in learning associations between letters and speech-sounds. Even when children with dyslexia have learned the letters and their corresponding speech sounds, letter-speech sound associations might still be less automatized compared to children with age-adequate literacy skills. In order to examine automaticity in letter-speech sound associations and...

  2. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  3. Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory.

    Science.gov (United States)

    Buchsbaum, Bradley R; Olsen, Rosanna K; Koch, Paul; Berman, Karen Faith

    2005-11-23

    To hear a sequence of words and repeat them requires sensory-motor processing and something more: temporary storage. We investigated neural mechanisms of verbal memory by using fMRI and a task designed to tease apart perceptually based ("echoic") memory from phonological-articulatory memory. Sets of two- or three-word pairs were presented bimodally, followed by a cue indicating from which modality (auditory or visual) items were to be retrieved and rehearsed over a delay. Although delay-period activation in the planum temporale (PT) was insensitive to the source modality and showed sustained delay-period activity, the superior temporal gyrus (STG) activated more vigorously when the retrieved items had arrived in the auditory modality and showed transient delay-period activity. Functional connectivity analysis revealed two topographically distinct fronto-temporal circuits, with STG co-activating more strongly with ventrolateral prefrontal cortex and PT co-activating more strongly with dorsolateral prefrontal cortex. These findings argue for separate contributions of ventral and dorsal auditory streams in verbal working memory.

  4. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    Science.gov (United States)

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Rethinking a Negative Event: The Affective Impact of Ruminative versus Imagery-Based Processing of Aversive Autobiographical Memories

    NARCIS (Netherlands)

    Slofstra, Christien; Eisma, Maarten C; Holmes, Emily A; Bockting, Claudi L H; Nauta, Maaike H

    2017-01-01

    INTRODUCTION: Ruminative (abstract verbal) processing during recall of aversive autobiographical memories may serve to dampen their short-term affective impact. Experimental studies indeed demonstrate that verbal processing of non-autobiographical material and positive autobiographical memories

  7. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which 'acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the preferred frame of their universe in the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
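
    For reference, the Lorentz factor invoked above takes the standard special-relativistic form, with the speed of sound c playing the role of the speed of light (standard formulas, not quoted from the article):

```latex
% Standard Lorentz factor with the speed of sound c in place of the speed of light
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\Delta t = \gamma \, \Delta t_{0}, \qquad
L = \frac{L_{0}}{\gamma}
```

    Here v is the speed of the moving sound-clock chain through the medium, Δt_0 is a clock's proper tick interval, and L_0 is its rest length.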

  8. Verbal overshadowing of visual memories: some things are better left unsaid.

    Science.gov (United States)

    Schooler, J W; Engstler-Schooler, T Y

    1990-01-01

    It is widely believed that verbal processing generally improves memory performance. However, in a series of six experiments, verbalizing the appearance of previously seen visual stimuli impaired subsequent recognition performance. In Experiment 1, subjects viewed a videotape including a salient individual. Later, some subjects described the individual's face. Subjects who verbalized the face performed less well on a subsequent recognition test than control subjects who did not engage in memory verbalization. The results of Experiment 2 replicated those of Experiment 1 and further clarified the effect of memory verbalization by demonstrating that visualization does not impair face recognition. In Experiments 3 and 4 we explored the hypothesis that memory verbalization impairs memory for stimuli that are difficult to put into words. In Experiment 3 memory impairment followed the verbalization of a different visual stimulus: color. In Experiment 4 marginal memory improvement followed the verbalization of a verbal stimulus: a brief spoken statement. In Experiments 5 and 6 the source of verbally induced memory impairment was explored. The results of Experiment 5 suggested that the impairment does not reflect a temporary verbal set, but rather indicates relatively long-lasting memory interference. Finally, Experiment 6 demonstrated that limiting subjects' time to make recognition decisions alleviates the impairment, suggesting that memory verbalization overshadows but does not eradicate the original visual memory. This collection of results is consistent with a recoding interference hypothesis: verbalizing a visual memory may produce a verbally biased memory representation that can interfere with the application of the original visual memory.

  9. Verbal working memory deficits predict levels of auditory hallucination in first-episode psychosis.

    Science.gov (United States)

    Gisselgård, Jens; Anda, Liss Gøril; Brønnick, Kolbjørn; Langeveld, Johannes; Ten Velden Hegelstad, Wenche; Joa, Inge; Johannessen, Jan Olav; Larsen, Tor Ketil

    2014-03-01

    Auditory verbal hallucinations are a characteristic symptom in schizophrenia. Recent causal models of auditory verbal hallucinations propose that cognitive mechanisms involving verbal working memory are involved in the genesis of auditory verbal hallucinations. Thus, in the present study, we investigate the hypothesis that verbal working memory is a specific factor behind auditory verbal hallucinations. In the present study, we investigated the association between verbal working memory manipulation (Backward Digit Span and Letter-Number Sequencing) and auditory verbal hallucinations in a population study (N=52) of first episode psychosis. The degree of auditory verbal hallucination as reported in the P3-subscale of the PANSS interview was included as dependent variable using sequential multiple regression, while controlling for age, psychosis symptom severity, executive cognitive functions and simple auditory working memory span. Multiple sequential regression analyses revealed verbal working memory manipulation to be the only significant predictor of verbal hallucination severity. Consistent with cognitive data from auditory verbal hallucinations in healthy individuals, the present results suggest a specific association between auditory verbal hallucinations, and cognitive processes involving the manipulation of phonological representations during a verbal working memory task. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Stage effects of negative emotion on spatial and verbal working memory

    Directory of Open Access Journals (Sweden)

    Chan Raymond CK

    2010-05-01

    Full Text Available Abstract Background The effects of negative emotion on different processing periods in spatial and verbal working memory (WM) and the possible brain mechanism of the interaction between negative emotion and WM were explored using a high-time-resolution event-related potential (ERP) technique and a time-locked delayed matching-to-sample task (DMST). Results Early P3b and late P3b were reduced in the negative emotion condition for both the spatial and verbal tasks at encoding. At retention, the sustained negative slow wave (NSW) showed a significant interaction between emotional state and task type. Spatial trials in the negative emotion condition elicited a more negative deflection than they did in the neutral emotion condition. However, no such effect was observed for the verbal tasks. At retrieval, early P3b and late P3b were markedly more attenuated in the negative emotion condition than in the neutral emotion condition for both the spatial and verbal tasks. Conclusions The results indicate that the differential effects of negative emotion on spatial and verbal WM mainly take place during information maintenance processing, which implies that there is a systematic association between specific affects (e.g., negative emotion) and certain cognitive processes (e.g., spatial retention).

  11. Verbal Working Memory in Children With Cochlear Implants

    Science.gov (United States)

    Caldwell-Tarr, Amanda; Low, Keri E.; Lowenstein, Joanna H.

    2017-01-01

    Purpose Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method A dual-component model of working memory was adopted, and a serial recall task measured storage and processing. Potential predictor variables were phonological awareness, vocabulary knowledge, nonverbal IQ, and several treatment variables. Potential dependent functions were literacy, expressive language, and speech-in-noise recognition. Results Children with cochlear implants showed deficits in storage and processing, similar in size to those at second grade. Predictors of verbal working memory differed across groups: Phonological awareness explained the most variance in children with normal hearing; vocabulary explained the most variance in children with cochlear implants. Treatment variables explained little of the variance. Where potentially dependent functions were concerned, verbal working memory accounted for little variance once the variance explained by other predictors was removed. Conclusions The verbal working memory deficits of children with cochlear implants arise due to signal degradation, which limits their abilities to acquire phonological awareness. That hinders their abilities to store items using a phonological code. PMID:29075747

  12. Verbal Working Memory Is Related to the Acquisition of Cross-Linguistic Phonological Regularities

    Directory of Open Access Journals (Sweden)

    Evelyn Bosma

    2017-09-01

    Full Text Available Closely related languages share cross-linguistic phonological regularities, such as Frisian -âld [ͻ:t] and Dutch -oud [ʱut], as in the cognate pairs kâld [kͻ:t] – koud [kʱut] 'cold' and wâld [wͻ:t] – woud [wʱut] 'forest'. Within Bybee's (1995, 2001, 2008, 2010) network model, these regularities are, just like grammatical rules within a language, generalizations that emerge from schemas of phonologically and semantically related words. Previous research has shown that verbal working memory is related to the acquisition of grammar, but not vocabulary. This suggests that verbal working memory supports the acquisition of linguistic regularities. In order to test this hypothesis we investigated whether verbal working memory is also related to the acquisition of cross-linguistic phonological regularities. For three consecutive years, 5- to 8-year-old Frisian-Dutch bilingual children (n = 120) were tested annually on verbal working memory and a Frisian receptive vocabulary task that comprised four cognate categories: (1) identical cognates, (2) non-identical cognates that either do or (3) do not exhibit a phonological regularity between Frisian and Dutch, and (4) non-cognates. The results showed that verbal working memory had a significantly stronger effect on cognate category (2) than on the other three cognate categories. This suggests that verbal working memory is related to the acquisition of cross-linguistic phonological regularities. More generally, it confirms the hypothesis that verbal working memory plays a role in the acquisition of linguistic regularities.

  13. Long-Term Impairment of Sound Processing in the Auditory Midbrain by Daily Short-Term Exposure to Moderate Noise

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2017-01-01

    Full Text Available Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for the rate-level function. However, these observed changes were greater in neurons with a best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week recovery period in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.
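
    To make the reported rate-level metrics concrete, the sketch below computes an illustrative dynamic range, slope, and threshold from a rate-level function (the article's exact criteria are not given here; the 10%-90% convention and the synthetic data are assumptions):

```python
import numpy as np

def rate_level_metrics(levels_db, rates, frac_low=0.1, frac_high=0.9):
    """Illustrative metrics for a rate-level function (spike rate vs. dB SPL).

    Dynamic range is taken here as the level span between 10% and 90% of the
    maximum driven rate; slope is the rate change across that span.
    """
    levels_db = np.asarray(levels_db, dtype=float)
    rates = np.asarray(rates, dtype=float)
    r_max = rates.max()
    lo_level = np.interp(frac_low * r_max, rates, levels_db)
    hi_level = np.interp(frac_high * r_max, rates, levels_db)
    dynamic_range = hi_level - lo_level
    slope = (frac_high - frac_low) * r_max / dynamic_range
    threshold = levels_db[np.argmax(rates > frac_low * r_max)]
    return {"threshold_db": threshold,
            "dynamic_range_db": dynamic_range,
            "slope_spk_per_db": slope}

# Hypothetical monotonic (sigmoid) rate-level function
levels = np.arange(0, 100, 10)                     # dB SPL
rates = 80 / (1 + np.exp(-(levels - 50) / 8))      # spikes/s
print(rate_level_metrics(levels, rates))
```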

  14. Verbal fluency and sociodemographic variables in the aging process: an epidemiological study

    Directory of Open Access Journals (Sweden)

    Thaís Bento Lima da Silva

    2011-01-01

    Full Text Available Verbal fluency is a marker of executive functions which involves the ability of searching for and retrieving information, organizational skills, self-regulation, and working memory. The objective of this paper was to identify differences in verbal fluency (number of animals, categories, clusters, and category switching) associated with gender, age, education, and income. Three hundred eighty-three elderly (60 or older) participated in an epidemiological cross-sectional study. Participants answered sociodemographic questions and completed the verbal fluency animal category test. Verbal fluency variables were influenced by gender, age, and education. Higher performance was reported for men and for participants with lower age and higher education. Results confirm that performance in verbal fluency must be interpreted in the light of sociodemographic information.

  15. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  16. The verbal-visual discourse in Brazilian Sign Language – Libras

    Directory of Open Access Journals (Sweden)

    Tanya Felipe

    2013-11-01

    Full Text Available This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language employing a visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic, and discursive levels of a particular language. These markers are described taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse and pointing to the need for research on this discourse also in the linguistic analysis of oral-auditory modality languages, with Translinguistics as an area of knowledge that analyzes discourse focusing upon the verbal-visual markers used by subjects in their utterance acts.

  17. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-08-17

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.

  18. Acoustic processing of temporally modulated sounds in infants: evidence from a combined near-infrared spectroscopy and EEG study

    Directory of Open Access Journals (Sweden)

    Silke eTelkemeyer

    2011-04-01

    Full Text Available Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6 and 3 month old infants using simultaneous recordings of electroencephalography (EEG and near-infrared spectroscopy (NIRS. NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations indicating a sensitivity for temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6 but not 3 month old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular based imaging and electrophysiological responses have great potential in the research on language

  19. Membangun Koneksi Matematis Siswa dalam Pemecahan Masalah Verbal

    Directory of Open Access Journals (Sweden)

    Nurfaidah Tasni

    2017-07-01

    Full Text Available The current research aims to describe the process of developing mathematical connections in solving verbal (word) mathematics problems. In the solving process, the mathematical connections developed by the subjects are identified. The problems were developed according to the characteristics of mathematical connections described by NCTM, i.e. connections within mathematics topics, connections with other fields, and connections with daily life. Data were collected through students' work and semi-structured interviews with two subjects selected through purposive sampling. This research reveals seven kinds of mathematical connections developed by the subjects in solving verbal mathematics problems: connection in understanding, if-then connection, equal representation connection, hierarchy connection, proportion connection through general form, procedure connection, and justification and representation connection. Keywords: Mathematical Connection; Problem Solving; Verbal Problems

  20. Interaction Of Verbal Communication Of The Teacher From The Philippines In The Teaching Activity For Nursery II Students At The Singapore International School Medan

    Directory of Open Access Journals (Sweden)

    Tetti Nauli Panjaitan

    2017-07-01

    Full Text Available The title of the research was Interaction of Verbal Communication of the Teacher from the Philippines in the Teaching Activity for Nursery II Students at the Singapore International School Medan. The objective of the research was to examine the verbal communication interaction in the teaching activity of the teacher from the Philippines in the Nursery II class at the Singapore International School Medan. The school is one of the international schools with foreign teachers and uses English as the teaching medium in the teaching-learning process. The teacher in this class comes from the Philippines and the students are 3 to 4 years old. The results showed that the teaching activity in the Nursery II class was a two-way interaction between teacher and students: the teacher relied mainly on verbal communication, with non-verbal communication used as a supporting method. The learning process was carried out through singing, storytelling, and games, and with teaching tools such as television, pictures, and toys, in order to make it easier for the students to understand what the teacher conveyed.

  1. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  2. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse.

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. None. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Uranium-series radionuclides as tracers of geochemical processes in Long Island Sound

    International Nuclear Information System (INIS)

    Benninger, L.K.

    1976-05-01

    An estuary can be visualized as a membrane between land and the deep ocean, and the understanding of the estuarine processes which determine the permeability of this membrane to terrigenous materials is necessary for the estimation of fluxes of these materials to the oceans. Natural radionuclides are useful probes into estuarine geochemistry because of the time-dependent relationships among them and because, as analogs of stable elements, they are much less subject to contamination during sampling and analysis. In this study the flux of heavy metals through Long Island Sound is considered in light of the material balance for excess ²¹⁰Pb, and analyses of concurrent seston and water samples from central Long Island Sound are used to probe the internal workings of the estuary.

  5. Students' Learning of a Generalized Theory of Sound Transmission from a Teaching-Learning Sequence about Sound, Hearing and Health

    Science.gov (United States)

    West, Eva; Wallin, Anita

    2013-04-01

    Learning abstract concepts such as sound often involves an ontological shift because to conceptualize sound transmission as a process of motion demands abandoning sound transmission as a transfer of matter. Thus, for students to be able to grasp and use a generalized model of sound transmission poses great challenges for them. This study involved 199 students aged 10-14. Their views about sound transmission were investigated before and after teaching by comparing their written answers about sound transfer in different media. The teaching was built on a research-based teaching-learning sequence (TLS), which was developed within a framework of design research. The analysis involved interpreting students' underlying theories of sound transmission, including the different conceptual categories that were found in their answers. The results indicated a shift in students' understandings from the use of a theory of matter before the intervention to embracing a theory of process afterwards. The described pattern was found in all groups of students irrespective of age. Thus, teaching about sound and sound transmission is fruitful already at the ages of 10-11. However, the older the students, the more advanced is their understanding of the process of motion. In conclusion, the use of a TLS about sound, hearing and auditory health promotes students' conceptualization of sound transmission as a process in all grades. The results also imply some crucial points in teaching and learning about the scientific content of sound.

  6. The brain correlates of the effects of monetary and verbal rewards on intrinsic motivation.

    Science.gov (United States)

    Albrecht, Konstanze; Abeler, Johannes; Weber, Bernd; Falk, Armin

    2014-01-01

    Apart from everyday duties, such as doing the laundry or cleaning the house, there are tasks we do for pleasure and enjoyment. We do such tasks, like solving crossword puzzles or reading novels, without any external pressure or force; instead, we are intrinsically motivated: we do the tasks because we enjoy doing them. Previous studies suggest that external rewards, i.e., rewards from the outside, affect the intrinsic motivation to engage in a task: while performance-based monetary rewards are perceived as controlling and induce a business-contract framing, verbal rewards praising one's competence can enhance the perceived self-determination. Accordingly, the former have been shown to decrease intrinsic motivation, whereas the latter have been shown to increase intrinsic motivation. The present study investigated the neural processes underlying the effects of monetary and verbal rewards on intrinsic motivation in a group of 64 subjects applying functional magnetic resonance imaging (fMRI). We found that, when participants received positive performance feedback, activation in the anterior striatum and midbrain was affected by the nature of the reward; compared to a non-rewarded control group, activation was higher while monetary rewards were administered. However, we did not find a decrease in activation after reward withdrawal. In contrast, we found an increase in activation for verbal rewards: after verbal rewards had been withdrawn, participants showed a higher activation in the aforementioned brain areas when they received success compared to failure feedback. We further found that, while participants worked on the task, activation in the lateral prefrontal cortex was enhanced after the verbal rewards were administered and withdrawn.

  7. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, and the presence or absence of abnormality is judged from the magnitude of these synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds generated asynchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often caused by forces accompanying the rotation, and such abnormal sounds can therefore be detected by extracting only the rotation-synchronized components. Since the components of the normal acoustic sounds currently being generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, abnormal-sound detection sensitivity is improved. Further, since the occurrence of abnormal sound is discriminated from the actually detected sounds, other frequency components which are predicted but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
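    The principle described above, keeping only the spectral components that fall at harmonics of the pump's rotation frequency and judging abnormality from their magnitude, can be sketched in a few lines. This is a minimal illustration with assumed parameters (sampling rate, rotation frequency, number of harmonics), not the patented device's implementation:

```python
import numpy as np

def rotation_synchronized_level(signal, fs, rot_hz, n_harmonics=10, tol_hz=0.5):
    """Sum the spectral magnitude found at harmonics of the rotation frequency."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    level = 0.0
    for k in range(1, n_harmonics + 1):
        band = (freqs > k * rot_hz - tol_hz) & (freqs < k * rot_hz + tol_hz)
        if band.any():
            level += spectrum[band].max()
    return level

# Hypothetical usage: flag an abnormality when the synchronized level exceeds
# a threshold learned from normal operation.
fs = 8000                                 # sampling rate in Hz (assumed)
t = np.arange(fs) / fs
normal = 0.1 * np.random.default_rng(0).standard_normal(fs)        # resonance-like noise
abnormal = normal + 0.5 * np.sin(2 * np.pi * 3 * 25 * t)           # 3rd harmonic of a 25 Hz shaft
print(rotation_synchronized_level(normal, fs, rot_hz=25.0))
print(rotation_synchronized_level(abnormal, fs, rot_hz=25.0))
```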

  8. Effect of fMRI acoustic noise on non-auditory working memory task: comparison between continuous and pulsed sound emitting EPI.

    Science.gov (United States)

    Haller, Sven; Bartsch, Andreas J; Radue, Ernst W; Klarhöfer, Markus; Seifritz, Erich; Scheffler, Klaus

    2005-11-01

    Conventional blood oxygenation level-dependent (BOLD) based functional magnetic resonance imaging (fMRI) is accompanied by substantial acoustic gradient noise. This noise can influence the performance as well as neuronal activations. Conventional fMRI typically has a pulsed noise component, which is a particularly efficient auditory stimulus. We investigated whether the elimination of this pulsed noise component in a recent modification of continuous-sound fMRI modifies neuronal activations in a cognitively demanding non-auditory working memory task. Sixteen normal subjects performed a letter variant n-back task. Brain activity and psychomotor performance was examined during fMRI with continuous-sound fMRI and conventional fMRI. We found greater BOLD responses in bilateral medial frontal gyrus, left middle frontal gyrus, left middle temporal gyrus, left hippocampus, right superior frontal gyrus, right precuneus and right cingulate gyrus with continuous-sound compared to conventional fMRI. Conversely, BOLD responses were greater in bilateral cingulate gyrus, left middle and superior frontal gyrus and right lingual gyrus with conventional compared to continuous-sound fMRI. There were no differences in psychomotor performance between both scanning protocols. Although behavioral performance was not affected, acoustic gradient noise interferes with neuronal activations in non-auditory cognitive tasks and represents a putative systematic confound.

  9. Achieving visibility? Use of non-verbal communication in interactions between patients and pharmacists who do not share a common language.

    Science.gov (United States)

    Stevenson, Fiona

    2014-06-01

    Despite the seemingly insatiable interest in healthcare professional-patient communication, less attention has been paid to the use of non-verbal communication in medical consultations. This article considers pharmacists' and patients' use of non-verbal communication to interact directly in consultations in which they do not share a common language. In total, 12 video-recorded, interpreted pharmacy consultations concerned with a newly prescribed medication or a change in medication were analysed in detail. The analysis focused on instances of direct communication initiated by either the patient or the pharmacist, despite the presence of a multilingual pharmacy assistant acting as an interpreter. Direct communication was shown to occur through (i) the demonstration of a medical device, (ii) the indication of relevant body parts and (iii) the use of limited English. These connections worked to make patients and pharmacists visible to each other and thus to maintain a sense of mutual involvement in consultations within which patients and pharmacists could enact professionally and socially appropriate roles. In a multicultural society this work is important in understanding the dynamics involved in consultations in situations in which language is not shared and thus in considering the development of future research and policy. © 2014 The Author. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for SHIL (SHIL).

  10. The first-to-zero-sound transition in non-superfluid liquid 4He

    International Nuclear Information System (INIS)

    Woods, A.D.B.; Svensson, E.C.; Martel, P.

    1976-01-01

    Neutron inelastic scattering from ⁴He at T=2.3 K shows that for sufficiently small Q the 'sound-wave' excitations propagate with the characteristics of ordinary or first sound, while for Q > approximately 3 nm⁻¹ they propagate with the characteristics of zero sound. (Auth.)

  11. Roles of hippocampal subfields in verbal and visual episodic memory.

    Science.gov (United States)

    Zammit, Andrea R; Ezzati, Ali; Zimmerman, Molly E; Lipton, Richard B; Lipton, Michael L; Katz, Mindy J

    2017-01-15

    Selective hippocampal (HC) subfield atrophy has been reported in older adults with mild cognitive impairment and Alzheimer's disease. The goal of this study was to investigate the associations between the volume of hippocampal subfields and visual and verbal episodic memory in cognitively normal older adults. This study was conducted on a subset of 133 participants from the Einstein Aging Study (EAS), a community-based study of non-demented older adults systematically recruited from the Bronx, N.Y. All participants completed the comprehensive EAS neuropsychological assessment. Visual episodic memory was assessed using the Complex Figure Delayed Recall subtest from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). Verbal episodic memory was assessed using Delayed Recall from the Free and Cued Selective Reminding Test (FCSRT). All participants underwent 3T MRI brain scanning with subsequent automatic measurement of the hemispheric hippocampal subfield volumes (CA1, CA2-CA3, CA4-dentate gyrus, presubiculum, and subiculum). We used linear regressions to model the association between hippocampal subfield volumes and visual and verbal episodic memory tests while adjusting for age, sex, education, and total intracranial volume. Participants had a mean age of 78.9 (SD=5.1) and 60.2% were female. Total hippocampal volume was associated with Complex Figure Delayed Recall (β=0.31, p=0.001) and FCSRT Delayed Recall (β=0.27, p=0.007); subiculum volume was associated with Complex Figure Delayed Recall (β=0.27, p=0.002) and FCSRT Delayed Recall (β=0.24, p=0.010); CA1 volume was associated with Complex Figure Delayed Recall (β=0.26, p<0.05). Our results suggest that hippocampal subfields have sensitive roles in the process of visual and verbal episodic memory. Copyright © 2016 Elsevier B.V. All rights reserved.
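    The covariate-adjusted regressions described above can be illustrated with a short sketch. The column names and synthetic data below are hypothetical stand-ins, not the authors' analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data with the variables named in the abstract.
rng = np.random.default_rng(0)
n = 133
df = pd.DataFrame({
    "age": rng.normal(78.9, 5.1, n),
    "sex": rng.integers(0, 2, n),
    "education": rng.normal(14, 3, n),
    "icv": rng.normal(1400, 120, n),          # total intracranial volume (cm^3, assumed)
    "ca1_volume": rng.normal(600, 60, n),     # arbitrary units
})
df["complex_figure_recall"] = (
    0.02 * df["ca1_volume"] - 0.1 * df["age"] + rng.normal(0, 2, n)
)

# Association of CA1 volume with visual episodic memory, adjusted for covariates.
model = smf.ols(
    "complex_figure_recall ~ ca1_volume + age + C(sex) + education + icv",
    data=df,
).fit()
print(model.params["ca1_volume"], model.pvalues["ca1_volume"])
```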

  12. School effects on non-verbal intelligence and nutritional status in rural Zambia.

    Science.gov (United States)

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E; Grigorenko, Elena L

    2016-02-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and 75.9% of the between-school differences in NI and BMI, respectively. Grade-specific HLM yielded higher between-school variation of NI (41%) and BMI (14.6%) for students in grade 3 compared to grades 4 to 7. School factors showed a differential pattern of associations with NI and BMI across grades. The distance to a health post and teacher's teaching experience were the strongest predictors of NI (particularly in grades 4, 6 and 7); the presence of a preschool was linked to lower BMI in grades 4 to 6. Implications for improving access and quality of education in rural Zambia are discussed.
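    A hierarchical (multilevel) model of this general kind, with students nested within schools, might be sketched as follows. The variable names and synthetic data are hypothetical and only a couple of the 14 school factors are represented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: students nested within schools.
rng = np.random.default_rng(1)
n_schools, n_per_school = 30, 40
school = np.repeat(np.arange(n_schools), n_per_school)
school_effect = rng.normal(0, 2, n_schools)[school]                 # between-school variation
distance = np.repeat(rng.uniform(0, 20, n_schools), n_per_school)   # km to nearest health post
grade = rng.integers(3, 8, school.size)
ni = 50 + 2 * grade - 0.3 * distance + school_effect + rng.normal(0, 5, school.size)
df = pd.DataFrame({"school": school, "grade": grade, "distance": distance, "ni": ni})

# Random intercept for school; student- and school-level predictors as fixed effects.
m = smf.mixedlm("ni ~ grade + distance", data=df, groups=df["school"]).fit()
print(m.summary())

# Share of variance between schools (intraclass correlation) from the fitted model.
icc = m.cov_re.iloc[0, 0] / (m.cov_re.iloc[0, 0] + m.scale)
print(f"ICC = {icc:.2f}")
```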

  13. Descriptive study of the Socratic method: evidence for verbal shaping.

    Science.gov (United States)

    Calero-Elvira, Ana; Froján-Parga, María Xesús; Ruiz-Sancho, Elena María; Alpañés-Freitag, Manuel

    2013-12-01

    In this study we analyzed 65 fragments of session recordings in which a cognitive behavioral therapist employed the Socratic method with her patients. Specialized coding instruments were used to categorize the verbal behavior of the psychologist and the patients. First the fragments were classified as more or less successful depending on the overall degree of concordance between the patient's verbal behavior and the therapeutic objectives. Then the fragments were submitted to sequential analysis so as to discover regularities linking the patient's verbal behavior and the therapist's responses to it. Important differences between the more and the less successful fragments involved the therapist's approval or disapproval of verbalizations that approximated therapeutic goals. These approvals and disapprovals were associated with increases and decreases, respectively, in the patient's behavior. These results are consistent with the existence, in this particular case, of a process of shaping through which the therapist modifies the patient's verbal behavior in the overall direction of his or her chosen therapeutic objectives. © 2013.

  14. A Signal Processing Module for the Analysis of Heart Sounds and Heart Murmurs

    International Nuclear Information System (INIS)

    Javed, Faizan; Venkatachalam, P A; H, Ahmad Fadzil M

    2006-01-01

    In this paper a Signal Processing Module (SPM) for the computer-aided analysis of heart sounds has been developed. The module reveals important information about cardiovascular disorders and can assist general physicians in reaching more accurate and reliable diagnoses at early stages. It can compensate for the shortage of expert doctors in rural as well as urban clinics and hospitals. The module has five main blocks: Data Acquisition and Pre-processing, Segmentation, Feature Extraction, Murmur Detection, and Murmur Classification. The heart sounds are first acquired using an electronic stethoscope capable of transferring the signals to a nearby workstation over a wireless link. The signals are then segmented into individual cycles as well as individual components using spectral analysis of the heart sound, without any reference signal such as the ECG. Features are then extracted from the individual components using the spectrogram and used as input to an MLP (Multilayer Perceptron) neural network trained to detect the presence of heart murmurs. Once a murmur is detected, it is classified into one of seven classes depending on its timing within the cardiac cycle, using the Smoothed Pseudo Wigner-Ville distribution. The module has been tested with real heart sounds from 40 patients and has proved to be quite efficient and robust in dealing with a large variety of pathological conditions.
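    The feature-extraction and murmur-detection stages described above (spectrogram features feeding a multilayer perceptron) could look roughly like the following sketch. It is a simplified illustration on synthetic, pre-segmented cycles with assumed parameters, not the module's actual implementation:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier

FS = 2000  # assumed sampling rate of the electronic stethoscope, in Hz

def spectrogram_features(heart_cycle, fs=FS):
    """Summarize one segmented heart cycle as a fixed-length feature vector."""
    f, t, sxx = spectrogram(heart_cycle, fs=fs, nperseg=256, noverlap=128)
    keep = f <= 600.0                       # most murmur energy lies below ~600 Hz
    log_sxx = np.log(sxx[keep] + 1e-10)
    return log_sxx.mean(axis=1)             # average over time: one value per frequency bin

# Synthetic stand-in data: "normal" cycles are low-frequency transients, "murmur"
# cycles add a noisy higher-frequency component between S1 and S2.
rng = np.random.default_rng(0)
def fake_cycle(murmur):
    t = np.arange(int(0.8 * FS)) / FS
    s = np.sin(2 * np.pi * 40 * t) * np.exp(-20 * t)            # S1-like transient
    if murmur:
        s += 0.3 * rng.standard_normal(t.size) * (t > 0.1) * (t < 0.4)
    return s + 0.01 * rng.standard_normal(t.size)

X = np.vstack([spectrogram_features(fake_cycle(m)) for m in [0, 1] * 50])
y = np.array([0, 1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```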

  16. Confronting the sound speed of dark energy with future cluster surveys

    DEFF Research Database (Denmark)

    Basse, Tobias; Eggers Bjaelde, Ole; Hannestad, Steen

    2012-01-01

    Future cluster surveys will observe galaxy clusters numbering in the hundred thousands. We consider in this work how these surveys can be used to constrain dark energy parameters: in particular, the equation of state parameter w and the non-adiabatic sound speed c_s^2. We demonstrate that, in combination with Cosmic Microwave Background (CMB) observations from Planck, cluster surveys such as that in the ESA Euclid project will be able to determine a time-independent w with subpercent precision. Likewise, if the dark energy sound horizon falls within the length scales probed by the cluster survey, then c_s^2 can be pinned down to within an order of magnitude. In the course of this work, we also investigate the process of dark energy virialisation in the presence of an arbitrary sound speed. We find that dark energy clustering and virialisation can lead to dark energy contributing to the total...

  17. Input frequencies in processing of verbal morphology in L1 and L2: Evidence from Russian

    Directory of Open Access Journals (Sweden)

    Tatiana Chernigovskaya

    2011-02-01

    Full Text Available In this study we take a usage-based perspective on the analysis of data from the acquisition of verbal morphology by Norwegian adult learners of L2 Russian, as compared to children acquiring Russian as an L1. According to the usage-based theories, language learning is input-driven and frequency of occurrence of grammatical structures and lexical items in the input plays a key role in this process. We have analysed to what extent the acquisition and processing of Russian verbal morphology by children and adult L2 learners is dependent on the input factors, in particular on type and token frequencies. Our analysis of the L2 input based on the written material used in the instruction shows a different distribution of frequencies as compared to the target language at large. The results of the tests that elicited present tense forms of verbs belonging to four different inflectional classes (-AJ-, -A-, -I-, and -OVA-) have demonstrated that for both Russian children and L2 learners type frequency appears to be an important factor, influencing both correct stem recognition and generalisations. The results have also demonstrated token frequency effects. For L2 learners we observed also effects of formal instruction and greater reliance on morphological cues. In spite of the fact that L2 learners did not match completely any of the child groups, there are many similarities between L1 and L2 morphological processing, the main one being the role of frequency.

  18. Dementias show differential physiological responses to salient sounds

    Directory of Open Access Journals (Sweden)

    Phillip David Fletcher

    2015-03-01

    Full Text Available Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ('looming') or less salient withdrawing sounds. Pupil dilatation responses and behavioural rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n=10; behavioural variant frontotemporal dementia, n=16; progressive non-fluent aphasia, n=12; amnestic Alzheimer's disease, n=10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals, but this behavioural response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.
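    Stimuli of this kind, a tone whose intensity rises to signal approach or falls to signal withdrawal, can be generated with a simple amplitude ramp. The sketch below is illustrative only; the tone frequency, duration, and ramp depth are assumptions rather than the study's stimulus specification:

```python
import numpy as np

FS = 44100  # audio sampling rate in Hz (assumed)

def intensity_ramp_tone(direction="looming", f0=500.0, dur=1.0, ramp_db=20.0):
    """Pure tone whose level rises ('looming') or falls ('withdrawing') over time."""
    t = np.arange(int(dur * FS)) / FS
    tone = np.sin(2 * np.pi * f0 * t)
    # Linear ramp in decibels across the duration of the sound.
    if direction == "looming":
        db = np.linspace(-ramp_db, 0.0, t.size)
    else:
        db = np.linspace(0.0, -ramp_db, t.size)
    return tone * 10.0 ** (db / 20.0)

looming = intensity_ramp_tone("looming")
withdrawing = intensity_ramp_tone("withdrawing")
# e.g. soundfile.write("looming.wav", looming, FS) to listen to the result
```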

  19. Plastic modes of listening: affordance in constructed sound environments

    Science.gov (United States)

    Sjolin, Anders

    This thesis is concerned with how the ecological approach to perception, together with the notion of listening modes, informs the creation of sound art installations, or more specifically, as referred to in this thesis, constructed sound environments. The thesis is based on practice-based research, where the aim and purpose of the written part of this PhD project has been to critically investigate the area of sound art in order to map various approaches towards participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach towards understanding sound art developed by Brandon LaBelle (2006). The findings within the written part of the thesis, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy and progress behind the organisation and construction of sound environments. The research concerns point towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach governs the idea that perceiving a sound environment is a top-down process where the autonomic quality of a constructed sound environment is based upon the perception of structures of the sound material and its relationship with speaker placement and surrounding space. This enables a researcher to side-step the conflicting poles of musical/abstract and non-musical/realistic classification of sound elements and regard these poles as included, not separated, elements in the analysis of a constructed sound environment.

  20. Subjective Loudness and Reality of Auditory Verbal Hallucinations and Activation of the Inner Speech Processing Network

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, Andre

    Background: One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can

  1. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system...... time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which...... auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream....

  2. Cerebrocerebellar networks during articulatory rehearsal and verbal working memory tasks.

    Science.gov (United States)

    Chen, S H Annabel; Desmond, John E

    2005-01-15

    Converging evidence has implicated the cerebellum in verbal working memory. The current fMRI study sought to further characterize cerebrocerebellar participation in this cognitive process by revealing regions of activation common to a verbal working memory task and an articulatory control task, as well as regions that are uniquely activated by working memory. Consistent with our model's predictions, load-dependent activations were observed in Broca's area (BA 44/6) and the superior cerebellar hemisphere (VI/Crus I) for both working memory and motoric rehearsal. In contrast, activations unique to verbal working memory were found in the inferior parietal lobule (BA 40) and the right inferior cerebellar hemisphere (VIIB). These findings provide evidence for two cerebrocerebellar networks for verbal working memory: a frontal/superior cerebellar articulatory control system and a parietal/inferior cerebellar phonological storage system.

  3. From Hearing Sounds to Recognizing Phonemes: Primary Auditory Cortex is A Truly Perceptual Language Area

    Directory of Open Access Journals (Sweden)

    Byron Bernal

    2016-11-01

    Full Text Available The aim of this article is to present a systematic review of the anatomy, function, connectivity, and functional activation of the primary auditory cortex (PAC, Brodmann areas 41/42) when involved in language paradigms. The PAC activates in response to a plethora of diverse basic stimuli including but not limited to tones, chords, natural sounds, consonants, and speech. Nonetheless, the PAC shows specific sensitivity to speech. Damage to the PAC is associated with so-called “pure word-deafness” (“auditory verbal agnosia”). BA41, and to a lesser extent BA42, are involved in early stages of phonological processing (phoneme recognition). Phonological processing may take place in either the right or left side, but customarily the left exerts an inhibitory tone over the right, gaining dominance in function. BA41/42 are primary auditory cortices harboring complex phoneme perception functions with asymmetrical expression, making it possible to include them as core language processing areas (Wernicke’s area).

  4. Evidence against Decay in Verbal Working Memory

    Science.gov (United States)

    Oberauer, Klaus; Lewandowsky, Stephan

    2013-01-01

    The article tests the assumption that forgetting in working memory for verbal materials is caused by time-based decay, using the complex-span paradigm. Participants encoded 6 letters for serial recall; each letter was preceded and followed by a processing period comprising 4 trials of difficult visual search. Processing duration, during which…

  5. Processos de substituição e variabilidade articulatória na fala de sujeitos com dispraxia verbal Substitution processes and articulatory variability in the speech of subjects with verbal dyspraxia

    Directory of Open Access Journals (Sweden)

    Inaê Costa Rechia

    2009-01-01

    Full Text Available The aim of the present study was to analyze the role of linguistic variables in the occurrence of substitution processes in the speech of subjects with verbal dyspraxia (VD). To this end, a phonological analysis was carried out on seven subjects aged between 2:6 (years:months) and 4:2 with a diagnostic hypothesis of VD. Occurrences of usual and idiosyncratic substitution processes, assimilations, and articulatory variability were analyzed statistically using the VARBRUL software package. The variable word length was statistically significant for the occurrence of assimilations and unusual substitutions, indicating that trisyllabic and polysyllabic variants most favored these processes. Stress was statistically significant for the occurrence of articulatory variability and usual substitutions, with these processes being more likely in stressed and post-stressed syllables (syllables within the metrical foot of the stress), respectively. Sound class was significant for the usual substitutions produced by the subjects studied, which occurred when the segments were liquid and fricative phonemes. Finally, syllable structure was statistically significant for the idiosyncratic substitutions, with final coda and medial simple onset positions the most susceptible to this process. The data from this study suggest that substitutions, in general, tend to occur in words of more than two syllables, on liquid and fricative targets, within the metrical foot of the stress (in stressed and post-stressed syllables), and in medial simple onset and final coda positions.

  6. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech. Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14-months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  7. Convection measurement package for space processing sounding rocket flights. [low gravity manufacturing - fluid dynamics

    Science.gov (United States)

    Spradley, L. W.

    1975-01-01

    The effects of nonconstant accelerations, rocket vibrations, and spin rates on heated fluids were studied. A system is discussed which can determine the influence of these convective effects on fluid experiments. The general suitability of sounding rockets for performing such experiments is treated. An analytical investigation of convection in an enclosure heated in low gravity was carried out. The gravitational body force was taken as a time-varying function using anticipated sounding rocket accelerations, since accelerometer flight data were not available. A computer program was used to calculate the flow rates and heat transfer in fluids with geometries and boundary conditions typical of space processing configurations. Results of the analytical investigation identify the configurations, fluids, and boundary values which are most suitable for measuring the convective environment of sounding rockets. A short description of the fabricated fluid cells and the convection measurement package is given. Photographs are included.

  8. Selection of words for implementation of the Picture Exchange Communication System - PECS in non-verbal autistic children.

    Science.gov (United States)

    Ferreira, Carine; Bevilacqua, Monica; Ishihara, Mariana; Fiori, Aline; Armonia, Aline; Perissinoto, Jacy; Tamanaha, Ana Carina

    2017-03-09

    It is known that some autistic individuals are considered non-verbal, since they are unable to use verbal language and barely use gestures to compensate for the absence of speech. Therefore, these individuals' ability to communicate may benefit from the use of the Picture Exchange Communication System - PECS. The objective of this study was to verify the most frequently used words in the implementation of PECS in autistic children, and on a complementary basis, to analyze the correlation between the frequency of these words and the rate of maladaptive behaviors. This is a cross-sectional study. The sample was composed of 31 autistic children, twenty-five boys and six girls, aged between 5 and 10 years old. To identify the most frequently used words in the initial period of implementation of PECS, the Vocabulary Selection Worksheet was used. And to measure the rate of maladaptive behaviors, we applied the Autism Behavior Checklist (ABC). There was a significant prevalence of items in the category "food", followed by "activities" and "beverages". There was no correlation between the total amount of items identified by the families and the rate of maladaptive behaviors. The categories of words most mentioned by the families could be identified, and it was confirmed that the level of maladaptive behaviors did not interfere directly in the preparation of the vocabulary selection worksheet for the children studied.

  9. Estudo do código visual originado do código verbal na linguagem jornalística Study of the visual code originated the verbal code in journalistic language

    Directory of Open Access Journals (Sweden)

    Leange Severo Alves

    1982-11-01

    Full Text Available Study of the interaction and importance of verbal and non-verbal language in the communication of the journalistic message. Within this focus, we analyse writing as representation, graphic art, and the design and body of the typeface. We also analyse the mode and measure of composition and the use of colours in the newspaper as visual resources that attract and hold the reader's attention.

  10. Mismatch and lexical retrieval gestures are associated with visual information processing, verbal production, and symptomatology in youth at high risk for psychosis.

    Science.gov (United States)

    Millman, Zachary B; Goss, James; Schiffman, Jason; Mejias, Johana; Gupta, Tina; Mittal, Vijay A

    2014-09-01

    Gesture is integrally linked with language and cognitive systems, and recent years have seen a growing attention to these movements in patients with schizophrenia. To date, however, there have been no investigations of gesture in youth at ultra high risk (UHR) for psychosis. Examining gesture in UHR individuals may help to elucidate other widely recognized communicative and cognitive deficits in this population and yield new clues for treatment development. In this study, mismatch (indicating semantic incongruency between the content of speech and a given gesture) and retrieval (used during pauses in speech while a person appears to be searching for a word or idea) gestures were evaluated in 42 UHR individuals and 36 matched healthy controls. Cognitive functions relevant to gesture production (i.e., speed of visual information processing and verbal production) as well as positive and negative symptomatologies were assessed. Although the overall frequency of cases exhibiting these behaviors was low, UHR individuals produced substantially more mismatch and retrieval gestures than controls. The UHR group also exhibited significantly poorer verbal production performance when compared with controls. In the patient group, mismatch gestures were associated with poorer visual processing speed and elevated negative symptoms, while retrieval gestures were associated with higher speed of visual information-processing and verbal production, but not symptoms. Taken together these findings indicate that gesture abnormalities are present in individuals at high risk for psychosis. While mismatch gestures may be closely related to disease processes, retrieval gestures may be employed as a compensatory mechanism. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Sound-symbolism boosts novel word learning

    NARCIS (Netherlands)

    Lockwood, G.F.; Dingemanse, M.; Hagoort, P.

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally

  12. Spatial and verbal working memory: A functional magnetic resonance imaging study

    Directory of Open Access Journals (Sweden)

    Blaž Koritnik

    2004-08-01

    Full Text Available According to numerous studies, working memory is not a unitary system. Baddeley's model of working memory includes, besides the central executive, two separate systems for verbal and visuo-spatial information processing. A modality- and process-specific specialization presumably exists in the working memory system of the frontal lobes. In our preliminary study, we used functional magnetic resonance imaging to study the pattern of cortical activation during spatial and verbal n-back tasks in six healthy subjects. A bilateral fronto-parietal cortical network was activated in both tasks. There was larger activation of right parietal and bilateral occipital areas in the spatial than in the verbal task. Activation of the left sensorimotor area was larger in the verbal compared to the spatial task. No task-specific differences were found in the prefrontal cortex. Our results support the assumption that modality-specific processes exist within the working memory system.

  13. Dual-task interference effects on cross-modal numerical order and sound intensity judgments: the more the louder?

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C

    2017-09-01

    In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.

  14. The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

    Science.gov (United States)

    Shafiro, Valeriy; Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N

    2012-06-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
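    The four-channel vocoding used to spectrally degrade the stimuli follows a standard recipe: split the signal into a few frequency bands, extract each band's amplitude envelope, and use it to modulate band-limited noise. A minimal sketch of that idea follows; the band edges, filter orders, and envelope extraction are assumed values, not the study's exact processing:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, lo=100.0, hi=5000.0):
    """Crude n-channel noise vocoder: each band's envelope modulates band noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)      # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))              # amplitude envelope of the band
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                     # envelope-modulated band noise
    return out / (np.max(np.abs(out)) + 1e-9)

# Hypothetical usage with a synthetic test signal standing in for a recorded sound.
fs = 16000
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 440 * t) * (t < 0.5)
degraded = noise_vocode(test, fs)
```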

  15. Vibrotactile Identification of Signal-Processed Sounds from Environmental Events Presented by a Portable Vibrator: A Laboratory Study

    Directory of Open Access Journals (Sweden)

    Parivash Ranjbar

    2008-09-01

    Full Text Available Objectives: To evaluate different signal-processing algorithms for tactile identification of environmental sounds in a monitoring aid for the deafblind. Participants were two men and three women, sensorineurally deaf or profoundly hearing impaired, aged 22-36 years, with experience of vibratory experiments. Methods: A closed set of 45 representative environmental sounds was processed using two transposing (TRHA, TR1/3) and three modulating (AM, AMFM, AMMC) algorithms and presented as tactile stimuli using a portable vibrator in three experiments. The algorithms TRHA, TR1/3, AMFM and AMMC each had two alternatives (with and without adaptation to vibratory thresholds). In Exp. 1, the sounds were preprocessed and fed directly to the vibrator. In Exp. 2 and 3, the sounds were presented in an acoustic test room, without or with background noise (SNR = +5 dB), and processed in real time. Results: In Exp. 1, algorithms AMFM and AMFM(A) consistently had the lowest identification scores and were thus excluded from Exp. 2 and 3. TRHA, AM, AMMC, and AMMC(A) showed comparable identification scores (30%-42%), and the addition of noise did not deteriorate performance. Discussion: Algorithms TRHA, AM, AMMC, and AMMC(A) performed well in all three experiments and were robust in noise; they can therefore be used in further testing in real environments.
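    As an illustration of the modulating approach, the sketch below amplitude-modulates a fixed vibrotactile carrier with the sound's amplitude envelope. The 250 Hz carrier and 50 Hz envelope cutoff are assumptions chosen for the example, not the parameters of the study's AM, AMFM, or AMMC algorithms:

      # Illustrative envelope-on-carrier amplitude modulation for a vibrotactile display.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def am_vibrotactile(sound, fs, carrier_hz=250.0, env_cutoff=50.0):
          t = np.arange(len(sound)) / fs
          sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
          envelope = sosfiltfilt(sos, np.abs(sound))              # slowly varying loudness contour
          envelope /= (np.max(envelope) + 1e-12)                  # scale to 0..1
          return envelope * np.sin(2 * np.pi * carrier_hz * t)    # drive signal for the vibrator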

  16. THE EFFECT OF THE PROBLEM POSING METHOD ON THE VERBAL CREATIVITY OF GRADE VII JUNIOR HIGH SCHOOL STUDENTS

    Directory of Open Access Journals (Sweden)

    Bagus Priambodo

    2013-10-01

    Full Text Available Verbal creativity is the ability to think fluently, flexibly, and originally, manifested through words. Psychological freedom is one factor that can foster creativity. One alternative teaching method that provides freedom in the learning atmosphere is the Problem Posing Method (PPM), initiated by Paulo Freire. This research aims to determine whether or not PPM influences verbal creativity. The subjects were junior high school students in grade 7 who had received conventional learning materials and had never been taught using PPM. This study used a non-randomized pretest-posttest control group design. Subjects were divided into an experimental group (N = 33) and a control group (N = 35). Data were collected using the Verbal Creativity Test. Hypothesis testing with the independent-samples t-test showed a difference in means of 3.294 (p = 0.014, significant at the 0.05 level). Keywords: Verbal creativity, problem posing method, a test of verbal creativity, junior high school students
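    The reported comparison is an independent-samples t-test on the two groups' creativity scores. A minimal sketch with SciPy, using invented placeholder arrays rather than the study's data:

      # Hedged sketch of the reported analysis: independent-samples t-test on verbal
      # creativity gain scores (the arrays below are placeholders, not the study's data).
      import numpy as np
      from scipy import stats

      experimental_gain = np.array([4.0, 2.5, 5.0, 3.0])   # pretest-to-posttest gains, PPM group
      control_gain = np.array([1.0, 0.5, 2.0, 1.5])        # gains, conventional-teaching group

      t_stat, p_value = stats.ttest_ind(experimental_gain, control_gain, equal_var=True)
      print(f"t = {t_stat:.3f}, p = {p_value:.3f}, significant at 0.05: {p_value < 0.05}")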

  17. Neuromimetic Sound Representation for Percept Detection and Manipulation

    Directory of Open Access Journals (Sweden)

    Chi Taishih

    2005-01-01

    Full Text Available The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computational time for a one-second signal sampled at ...). Work on bringing the algorithms into the real-time processing domain is ongoing.

  18. THE BRAIN CORRELATES OF THE EFFECTS OF MONETARY AND VERBAL REWARDS ON INTRINSIC MOTIVATION

    Directory of Open Access Journals (Sweden)

    Konstanze eAlbrecht

    2014-09-01

    Full Text Available Apart from everyday duties, such as doing the laundry or cleaning the house, there are tasks we do for pleasure and enjoyment. We do such tasks, like solving crossword puzzles or reading novels, without any external pressure or force; instead, we are intrinsically motivated: We do the tasks because we enjoy doing them. Previous studies suggest that external rewards, i.e., rewards from the outside, affect the intrinsic motivation to engage in a task: While performance-based monetary rewards are perceived as controlling and induce a business-contract framing, verbal rewards praising one's competence can enhance the perceived self-determination. Accordingly, the former have been shown to decrease intrinsic motivation, whereas the latter have been shown to increase intrinsic motivation. The present study investigated the neural processes underlying the effects of monetary and verbal rewards on intrinsic motivation in a group of 64 subjects applying functional magnetic resonance imaging (fMRI). We found that, when participants received positive performance feedback, activation in the anterior striatum and midbrain was affected by the nature of the reward; compared to a non-rewarded control group, activation was higher while monetary rewards were administered. However, we did not find a decrease in activation after reward withdrawal. In contrast, we found an increase in activation for verbal rewards: After verbal rewards had been withdrawn, participants showed a higher activation in the aforementioned brain areas when they received success compared to failure feedback. We further found that, while participants worked on the task, activation in the lateral prefrontal cortex was enhanced after the verbal rewards were administered and withdrawn.

  19. Music improves verbal memory encoding while decreasing prefrontal cortex activity: an fNIRS study

    OpenAIRE

    Ferreri, Laura; Aucouturier, Jean-Julien; Muthalib, Makii; Bigand, Emmanuel; Bugaiska, Aurelia

    2013-01-01

    Listening to music engages the whole brain, thus stimulating cognitive performance in a range of non-purely musical activities such as language and memory tasks. This article addresses an ongoing debate on the link between music and memory for words. While evidence on healthy and clinical populations suggests that music listening can improve verbal memory in a variety of situations, it is still unclear what specific memory process is affected and how. This study was designed to explore the hy...

  20. Rethinking a Negative Event: The Affective Impact of Ruminative versus Imagery-Based Processing of Aversive Autobiographical Memories.

    Science.gov (United States)

    Slofstra, Christien; Eisma, Maarten C; Holmes, Emily A; Bockting, Claudi L H; Nauta, Maaike H

    2017-01-01

    Ruminative (abstract verbal) processing during recall of aversive autobiographical memories may serve to dampen their short-term affective impact. Experimental studies indeed demonstrate that verbal processing of non-autobiographical material and positive autobiographical memories evokes weaker affective responses than imagery-based processing. In the current study, we hypothesized that abstract verbal or concrete verbal processing of an aversive autobiographical memory would result in weaker affective responses than imagery-based processing. The affective impact of abstract verbal versus concrete verbal versus imagery-based processing during recall of an aversive autobiographical memory was investigated in a non-clinical sample (n = 99) using both an observational and an experimental design. Observationally, it was examined whether spontaneous use of processing modes (both state and trait measures) was associated with the impact of aversive autobiographical memory recall on negative and positive affect. Experimentally, the causal relation between processing modes and affective impact was investigated by manipulating the processing mode during retrieval of the same aversive autobiographical memory. Main findings were that higher levels of trait (but not state) measures of both ruminative and imagery-based processing and depressive symptomatology were positively correlated with higher levels of negative affective impact in the observational part of the study. In the experimental part, no main effect of processing modes on affective impact of autobiographical memories was found. However, a significant moderating effect of depressive symptomatology was found. Only for individuals with low levels of depressive symptomatology, concrete verbal (but not abstract verbal) processing of the aversive autobiographical memory did result in weaker affective responses, compared to imagery-based processing. These results cast doubt on the hypothesis that ruminative processing of

  1. Can verbal working memory training improve reading?

    Science.gov (United States)

    Banales, Erin; Kohnen, Saskia; McArthur, Genevieve

    2015-01-01

    The aim of the current study was to determine whether poor verbal working memory is associated with poor word reading accuracy because the former causes the latter, or the latter causes the former. To this end, we tested whether (a) verbal working memory training improves poor verbal working memory or poor word reading accuracy, and whether (b) reading training improves poor reading accuracy or verbal working memory in a case series of four children with poor word reading accuracy and verbal working memory. Each child completed 8 weeks of verbal working memory training and 8 weeks of reading training. Verbal working memory training improved verbal working memory in two of the four children, but did not improve their reading accuracy. Similarly, reading training improved word reading accuracy in all children, but did not improve their verbal working memory. These results suggest that the causal links between verbal working memory and reading accuracy may not be as direct as has been assumed.

  2. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant's gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider

  3. Autonomy of imagery and production of original verbal images.

    Science.gov (United States)

    Khatena, J

    1976-08-01

    90 college students (31 men and 59 women) were categorized as moderately autonomous, less autonomous (less highly controlled) and non-autonomous (highly controlled) imagers according to the Gordon Test of Visual Imagery Control. Moderately autonomous imagers produced significantly more original verbal images than less autonomous and non-autonomous imagers, with less autonomous imagers scoring higher than non-autonomous imagers as measured by Onomatopoeia and Images. There were no significant sex main effects or interaction of autonomy of imagery level X sex.

  4. Macroscopic brain dynamics during verbal and pictorial processing of affective stimuli.

    Science.gov (United States)

    Keil, Andreas

    2006-01-01

    Emotions can be viewed as action dispositions, preparing an individual to act efficiently and successfully in situations of behavioral relevance. To initiate optimized behavior, it is essential to accurately process the perceptual elements indicative of emotional relevance. The present chapter discusses effects of affective content on neural and behavioral parameters of perception, across different information channels. Electrocortical data are presented from studies examining affective perception with pictures and words in different task contexts. As a main result, these data suggest that sensory facilitation has an important role in affective processing. Affective pictures appear to facilitate perception as a function of emotional arousal at multiple levels of visual analysis. If the discrimination between affectively arousing vs. nonarousing content relies on fine-grained differences, amplification of the cortical representation may occur as early as 60-90 ms after stimulus onset. Affectively arousing information as conveyed via visual verbal channels was not subject to such very early enhancement. However, electrocortical indices of lexical access and/or activation of semantic networks showed that affectively arousing content may enhance the formation of semantic representations during word encoding. It can be concluded that affective arousal is associated with activation of widespread networks, which act to optimize sensory processing. On the basis of prioritized sensory analysis for affectively relevant stimuli, subsequent steps such as working memory, motor preparation, and action may be adjusted to meet the adaptive requirements of the situation perceived.

  5. SELKIRK'S THEORY OF VERBAL COMPOUNDING: A CRITICAL ...

    African Journals Online (AJOL)

    Selkirk presents her theory of verbal compounding as part of a more general theory ... typical lexicalist vein, words are assigned a dual status (Selkirk 1981: 230). On ..... nonhead and a deverbal head is an extremely productive process. Con-.

  6. Working memory capacity and visual-verbal cognitive load modulate auditory-sensory gating in the brainstem: toward a unified view of attention.

    Science.gov (United States)

    Sörqvist, Patrik; Stenfelt, Stefan; Rönnberg, Jerker

    2012-11-01

    Two fundamental research questions have driven attention research in the past: One concerns whether selection of relevant information among competing, irrelevant, information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.
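    The visual-verbal load manipulation above is an n-back task. A minimal sketch of how such a letter stream can be generated and its targets identified (the letter set, sequence length, and target rate are arbitrary choices for illustration, not the study's parameters):

      # Generate an n-back letter stream and list the positions of the targets.
      import random

      def make_nback_stream(n=2, length=30, target_rate=0.3, letters="BCDFGHJKLM"):
          stream = []
          for i in range(length):
              if i >= n and random.random() < target_rate:
                  stream.append(stream[i - n])              # deliberate n-back match (target)
              else:
                  candidates = [c for c in letters if i < n or c != stream[i - n]]
                  stream.append(random.choice(candidates))  # non-target letter
          targets = [i for i in range(n, length) if stream[i] == stream[i - n]]
          return stream, targets

      stream, targets = make_nback_stream()
      print("".join(stream), "targets at", targets)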

  7. Strategic verbal rehearsal in adolescents with mild intellectual disabilities: A multi-centre European study.

    Science.gov (United States)

    Poloczek, Sebastian; Henry, Lucy A; Danielson, Henrik; Büttner, Gerhard; Mähler, Claudia; Messer, David J; Schuchardt, Kirsten; Molen, Mariët J van der

    2016-11-01

    There is a long-held view that verbal short-term memory problems of individuals with intellectual disabilities (ID) might be due to a deficit in verbal rehearsal. However, the evidence is inconclusive and word length effects as indicator of rehearsal have been criticised. The aim of this multi-site European study was to investigate verbal rehearsal in adolescents with mild ID (n=90) and a comparison group of typically developing children matched individually for mental age (MA, n=90). The investigation involved: (1) a word length experiment with non-verbal recall using pointing and (2) 'self-paced' inspection times to infer whether verbal strategies were utilised when memorising a set of pictorial items. The word length effect on recall did not interact with group, suggesting that adolescents with ID and MA comparisons used similar verbal strategies, possibly phonological recoding of picture names. The inspection time data suggested that high span individuals in both groups used verbal labelling or single item rehearsal on more demanding lists, as long named items had longer inspection times. The findings suggest that verbal strategy use is not specifically impaired in adolescents with mild ID and is mental age appropriate, supporting a developmental perspective. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. [Pre-verbality in focusing and the need for self check. An attempt at "focusing check"].

    Science.gov (United States)

    Masui, T; Ikemi, A; Murayama, S

    1983-06-01

    Though the Focusing process is not entirely non-verbal, in Focusing, careful attention is paid by the Focuser and the Listener to the pre-verbal experiential process. In other words, Focusing involves attending to the felt sense that is not easily expressed in words immediately. Hence, during the process of learning to Focus, the Focusing teacher attempts to communicate to the student experiences of Focusing that are not easily conveyed in words. Due to such difficulties, the Focusing student may (and quite frequently does) confuse the experiential process in Focusing with other processes. Often, the felt sense can be confused with other phenomena such as "autogenic discharge". Also, the Focuser may not stay with the felt sense and drift into "free association", or frequently, certain processes in "meditation" can be confused with Focusing. Therefore, there is a need for a "check" by which the Focusing student can confirm the Focusing experience for himself. For the Focusing student, such a "check" serves not only to confirm the Focusing process, but also as an aid to learning Focusing. We report here a "Focusing Check" which we developed by translating Eugene Gendlin's "Focusing Check" and making several modifications so that it will be more understandable to the Japanese. Along with the "Focusing Check" we developed, the authors discuss the need for such a check.

  9. Musical and verbal semantic memory: two distinct neural networks?

    Science.gov (United States)

    Groussard, M; Viader, F; Hubert, V; Landeau, B; Abbas, A; Desgranges, B; Eustache, F; Platel, H

    2010-02-01

    Semantic memory has been investigated in numerous neuroimaging and clinical studies, most of which have used verbal or visual, but only very seldom, musical material. Clinical studies have suggested that there is a relative neural independence between verbal and musical semantic memory. In the present study, "musical semantic memory" is defined as memory for "well-known" melodies without any knowledge of the spatial or temporal circumstances of learning, while "verbal semantic memory" corresponds to general knowledge about concepts, again without any knowledge of the spatial or temporal circumstances of learning. Our aim was to compare the neural substrates of musical and verbal semantic memory by administering the same type of task in each modality. We used high-resolution PET H(2)O(15) to observe 11 young subjects performing two main tasks: (1) a musical semantic memory task, where the subjects heard the first part of familiar melodies and had to decide whether the second part they heard matched the first, and (2) a verbal semantic memory task with the same design, but where the material consisted of well-known expressions or proverbs. The musical semantic memory condition activated the superior temporal area and inferior and middle frontal areas in the left hemisphere and the inferior frontal area in the right hemisphere. The verbal semantic memory condition activated the middle temporal region in the left hemisphere and the cerebellum in the right hemisphere. We found that the verbal and musical semantic processes activated a common network extending throughout the left temporal neocortex. In addition, there was a material-dependent topographical preference within this network, with predominantly anterior activation during musical tasks and predominantly posterior activation during semantic verbal tasks. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  10. Vowel identity between note labels confuses pitch identification in non-absolute pitch possessors.

    Directory of Open Access Journals (Sweden)

    Alfredo Brancucci

    Full Text Available The simplest and likeliest assumption concerning the cognitive bases of absolute pitch (AP) is that at its origin there is a particularly skilled function which matches the height of the perceived pitch to the verbal label of the musical tone. Since there is no difference in sound frequency resolution between AP and non-AP (NAP) musicians, the hypothesis of the present study is that the failure of NAP musicians in pitch identification lies mainly in an inability to retrieve the correct verbal label to be assigned to the perceived musical note. The primary hypothesis is that, when asked to identify tones, NAP musicians confuse the verbal labels to be attached to the stimulus on the basis of their phonetic content. Data from two AP tests are reported, in which subjects had to respond in the presence or in the absence of visually presented verbal note labels (fixed Do solmization). Results show that NAP musicians more frequently confuse notes having a similar vowel in the note label. They tend to confuse, e.g., a 261 Hz tone (Do) more often with Sol than, e.g., with La. As a second goal, we wondered whether this effect is lateralized, i.e. whether one hemisphere is more responsible than the other for the confusion of notes with similar labels. This question was addressed by observing pitch identification during dichotic listening. Results showed that there is a right hemispheric disadvantage, in NAP but not AP musicians, in the retrieval of the verbal label to be assigned to the perceived pitch. The present results indicate that absolute pitch has strong verbal bases, at least from a cognitive point of view.

  11. Evidence against decay in verbal working memory.

    Science.gov (United States)

    Oberauer, Klaus; Lewandowsky, Stephan

    2013-05-01

    The article tests the assumption that forgetting in working memory for verbal materials is caused by time-based decay, using the complex-span paradigm. Participants encoded 6 letters for serial recall; each letter was preceded and followed by a processing period comprising 4 trials of difficult visual search. Processing duration, during which memory could decay, was manipulated via search set size. This manipulation increased retention interval by up to 100% without having any effect on recall accuracy. This result held with and without articulatory suppression. Two experiments using a dual-task paradigm showed that the visual search process required central attention. Thus, even when memory maintenance by central attention and by articulatory rehearsal was prevented, a large delay had no effect on memory performance, contrary to the decay notion. Most previous experiments that manipulated the retention interval and the opportunity for maintenance processes in complex span have confounded these variables with time pressure during processing periods. Three further experiments identified time pressure as the variable that affected recall. We conclude that time-based decay does not contribute to the capacity limit of verbal working memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. CHILD COMPREHENSION OF ADULTS’ VERBAL INPUT: A CASE OF BILINGUAL ACQUISITION IN INFANCY

    Directory of Open Access Journals (Sweden)

    Ni Luh Putu Sri Adnyani

    2017-05-01

    Full Text Available Research concerning comprehension in early simultaneous bilingualism is still very limited. Thus, this study focuses on describing a bilingual infant's comprehension of adults' verbal input addressed to the child in an Indonesian-German language environment, and the child's understanding of translation equivalents (TEs). The child, who was exposed to Indonesian and German simultaneously from birth, was observed from age 0;9 to age 1;3 using a diary supplemented with weekly video recordings. A "one parent-one language" system was applied in which the child received Indonesian language from the mother and German language from the father from birth. Since the family live in Indonesia and have regular contact with the collective family members, the child received dominant exposure to Indonesian compared to German. The data was transcribed and analysed using ELAN. The results show that the adults' verbal inputs in the form of speech addressed to the child were short utterances which very often had a high-pitched sound and were rich in repetition. The adults' speech was able to be discriminated by the child. In the pre-production stage, the child could understand approximately 6 (six) proper nouns, 18 (eighteen) Indonesian words and 14 (fourteen) German words. The result reveals that the child could comprehend more words in Indonesian than in German. It was also found that the child could understand some bilingual synonyms, which implies that at the pre-production stage, the child already went through a process of bilingual development.

  13. The role of orbitofrontal cortex in processing empathy stories in 4-8 year-old children

    Directory of Open Access Journals (Sweden)

    Tila Tabea eBrink

    2011-04-01

    Full Text Available This study investigates the neuronal correlates of empathic processing in children aged 4 to 8 years, an age range discussed to be crucial for the development of empathy. Empathy, defined as the ability to understand and share another person's inner life, consists of two components: affective (emotion-sharing) and cognitive empathy (Theory of Mind). We examined the hemodynamic responses of pre-school and school children (N=48), while they processed verbal (auditory) and non-verbal (cartoons) empathy stories in a passive following paradigm, using functional Near-Infrared Spectroscopy (fNIRS). To control for the two types of empathy, children were presented blocks of stories eliciting either affective or cognitive empathy, or neutral scenes which relied on the understanding of physical causalities. By contrasting the activations of the younger and older children, we expected to observe developmental changes in brain activations when children process stories eliciting empathy in either stimulus modality, towards a greater involvement of anterior frontal brain regions. Our results indicate that children's processing of stories eliciting affective and cognitive empathy is associated with medial and bilateral orbitofrontal cortex (OFC) activation. In contrast to what is known from studies using adult participants, no additional recruitment of posterior brain regions, often associated with the processing of stories eliciting empathy, was observed. Developmental changes were found only for stories eliciting affective empathy, with increased activation, in older children, in medial OFC, left inferior frontal gyrus (IFG), and the left dorsolateral prefrontal cortex (dlPFC). Activations for the two modalities differed only little, with non-verbal presentation of the stimuli having a greater impact on empathy processing in children, showing more similarities to adult processing than the verbal one. This might be caused by the fact that non-verbal processing develops earlier in life

  14. CARDIOVASCULAR RESPONSE TO VERBAL COMMUNICATION A STUDY IN BUSINESS PROCESS OUTSOURCING EMPLOYEES

    Directory of Open Access Journals (Sweden)

    Divya .P

    2015-04-01

    Full Text Available Background: Cardiovascular changes to daily activity and stressors have been proposed as a mechanism promoting the progression of atherosclerosis and coronary heart disease. Hence, the purpose of the study was to assess cardiovascular parameters such as heart rate, blood pressure and rating of perceived exertion in response to verbal communication in business process outsourcing (BPO) employees. Method: A cross-sectional survey design selected 150 healthy subjects between 25 and 35 years of age from the BPO industry, Bangalore. Subjects who fulfilled the inclusion criteria were included in the study. Heart rate and blood pressure were recorded before and after the shift. The Borg rating of perceived exertion scale was also administered to find the difference in the amount of exertion felt by subjects before and after the shift. Results: Before the shift, the mean heart rate was 81.76 beats per minute, the mean systolic blood pressure was 117.82, the mean diastolic blood pressure was 80.69 and the mean rating of perceived exertion was 7.19. After the shift, the mean heart rate was 83.02 beats per minute, the mean systolic blood pressure was 120.32, the mean diastolic blood pressure was 83.26 and the mean rating of perceived exertion was 10.65. When analysed using a paired t-test, there was a statistically significant difference between the before- and after-shift means of heart rate, blood pressure and rating of perceived exertion. Conclusion: It was concluded that in BPO employees, in response to their verbal communication, there was a significant increase in cardiovascular responses including heart rate, systolic blood pressure and diastolic blood pressure. There was also a significant increase in the Borg rating of perceived exertion from before to after the shift.

  15. Language differences in verbal short-term memory do not exclusively originate in the process of subvocal rehearsal.

    Science.gov (United States)

    Thorn, A S; Gathercole, S E

    2001-06-01

    Language differences in verbal short-term memory were investigated in two experiments. In Experiment 1, bilinguals with high competence in English and French and monolingual English adults with extremely limited knowledge of French were assessed on their serial recall of words and nonwords in both languages. In all cases recall accuracy was superior in the language with which individuals were most familiar, a first-language advantage that remained when variation due to differential rates of articulation in the two languages was taken into account. In Experiment 2, bilinguals recalled lists of English and French words with and without concurrent articulatory suppression. First-language superiority persisted under suppression, suggesting that the language differences in recall accuracy were not attributable to slower rates of subvocal rehearsal in the less familiar language. The findings indicate that language-specific differences in verbal short-term memory do not exclusively originate in the subvocal rehearsal process. It is suggested that one source of language-specific variation might relate to the use of long-term knowledge to support short-term memory performance.

  16. The Impact of Discrepant Verbal-Nonverbal Messages in the Teacher-Student Interaction.

    Science.gov (United States)

    Karr-Kidwell, PJ

    Noting that teachers' nonverbal behaviors are frequently inconsistent with their verbal messages, a situation that detracts from student learning, this paper offers an activity for focusing prospective teachers' attentions on the frequency and impact of discrepant verbal-nonverbal messages occurring in the classroom. The step-by-step process is…

  17. [Deficit of verbal recall caused by left dorso-lateral thalamic infarction].

    Science.gov (United States)

    Rousseaux, M; Cabaret, M; Benaim, C; Steinling, M

    1995-01-01

    A case of amnesia with preferential disorder of verbal recall, associated to a limited infarct of the left superior, external and anterior thalamus, is reported. This lesion involved the anterior and middle dorso-lateral nuclei and the centrolateral nucleus, sparing most of the structures classically incriminated in diencephalic amnesia. At the initial stage, the patient presented discrete language impairment and severe deficit of semantic processing, which later recovered. At the late stage, the anterograde and retrograde amnesia principally concerned the recall of verbal information used in daily life, verbal learning using short-term and long-term recall, questionnaires evaluating retrograde memory and requiring the evocation of proper names. Verbal priming was also affected. Verbal recognition was preserved. Evocation of the most recent events of the personal life was also impaired. Confrontation of this case with others previously reported suggests that various thalamic amnesias may be described, associated to different cognitive deficits, in relation with the preferential situation of lesions.

  18. The Role of Verbal and Visual Text in the Process of Institutionalization

    DEFF Research Database (Denmark)

    Jancsary, Dennis; Meyer, Renate; Höllerer, Markus A.

    2017-01-01

    In this article, we develop novel theory on the differentiated impact of verbal and visual texts on the emergence, rise, establishment, and consolidation of institutions. Integrating key insights from social semiotics into a discursive model of institutionalization, we identify distinct affordanc...

  19. Comunicação não-verbal: uma contribuição para o aconselhamento em amamentação Comunicación no verbal: una contribución para la consejería en lactancia materna Non verbal communication: a contribution to breastfeeding counseling

    Directory of Open Access Journals (Sweden)

    Adriana Moraes Leite

    2004-04-01

    The "Course on Breastfeeding Counseling", elaborated and implemented by the United Nations Children's Fund (UNICEF) in partnership with the World Health Organization (WHO), represents one of the most important initiatives towards the valorization of women as breastfeeding agents. With a view to understanding and facilitating the application of the nonverbal communication skills this course intends to develop among professionals, this study aims to organize the theoretical frameworks that support the teaching of Listening and Learning Skills 1, "Use of non-verbal communication", considering the concepts of human communication found in the work of different authors. The authors found that the skills taught in the course are centered on techniques directed only at the professionals' attitudes. However, it is essential to pay attention to women's nonverbal signs, as these reflect their emotions. Such signs can indicate the difficulties women are facing and their interpretations of the interactive elements in their context, and they often indicate the direction women will give to the breastfeeding process.

  20. Cognitive flexibility in verbal and nonverbal domains and decision making in anorexia nervosa patients: a pilot study

    Directory of Open Access Journals (Sweden)

    Marzola Enrica

    2011-10-01

    Full Text Available Abstract Background This paper aimed to investigate cognitive rigidity and decision-making impairments in patients diagnosed with Anorexia Nervosa Restrictive type (AN-R), also assessing verbal components. Methods Thirty patients with AN-R were compared with thirty age-matched healthy controls (HC). All participants completed a comprehensive neuropsychological battery comprising the Trail Making Test, Wisconsin Card Sorting Test, Hayling Sentence Completion Task, and the Iowa Gambling Task. The Beck Depression Inventory was administered to evaluate depressive symptomatology. The influence of both illness duration and neuropsychological variables was considered. Body Mass Index (BMI), years of education, and depression severity were considered as covariates in statistical analyses. Results The AN-R group showed poorer performance on all neuropsychological tests. There was a positive correlation between illness duration and the Hayling Sentence Completion Task Net score, and the number of completion answers in part B. There was a partial effect of years of education and BMI on neuropsychological test performance. Response inhibition processes and verbal fluency impairment were not associated with BMI and years of education, but were associated with depression severity. Conclusions These data provide evidence that patients with AN-R have cognitive rigidity in both verbal and non-verbal domains. The role of the impairment in verbal domains should be considered in treatment. Further research is warranted to better understand the relationship between illness state and cognitive rigidity and impaired decision-making.

  1. Verbal Aggressiveness Among Physicians and Trainees.

    Science.gov (United States)

    Lazarus, Jenny Lynn; Hosseini, Motahar; Kamangar, Farin; Levien, David H; Rowland, Pamela A; Kowdley, Gopal C; Cunningham, Steven C

    2016-01-01

    To better understand verbal aggressiveness among physicians and trainees, including specialty-specific differences. The Infante Verbal Aggressiveness Scale (IVAS) was administered as part of a survey to 48 medical students, 24 residents, and 257 attending physicians. The 72 trainees received the IVAS and demographic questions, whereas the attending physicians received additional questions regarding type of practice, career satisfaction, litigation, and personality type. The IVAS scores showed high reliability (Cronbach α = 0.83). Among all trainees, 56% were female with mean age 28 years, whereas among attending physicians, 63% were male with mean age 50 years. Average scores of trainees were higher than attending physicians with corresponding averages of 1.88 and 1.68, respectively. Among trainees, higher IVAS scores were significantly associated with male sex, non-US birthplace, choice of surgery, and a history of bullying. Among attending physicians, higher IVAS scores were significantly associated with male sex, younger age, self-reported low-quality of patient-physician relationships, and low enjoyment talking to patients. General surgery and general internal medicine physicians were significantly associated with higher IVAS scores than other specialties. General practitioners (surgeons and medical physicians) had higher IVAS scores than the specialists in their corresponding fields. No significant correlation was found between IVAS scores and threats of legal action against attending physicians, or most personality traits. Additional findings regarding bullying in medical school, physician-patient interactions, and having a method to deal with inappropriate behavior at work were observed. Individuals choosing general specialties display more aggressive verbal communication styles, general surgeons displaying the highest. The IVAS scoring system may identify subgroups of physicians with overly aggressive (problematic) communication skills and may provide a

  2. Sex-specific asymmetries in communication sound perception are not related to hand preference in an early primate

    Directory of Open Access Journals (Sweden)

    Scheumann Marina

    2008-01-01

    Full Text Available Abstract Background Left hemispheric dominance of language processing and handedness, previously thought to be unique to humans, is currently under debate. To gain an insight into the origin of lateralization in primates, we have studied gray mouse lemurs, suggested to represent the most ancestral primate condition. We explored potential functional asymmetries on the behavioral level by applying a combined handedness and auditory perception task. For testing handedness, we used a forced food-grasping task. For testing auditory perception, we adapted the head turn paradigm, originally established for exploring hemispheric specializations in conspecific sound processing in Old World monkeys, and exposed 38 subjects to control sounds and conspecific communication sounds of positive and negative emotional valence. Results The tested mouse lemur population did not show an asymmetry in hand preference or in orientation towards conspecific communication sounds. However, males, but not females, exhibited a significant right ear-left hemisphere bias when exposed to conspecific communication sounds of negative emotional valence. Orientation asymmetries were not related to hand preference. Conclusion Our results provide the first evidence for sex-specific asymmetries for conspecific communication sound perception in non-human primates. Furthermore, they suggest that hemispheric dominance for communication sound processing evolved before handedness and independently from each other.

  3. Effect of background music on auditory-verbal memory performance

    Directory of Open Access Journals (Sweden)

    Sona Matloubi

    2014-12-01

    Full Text Available Background and Aim: Music exists in all cultures; many scientists are seeking to understand how music affects cognitive development such as comprehension, memory, and reading skills. More recently, a considerable number of neuroscience studies on music have been developed. This study aimed to investigate the effects of null and positive background music, in comparison with silence, on auditory-verbal memory performance. Methods: Forty young adults (male and female) with normal hearing, aged between 18 and 26, participated in this comparative-analysis study. An auditory and speech evaluation was conducted in order to investigate the effects of background music on working memory. Subsequently, the Rey auditory-verbal learning test was performed for three conditions: silence, positive, and null music. Results: The mean score of the Rey auditory-verbal learning test in the silence condition was higher than in the positive music condition (p=0.003) and the null music condition (p=0.01). The test results did not reveal any gender differences. Conclusion: It seems that the presence of competing music (positive and null music) and the orientation of auditory attention have negative effects on the performance of verbal working memory, possibly owing to the interference of music with verbal information processing in the brain.

  4. Perspectiva funcional de los procesos verbales en los escritos estudiantiles de literatura e historia en español A perspectiva funcional dos processos verbais nos escritos estudiantis de história e literatura em espanhol Functional perspective of verbal processes in the writing in Spanish of students of Literature and History

    Directory of Open Access Journals (Sweden)

    Natalia Ignatieva

    2012-01-01

    Full Text Available The aim of this paper is to present a systemic functional analysis of verbal processes in student texts. This work forms part of an on-going research project developed at the National Autonomous University of Mexico. The analysis presented here is based on literature and history texts belonging to two genres: question-answer and essay. First, the group of verbal processes is defined and their frequency is determined; then the context of their use is explored and the participants in the verbal clauses are identified. Finally, the two areas and the two genres analyzed are compared on the basis of the characteristics of the verbal processes present in each of them.

  5. Cross-cultural analysis of the verbal conflict behavior of the graduate mining engineers

    Directory of Open Access Journals (Sweden)

    Pevneva Inna

    2017-01-01

    Full Text Available The article addresses the crucial issue of the interpersonal communication skills of engineering graduates and studies, from a cross-cultural perspective, the verbal behavior of graduates majoring in mining engineering during conflict in professional communication. The research is based on future mining engineers' need to conduct successful communication, work in teams and sustain effective discourse both verbally and in writing. Verbal communication involves a strategic process by which a speaker selects the language resources for its implementation. By choosing a strategy that serves the goals and objectives of the interaction, a speaker makes the process of communication either successful or prone to communicative failure. The scientific importance of this work lies in its multidisciplinary approach and its cross-cultural study of ethnic and cultural influences, gender and other characteristics of the verbal behavior of Russian and American engineering graduates.

  6. Analysis of adventitious lung sounds originating from pulmonary tuberculosis.

    Science.gov (United States)

    Becker, K W; Scheffer, C; Blanckenberg, M M; Diacon, A H

    2013-01-01

    Tuberculosis is a common and potentially deadly infectious disease, usually affecting the respiratory system and causing the sound properties of symptomatic infected lungs to differ from those of non-infected lungs. Auscultation is often ruled out as a reliable diagnostic technique for TB due to the random distribution of the infection and the varying severity of damage to the lungs. However, advancements in signal processing techniques for respiratory sounds can improve the potential of auscultation far beyond the capabilities of the conventional mechanical stethoscope. Though computer-based signal analysis of respiratory sounds has produced a significant body of research, there have not been any recent investigations into the computer-aided analysis of lung sounds associated with pulmonary tuberculosis (TB), despite the severity of the disease in many countries. In this paper, respiratory sounds were recorded from 14 locations around the posterior and anterior chest walls of healthy volunteers and patients infected with pulmonary TB. The most significant signal features in both the time and frequency domains associated with the presence of TB were identified by using the statistical overlap factor (SOF). These features were then employed to train a neural network to automatically classify the auscultation recordings into their respective healthy or TB-origin categories. The neural network yielded a diagnostic accuracy of 73%, but it is believed that automated filtering of the noise in the clinics, more training samples and perhaps other signal processing methods can improve the results of future studies. This work demonstrates the potential of computer-aided auscultation as an aid for the diagnosis and treatment of TB.
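    The general shape of such a pipeline (hand-crafted time/frequency features feeding a small neural network) can be sketched as follows. The feature choices, network size, sampling rate, and synthetic data are illustrative assumptions, not the authors' SOF-based feature selection or their trained model:

      # Illustrative lung-sound classification sketch: simple features + a small MLP.
      import numpy as np
      from scipy.signal import welch
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      def segment_features(segment, fs=4000):
          freqs, psd = welch(segment, fs=fs, nperseg=512)
          centroid = np.sum(freqs * psd) / np.sum(psd)          # spectral centroid
          rms = np.sqrt(np.mean(segment ** 2))                  # overall energy
          zcr = np.mean(np.abs(np.diff(np.sign(segment))) > 0)  # zero-crossing rate
          return [centroid, rms, zcr]

      # Placeholder data: rows = recordings, labels 0 = healthy, 1 = TB (synthetic here).
      rng = np.random.default_rng(1)
      segments = rng.standard_normal((40, 8000))
      labels = rng.integers(0, 2, size=40)

      X = np.array([segment_features(s) for s in segments])
      X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))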

  7. Seizure-related factors and non-verbal intelligence in children with epilepsy. A population-based study from Western Norway.

    Science.gov (United States)

    Høie, B; Mykletun, A; Sommerfelt, K; Bjørnaes, H; Skeidsvoll, H; Waaler, P E

    2005-06-01

    To study the relationship between seizure-related factors, non-verbal intelligence, and socio-economic status (SES) in a population-based sample of children with epilepsy. The latest ILAE International classifications of epileptic seizures and syndromes were used to classify seizure types and epileptic syndromes in all 6-12-year-old children (N=198) with epilepsy in Hordaland County, Norway. The children had neuropediatric and EEG examinations. Of the 198 patients, demographic characteristics were collected on 183 who participated in psychological studies including Raven matrices. 126 healthy controls underwent the same testing. Severe non-verbal problems (SNVP) were defined by a Raven score at or below a low percentile cut-off; patients were over-represented in the lowest Raven percentile group, whereas controls were highly over-represented in the higher percentile groups. SNVP were present in 43% of children with epilepsy and 3% of controls. These problems were especially common in children with remote symptomatic epilepsy aetiology, undetermined epilepsy syndromes, myoclonic seizures, early seizure debut, high seizure frequency and in children with polytherapy. Seizure-related characteristics that were not usually associated with SNVP were idiopathic epilepsies, localization related (LR) cryptogenic epilepsies, absence and simple partial seizures, and a late debut of epilepsy. Adjusting for socio-economic status factors did not significantly change results. In childhood epilepsy various seizure-related factors, but not SES factors, were associated with the presence or absence of SNVP. Such deficits may be especially common in children with remote symptomatic epilepsy aetiology and in complex and therapy-resistant epilepsies. Low frequencies of SNVP may be found in children with idiopathic and LR cryptogenic epilepsy syndromes, simple partial or absence seizures and a late epilepsy debut. Our study contributes to an overall picture of cognitive function and its relation to central seizure characteristics in a childhood epilepsy population.

  8. Examining the direct and indirect effects of visual-verbal paired associate learning on Chinese word reading.

    Science.gov (United States)

    Georgiou, George; Liu, Cuina; Xu, Shiyang

    2017-08-01

    Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age = 58.99 months, SD = 3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.
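    The direct and indirect effects reported above come from a path model. A minimal product-of-coefficients sketch of that logic (PAL to phonological awareness to reading), with synthetic placeholder data rather than the study's measures:

      # Simple mediation sketch: indirect effect = a * b, direct effect = c'.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      pal = rng.standard_normal(141)                       # visual-verbal PAL score
      phon = 0.5 * pal + rng.standard_normal(141)          # phonological awareness
      reading = 0.4 * pal + 0.3 * phon + rng.standard_normal(141)

      # Path a: PAL -> phonological awareness
      a = sm.OLS(phon, sm.add_constant(pal)).fit().params[1]
      # Regression of reading on PAL (c') and phonological awareness (b)
      mb = sm.OLS(reading, sm.add_constant(np.column_stack([pal, phon]))).fit()
      c_prime, b = mb.params[1], mb.params[2]

      print(f"direct effect c' = {c_prime:.2f}, indirect effect a*b = {a * b:.2f}")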

  9. (Re)Constructing the Wicked Problem Through the Visual and the Verbal

    DEFF Research Database (Denmark)

    Holm Jacobsen, Peter; Harty, Chris; Tryggestad, Kjell

    2016-01-01

    Wicked problems are open ended and complex societal problems. There is a lack of empirical research into the dynamics and mechanisms that (re) construct problems to become wicked. This paper builds on an ethnographic study of a dialogue-based architect competition to do just that. The competition...... processes creates new knowledge and insights, but at the same time present new problems related to the ongoing verbal feedback. The design problem being (re) constructed appears as Heracles' fight with Hydra: Every time Heracles cut of a head, two new heads grow back. The paper contributes to understanding...... the relationship between the visual and the verbal (dialogue) in complex design processes in the early phases of large construction projects, and how the dynamic interplay between the design visualization and verbal dialogue develops before the competition produces, or negotiates, “a "winning design”....

  10. The effects of hand gestures on verbal recall as a function of high- and low-verbal-skill levels.

    Science.gov (United States)

    Frick-Horbury, Donna

    2002-04-01

    The author examined the effects of cueing for verbal recall with the accompanying self-generated hand gestures as a function of verbal skill. There were 36 participants, half with low SAT verbal scores and half with high SAT verbal scores. Half of the participants of each verbal-skill level were cued for recall with their own gestures, and the remaining half was given a free-recall test. Cueing with self-generated gestures aided the low-verbal-skill participants so that their retrieval rate equaled that of the high-verbal-skill participants and their loss of recall over a 2-week period was minimal. This effect was stable for both concrete and abstract words. The findings support the hypothesis that gestures serve as an auxiliary code for memory retrieval.

  11. Does caffeine modulate verbal working memory processes? An fMRI study.

    Science.gov (United States)

    Koppelstaetter, F; Poeppel, T D; Siedentopf, C M; Ischebeck, A; Verius, M; Haala, I; Mottaghy, F M; Rhomberg, P; Golaszewski, S; Gotwald, T; Lorenz, I H; Kolbitsch, C; Felber, S; Krause, B J

    2008-01-01

    To assess the effect of caffeine on the functional MRI signal during a 2-back verbal working memory task, we examined blood oxygenation level-dependent regional brain activity in 15 healthy right-handed males. The subjects, all moderate caffeine consumers, underwent two scanning sessions on a 1.5-T MR-Scanner separated by a 24- to 48-h interval. Each participant received either placebo or 100 mg caffeine 20 min prior to the performance of the working memory task in blinded crossover fashion. The study was implemented as a blocked-design. Analysis was performed using SPM2. In both conditions, the characteristic working memory network of frontoparietal cortical activation including the precuneus and the anterior cingulate could be shown. In comparison to placebo, caffeine caused an increased response in the bilateral medial frontopolar cortex (BA 10), extending to the right anterior cingulate cortex (BA 32). These results suggest that caffeine modulates neuronal activity as evidenced by fMRI signal changes in a network of brain areas associated with executive and attentional functions during working memory processes.

  12. Auditory-Motor Mapping Training in a More Verbal Child with Autism

    Directory of Open Access Journals (Sweden)

    Karen V. Chenausky

    2017-09-01

    Full Text Available We tested the effect of Auditory-Motor Mapping Training (AMMT), a novel, intonation-based treatment for spoken language originally developed for minimally verbal (MV) children with autism, on a more-verbal child with autism. We compared this child's performance after 25 therapy sessions with that of: (1) a child matched on age, autism severity, and expressive language level who received 25 sessions of a non-intonation-based control treatment, Speech Repetition Therapy (SRT); and (2) a matched pair of MV children (one of whom received AMMT; the other, SRT). We found a significant Time × Treatment effect in favor of AMMT for number of Syllables Correct and Consonants Correct per stimulus for both pairs of children, as well as a significant Time × Treatment effect in favor of AMMT for number of Vowels Correct per stimulus for the more-verbal pair. Magnitudes of the difference in post-treatment performance between AMMT and SRT, adjusted for Baseline differences, were: (a) larger for the more-verbal pair than for the MV pair; and (b) associated with very large effect sizes (Cohen's d > 1.3) in the more-verbal pair. Results hold promise for the efficacy of AMMT for improving spoken language production in more-verbal children with autism as well as their MV peers and suggest hypotheses about brain function that are testable in both correlational and causal behavioral-imaging studies.

  13. Emotional Verbalization and Identification of Facial Expressions in Teenagers’ Communication

    Directory of Open Access Journals (Sweden)

    I. S. Ivanova

    2013-01-01

    The paper emphasizes the need to study the subjective effectiveness criteria of interpersonal communication and the importance of effective communication for personality development in adolescence. The problem of the underdeveloped expression of positive emotions in the communication process is discussed. Both the identification and the verbalization of emotions are regarded by the author as basic communication skills. Experimental data on longitudinal and age-level differences are described, and gender differences in the identification and verbalization of emotions are considered. The outcomes of the experimental study demonstrate that the accuracy with which teenage boys and girls identify facial emotional expressions changes at different rates. The prospects of defining age norms for the identification and verbalization of emotions are analyzed.

  14. Pre-attentive processing of spectrally complex sounds with asynchronous onsets: an event-related potential study with human subjects.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Näätänen, R

    1997-05-23

    Neuronal mechanisms involved in the processing of complex sounds with asynchronous onsets were studied in reading subjects. The sound onset asynchrony (SOA) between the leading partial and the remaining complex tone was varied between 0 and 360 ms. Infrequently occurring deviant sounds (in which one out of 10 harmonics differed in pitch from the frequently occurring standard sound) elicited the mismatch negativity (MMN), a change-specific cortical event-related potential (ERP) component. This indicates that the pitch of the standard stimuli had been pre-attentively coded by sensory-memory traces. Moreover, when the complex-tone onset fell within the temporal integration window initiated by the leading-partial onset, the deviants also elicited the N2b component, indicating that an involuntary attention switch towards the sound change occurred. In summary, the present results support the existence of a pre-perceptual integration mechanism of 100-200 ms duration and emphasize its importance in switching attention towards a stimulus change.
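
    The MMN is conventionally quantified as a difference wave: the averaged ERP to standard sounds is subtracted from the averaged ERP to deviant sounds. A minimal NumPy sketch of that computation (the array shapes, electrode choice, and latency window mentioned in the comments are illustrative assumptions, not details of this study's analysis):

        import numpy as np

        def mmn_difference_wave(standard_epochs, deviant_epochs):
            # standard_epochs, deviant_epochs: arrays of shape (n_trials, n_samples),
            # baseline-corrected EEG voltages from one electrode (e.g. Fz).
            # Returns the deviant-minus-standard ERP; the MMN appears as a negative
            # deflection roughly 100-250 ms after change onset.
            erp_standard = standard_epochs.mean(axis=0)
            erp_deviant = deviant_epochs.mean(axis=0)
            return erp_deviant - erp_standard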

  15. Verbal abuse from nurse colleagues and work environment of early career registered nurses.

    Science.gov (United States)

    Budin, Wendy C; Brewer, Carol S; Chao, Ying-Yu; Kovner, Christine

    2013-09-01

    This study examined relationships between verbal abuse from nurse colleagues and demographic characteristics, work attributes, and work attitudes of early career registered nurses (RNs). Data are from the fourth wave of a national panel survey of early career RNs begun in 2006. The final analytic sample included 1,407 RNs. Descriptive statistics were used to describe the sample, analysis of variance to compare means, and chi square to compare categorical variables. RNs reporting higher levels of verbal abuse from nurse colleagues were more likely to be unmarried, work in a hospital setting, or work in a non-magnet hospital. They also had lower job satisfaction, and less organizational commitment, autonomy, and intent to stay. Lastly, they perceived their work environments unfavorably. Data support the hypothesis that early career RNs are vulnerable to the effects of verbal abuse from nurse colleagues. Although more verbal abuse is seen in environments with unfavorable working conditions, and RNs working in such environments tend to have less favorable work attitudes, one cannot assume causality. It is unclear if poor working conditions create an environment where verbal abuse is tolerated or if verbal abuse creates an unfavorable work environment. There is a need to develop and test evidence-based interventions to deal with the problems inherent with verbal abuse from nurse colleagues. © 2013 Sigma Theta Tau International.

  16. Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing.

    Science.gov (United States)

    Lyu, Bingjiang; Ge, Jianqiao; Niu, Zhendong; Tan, Li Hai; Gao, Jia-Hong

    2016-10-19

    Spoken language comprehension relies not only on the identification of individual words, but also on the expectations arising from contextual information. A distributed frontotemporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially in terms of individual words, remains unclear. Using fMRI, we addressed this question in the framework of the dual-stream model by scanning native speakers of Mandarin Chinese, a language highly dependent on context. We found that, within the ventral pathway, the violated expectations elicited stronger activations in the left anterior superior temporal gyrus and the ventral inferior frontal gyrus (IFG) for the phonological-semantic prediction of spoken words. Functional connectivity analysis showed that expectations were mediated by both top-down modulation from the left ventral IFG to the anterior temporal regions and enhanced cross-stream integration through strengthened connections between different subregions of the left IFG. By further investigating the dynamic causality within the dual-stream model, we elucidated how the human brain accomplishes sound-to-meaning mapping for words in a predictive manner. In daily communication via spoken language, one of the core processes is understanding the words being used. Effortless and efficient information exchange via speech relies not only on the identification of individual spoken words, but also on the contextual information giving rise to expected meanings. Despite the accumulating evidence for the bottom-up perception of auditory input, it is still not fully understood how the top-down modulation is achieved in the extensive frontotemporal cortical network. Here, we provide a comprehensive description of the neural substrates underlying sound-to-meaning mapping and demonstrate how the dual-stream model functions in the modulation of

  17. Diffuse sound field: challenges and misconceptions

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho

    2016-01-01

    The diffuse sound field is a popular, yet widely misused concept. Although its definition is relatively well established, acousticians use the term with different meanings. The diffuse sound field is defined by a uniform sound pressure distribution (spatial diffusion or homogeneity) and uniform … tremendously in different chambers because the chambers are non-diffuse in variously different ways. Therefore, good objective measures that can quantify the degree of diffusion, and potentially indicate how to fix such problems in reverberation chambers, are needed. Acousticians often blend the concept … of mixing and diffuse sound field. Acousticians often relate diffuse reflections from surfaces to diffuseness in rooms, and vice versa. Subjective aspects of diffuseness have not been much investigated. Finally, ways to realize a diffuse sound field in a finite space are discussed …
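
    One naive way to probe the "uniform sound pressure distribution" part of the definition is to measure the sound pressure level at several positions in the room and look at its spatial spread. A minimal sketch under that assumption (the microphone data and the helper name are illustrative; this is not a standardized diffuseness metric):

        import numpy as np

        def spl_spatial_spread(pressure_rms, p_ref=20e-6):
            # pressure_rms: RMS sound pressures in Pa, one value per microphone position.
            # Returns (mean SPL, spatial standard deviation of SPL) in dB.
            # A small spatial spread is consistent with (but does not prove) a diffuse
            # field; a large spread indicates a clearly non-diffuse field.
            spl = 20.0 * np.log10(np.asarray(pressure_rms) / p_ref)
            return spl.mean(), spl.std()

        # Illustrative measurements at five positions in a reverberation chamber (Pa, RMS).
        mean_spl, spread = spl_spatial_spread([0.11, 0.10, 0.12, 0.11, 0.10])
        print(f"mean SPL {mean_spl:.1f} dB, spatial spread {spread:.2f} dB")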

  18. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the process of acquiring a heart sound signal can be disturbed by many external factors. Heart sound is a weak electrical signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The noisy heart sound signals are first transformed into the wavelet domain with MATLAB's wavelet transform functions and decomposed at multiple levels. Soft thresholding is then applied to the detail coefficients using wavelet thresholding to eliminate noise, so that signal denoising is significantly improved. The denoised signal is reconstructed by stepwise coefficient reconstruction from the processed detail coefficients. Finally, the 50 Hz power-line interference and 35 Hz mechanical and electrical interference are removed with a notch filter.
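
    The pipeline described here (multi-level wavelet decomposition, soft thresholding of the detail coefficients, reconstruction, then notch filtering of the power-line interference) can be sketched outside MATLAB as well. A minimal Python version using PyWavelets and SciPy; the wavelet choice, the universal-threshold rule, and the notch Q factor are assumptions for illustration, not the paper's exact settings:

        import numpy as np
        import pywt
        from scipy.signal import iirnotch, filtfilt

        def denoise_heart_sound(x, fs, wavelet="db6", level=5):
            # x: 1-D heart sound signal, fs: sampling rate in Hz.
            coeffs = pywt.wavedec(x, wavelet, level=level)
            # Estimate the noise level from the finest detail coefficients.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            # Universal threshold sigma * sqrt(2 * log N) -- one common choice.
            thresh = sigma * np.sqrt(2.0 * np.log(len(x)))
            # Soft-threshold the detail coefficients; keep the approximation untouched.
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            y = pywt.waverec(coeffs, wavelet)[: len(x)]
            # Remove 50 Hz power-line interference with a narrow notch filter.
            b, a = iirnotch(50.0, Q=30.0, fs=fs)
            return filtfilt(b, a, y)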

  19. New contexts, new processes, new strategies: the co-construction of meaning in plurilingual interactions

    Directory of Open Access Journals (Sweden)

    Filomena Capucho

    2016-11-01

    In this paper, we present an analysis of an extract from the Bucharest-Cinco corpus that allows us to identify, through a close examination of verbal and non-verbal features, the strategies developed in the process of co-construction of meaning in multilingual contexts.

  20. BIBLIOGRAPHY ON VERBAL LEARNING.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Graduate School of Education.

    THIS BIBLIOGRAPHY LISTS MATERIAL ON VARIOUS ASPECTS OF VERBAL LEARNING. APPROXIMATELY 50 UNANNOTATED REFERENCES ARE PROVIDED TO DOCUMENTS DATING FROM 1960 TO 1965. JOURNALS, BOOKS, AND REPORT MATERIALS ARE LISTED. SUBJECT AREAS INCLUDED ARE CONDITIONING, VERBAL BEHAVIOR, PROBLEM SOLVING, SEMANTIC SATIATION, STIMULUS DURATION, AND VERBAL…