WorldWideScience

Sample records for audiovisual non-verbal dynamic

  1. Emotion Recognition as a Real Strength in Williams Syndrome: Evidence From a Dynamic Non-verbal Task

    Directory of Open Access Journals (Sweden)

    Laure Ibernon

    2018-04-01

    The hypersocial profile characterizing individuals with Williams syndrome (WS), and particularly their attraction to human faces and their desire to form relationships with other people, could favor the development of their emotion recognition capacities. This study seeks to better understand the development of emotion recognition capacities in WS. The ability to recognize six emotions was assessed in 15 participants with WS. Their performance was compared to that of 15 participants with Down syndrome (DS) and 15 typically developing (TD) children of the same non-verbal developmental age, as assessed with Raven’s Colored Progressive Matrices (RCPM; Raven et al., 1998). The analysis of the three groups’ results revealed that the participants with WS performed better than the participants with DS and also better than the TD children. Individuals with WS performed at a similar level to TD participants in terms of recognizing different types of emotions. The study of developmental trajectories confirmed that the participants with WS presented the same developmental profile as the TD participants. These results seem to indicate that the recognition of emotional facial expressions constitutes a real strength in people with WS.

  2. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  3. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Non-verbal Full Body Emotional and Social Interaction: A Case Study on Multimedia Systems for Active Music Listening

    Science.gov (United States)

    Camurri, Antonio

    Research on HCI and multimedia systems for art and entertainment based on non-verbal, full-body, emotional and social interaction is the main topic of this paper. A short review of previous research projects in this area at our centre is presented, to introduce the main issues discussed in the paper. In particular, a case study based on novel paradigms of social active music listening is presented. The active music listening experience enables users to dynamically mould expressive performance of music and of audiovisual content. This research is partially supported by the EU FP7 ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way; www.sameproject.eu).

  5. [Non-verbal communication in Alzheimer's disease].

    Science.gov (United States)

    Schiaratura, Loris Tamara

    2008-09-01

    This review underlines the importance of non-verbal communication in Alzheimer's disease. A social psychological perspective on communication is adopted. Non-verbal behaviors such as gaze, head nods, hand gestures, body posture or facial expression provide a great deal of information about interpersonal attitudes, behavioral intentions and emotional experiences. They therefore play an important role in the regulation of interaction between individuals. Non-verbal communication remains effective in Alzheimer's disease, even in the late stages: patients still produce non-verbal signals and are responsive to others. Nevertheless, few studies have been devoted to the social factors influencing the non-verbal exchange. Misidentification and misinterpretation of behaviors may have negative consequences for the patients. Thus, improving the comprehension of and the response to non-verbal behavior would increase first the quality of the interaction, then the physical and psychological well-being of patients and that of their caregivers. The role of non-verbal behavior in social interactions should be approached from an integrative and functional point of view.

  6. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used the human electroencephalogram (EEG) to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information, as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions; for angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific, and indicate that dynamic emotional expressions are preferentially processed over non-emotional expressions in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short-duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response that also involves inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  8. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
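
    The capacity measure cited above (Townsend and Nozawa, 1995) compares the integrated hazard of audiovisual response times against the sum of the unisensory hazards, C(t) = H_AV(t) / (H_A(t) + H_V(t)), with C(t) > 1 indicating efficient (super-capacity) integration. The Python sketch below shows one standard way to estimate it with the Nelson-Aalen estimator; the simulated gamma-distributed response times and all parameter values are illustrative assumptions, not the study's data.

      import numpy as np

      def integrated_hazard(rts, grid):
          # Nelson-Aalen estimate of the integrated hazard H(t) from a sample of RTs.
          rts = np.sort(np.asarray(rts, dtype=float))
          n = len(rts)
          increments = 1.0 / (n - np.arange(n))  # 1 / (number still at risk) at each RT
          return np.array([increments[rts <= t].sum() for t in grid])

      def capacity(rt_av, rt_a, rt_v, grid):
          # Townsend & Nozawa's C(t) = H_AV(t) / (H_A(t) + H_V(t)).
          num = integrated_hazard(rt_av, grid)
          denom = integrated_hazard(rt_a, grid) + integrated_hazard(rt_v, grid)
          return np.divide(num, denom, out=np.full_like(denom, np.nan), where=denom > 0)

      # Hypothetical RT samples in seconds; audiovisual responses simulated as fastest.
      rng = np.random.default_rng(1)
      rt_av = 0.25 + rng.gamma(4.0, 0.05, size=300)
      rt_a = 0.25 + rng.gamma(4.0, 0.07, size=300)
      rt_v = 0.25 + rng.gamma(4.0, 0.09, size=300)
      print(np.round(capacity(rt_av, rt_a, rt_v, np.linspace(0.35, 1.0, 14)), 2))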

  9. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
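
    The points of subjective simultaneity (PSS) and windows of temporal integration reported above are typically derived by fitting a synchrony-judgment curve over stimulus onset asynchronies (SOAs). A minimal Python sketch of that fitting step follows; the SOA grid and response proportions are hypothetical illustration data, not values from this study.

      import numpy as np
      from scipy.optimize import curve_fit

      def synchrony_curve(soa, amplitude, pss, sigma):
          # Gaussian model: the peak location is the PSS; the width (sigma)
          # indexes the temporal integration window.
          return amplitude * np.exp(-((soa - pss) ** 2) / (2.0 * sigma ** 2))

      # Hypothetical data: SOA in ms (negative = audio leads) and the proportion
      # of "synchronous" judgments at each SOA for one participant.
      soa = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
      p_sync = np.array([0.15, 0.45, 0.80, 0.95, 0.90, 0.60, 0.25])

      (amplitude, pss, sigma), _ = curve_fit(synchrony_curve, soa, p_sync,
                                             p0=[1.0, 20.0, 150.0])
      print(f"PSS = {pss:.1f} ms, integration window (sigma) = {sigma:.1f} ms")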

  10. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    Science.gov (United States)

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  11. A Meta-study of musicians' non-verbal interaction

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer; Marchetti, Emanuela

    2010-01-01

    interruptions. Hence, despite the fact that the skill to engage in a non-verbal interaction is described as tacit knowledge, it is fundamental for both musicians and teachers (Davidson and Good 2002). Typical observed non-verbal cues are for example: physical gestures, modulations of sound, steady eye contact...

  12. Non-Verbal Communication in Children with Visual Impairment

    Science.gov (United States)

    Mallineni, Sharmila; Nutheti, Rishita; Thangadurai, Shanimole; Thangadurai, Puspha

    2006-01-01

    The aim of this study was to determine: (a) whether children with visual and additional impairments show any non-verbal behaviors, and if so what were the common behaviors; (b) whether two rehabilitation professionals interpreted the non-verbal behaviors similarly; and (c) whether a speech pathologist and a rehabilitation professional interpreted…

  13. Guidelines for Teaching Non-Verbal Communications Through Visual Media

    Science.gov (United States)

    Kundu, Mahima Ranjan

    1976-01-01

    There is a natural unique relationship between non-verbal communication and visual media such as television and film. Visual media will have to be used extensively--almost exclusively--in teaching non-verbal communications, as well as other methods requiring special teaching skills. (Author/ER)

  14. Non-verbal communication barriers when dealing with Saudi sellers

    Directory of Open Access Journals (Sweden)

    Yosra Missaoui

    2015-12-01

    Communication has a major impact on how customers perceive sellers and their organizations. In particular, non-verbal communication such as body language, appearance, facial expressions, gestures, proximity, posture and eye contact can positively or negatively influence customers' first impressions and their experiences in stores. Salespeople in many countries, especially developing ones, merely talk about their companies' products because they are unaware of the real role of sellers and the importance of non-verbal communication. In Saudi Arabia, the seller profession was reserved exclusively for foreign labor until 2006, and only recently has the Saudi workforce entered the retail sector as sellers. The non-verbal communication of those sellers has never been evaluated from the consumer's point of view. Therefore, the aim of this paper is to explore the non-verbal communication barriers that customers face when dealing with Saudi sellers. After discussing the non-verbal communication skills that sellers must have, in the light of previous academic research and in-depth interviews with seven focus groups of Saudi customers, this study found that Saudi customers were not totally satisfied with the current non-verbal communication skills of Saudi sellers. It is therefore strongly recommended to develop the non-verbal communication skills of Saudi sellers through intensive training, to make sellers' appearance more distinctive, especially that of female sellers, and to focus on the timing of intervention as well as proximity to customers.

  15. From SOLER to SURETY for effective non-verbal communication.

    Science.gov (United States)

    Stickley, Theodore

    2011-11-01

    This paper critiques the model for non-verbal communication referred to as SOLER (which stands for: "Sit squarely"; "Open posture"; "Lean towards the other"; "Eye contact"; "Relax"). It has been approximately thirty years since Egan (1975) introduced his acronym SOLER as an aid for teaching and learning about non-verbal communication. There is evidence that the SOLER framework has been widely used in nurse education with little published critical appraisal. A new acronym that might be appropriate for non-verbal communication skills training and education is proposed: SURETY (which stands for "Sit at an angle"; "Uncross legs and arms"; "Relax"; "Eye contact"; "Touch"; "Your intuition"). The proposed model advances the SOLER model by including the use of touch and by emphasising the importance of individual intuition. The model encourages student nurse educators to also think about therapeutic space when they teach skills of non-verbal communication. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without the frame-independence assumption. Experimental results on Tibetan speech data from real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.
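
    The abstract's key architectural point is that a shared layer learns joint audio-visual features instead of keeping the two streams independent. The Python sketch below illustrates that idea with a deliberately simplified two-stream neural network; it is not the paper's Deep Dynamic Bayesian Network, and all dimensions and names are assumptions for illustration.

      import torch
      import torch.nn as nn

      class TwoStreamFusion(nn.Module):
          # Each modality is encoded separately; a shared layer then learns
          # joint cross-modal features used for word classification.
          def __init__(self, audio_dim=39, visual_dim=50, shared_dim=64, n_words=10):
              super().__init__()
              self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
              self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 128), nn.ReLU())
              self.shared = nn.Sequential(nn.Linear(256, shared_dim), nn.ReLU())
              self.classifier = nn.Linear(shared_dim, n_words)

          def forward(self, audio, visual):
              a = self.audio_enc(audio)    # per-frame acoustic features (e.g., MFCCs)
              v = self.visual_enc(visual)  # per-frame lip-region features
              joint = self.shared(torch.cat([a, v], dim=-1))  # shared representation
              return self.classifier(joint.mean(dim=1))  # pool over frames, classify

      model = TwoStreamFusion()
      audio = torch.randn(2, 80, 39)   # batch of 2 utterances, 80 frames each
      visual = torch.randn(2, 80, 50)
      print(model(audio, visual).shape)  # torch.Size([2, 10])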

  17. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies, and lipreading can then give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic and bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated in amplitude an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  18. Young Children's Understanding of Markedness in Non-Verbal Communication

    Science.gov (United States)

    Liebal, Kristin; Carpenter, Malinda; Tomasello, Michael

    2011-01-01

    Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions). We investigated whether two- and three-year-olds recognize when adults mark a non-verbal communicative act--in this case a pointing…

  19. Videotutoring, Non-Verbal Communication and Initial Teacher Training.

    Science.gov (United States)

    Nichol, Jon; Watson, Kate

    2000-01-01

    Describes the use of video tutoring for distance education within the context of a post-graduate teacher training course at the University of Exeter. Analysis of the tapes used a protocol based on non-verbal communication research, and findings suggest that the interaction of participants was significantly different from face-to-face…

  20. Language, Power, Multilingual and Non-Verbal Multicultural Communication

    NARCIS (Netherlands)

    Marácz, L.; Zhuravleva, E.A.

    2014-01-01

    Due to developments in internal migration and mobility there is a proliferation of linguistic diversity, multilingual and non-verbal multicultural communication. At the same time the recognition of the use of one’s first language receives more and more support in international political, legal and

  1. Non-verbal behaviour in nurse-elderly patient communication.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.

    1999-01-01

    This study explores the occurrence of non-verbal communication in nurse-elderly patient interaction in two different care settings: home nursing and a home for the elderly. In a sample of 181 nursing encounters involving 47 nurses, videotaped nurse-patient communication was studied. Six

  2. A comprehensive model of audiovisual perception: both percept and temporal dynamics.

    Directory of Open Access Journals (Sweden)

    Patricia Besson

    The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be solved by jointly processing the information, as well as by introducing constraints, during this process, on the way this multisensory information is handled. This process and its result--the percept--depend on the contextual conditions in which perception takes place. To date, perception has been investigated and modeled on the basis of either one of two of its dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both these dimensions and thus capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and the dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus-driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.
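
    For readers who want a concrete handle on this class of model: a widely used formulation of audiovisual percept inference is Bayesian causal inference over common versus separate sources (Körding et al., 2007). The Python sketch below implements that textbook formulation, not the authors' data-driven network; all noise parameters are illustrative assumptions.

      import numpy as np

      def gauss(x, mu, var):
          return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

      def p_common(x_a, x_v, sig_a=2.0, sig_v=10.0, sig_p=20.0, mu_p=0.0, prior_c=0.5):
          # Posterior probability that the auditory cue x_a and visual cue x_v
          # arise from one source (fusion) rather than two (non-fusion).
          va, vv, vp = sig_a ** 2, sig_v ** 2, sig_p ** 2
          denom = va * vv + va * vp + vv * vp
          like_one = np.exp(-0.5 * ((x_a - x_v) ** 2 * vp + (x_a - mu_p) ** 2 * vv
                                    + (x_v - mu_p) ** 2 * va) / denom) \
                     / (2 * np.pi * np.sqrt(denom))
          like_two = gauss(x_a, mu_p, va + vp) * gauss(x_v, mu_p, vv + vp)
          return like_one * prior_c / (like_one * prior_c + like_two * (1 - prior_c))

      # Nearby cues are attributed to a single source; distant cues are segregated.
      print(p_common(x_a=0.0, x_v=3.0))   # high posterior -> fusion percept
      print(p_common(x_a=0.0, x_v=40.0))  # low posterior -> non-fusion percept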

  3. Non-verbal numerical cognition: from reals to integers.

    Science.gov (United States)

    Gallistel; Gelman

    2000-02-01

    Data on numerical processing by verbal (human) and non-verbal (animal and human) subjects are integrated by the hypothesis that a non-verbal counting process represents discrete (countable) quantities by means of magnitudes with scalar variability. These appear to be identical to the magnitudes that represent continuous (uncountable) quantities such as duration. The magnitudes representing countable quantity are generated by a discrete incrementing process, which defines next magnitudes and yields a discrete ordering. In the case of continuous quantities, the continuous accumulation process does not define next magnitudes, so the ordering is also continuous ('dense'). The magnitudes representing both countable and uncountable quantity are arithmetically combined in, for example, the computation of the income to be expected from a foraging patch. Thus, on the hypothesis presented here, the primitive machinery for arithmetic processing works with real numbers (magnitudes).
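
    The core quantitative claim here, scalar variability, is that the noise in the magnitude representing a count grows in proportion to the count, so discriminability follows Weber's law: it depends on the ratio of the counts, not their difference. A short simulation makes this concrete; the Weber fraction of 0.15 is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)

      def mental_magnitude(n, weber=0.15, trials=100_000):
          # The magnitude representing count n is noisy, with a standard
          # deviation proportional to n (scalar variability).
          return rng.normal(loc=n, scale=weber * n, size=trials)

      # Same numerical difference (2), different ratios: confusability grows
      # as the ratio of the two counts approaches 1.
      for a, b in [(4, 6), (14, 16)]:
          p_confusion = np.mean(mental_magnitude(a) > mental_magnitude(b))
          print(f"P(magnitude for {a} > magnitude for {b}) = {p_confusion:.3f}")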

  4. Physical growth and non-verbal intelligence: Associations in Zambia

    Science.gov (United States)

    Hein, Sascha; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2014-01-01

    Objectives To investigate normative developmental BMI trajectories and associations of physical growth indicators (ie, height, weight, head circumference [HC], body mass index [BMI]) with non-verbal intelligence in an understudied population of children from Sub-Saharan Africa. Study design A sample of 3981 students (50.8% male), grades 3 to 7, with a mean age of 12.75 years was recruited from 34 rural Zambian schools. Children with low scores on vision and hearing screenings were excluded. Height, weight and HC were measured, and non-verbal intelligence was assessed using UNIT-symbolic memory and KABC-II-triangles. Results Results showed that students in higher grades have a higher BMI over and above the effect of age. Girls showed a marginally higher BMI, although that for both boys and girls was approximately 1 SD below the international CDC and WHO norms. Controlling for the effect of age, non-verbal intelligence showed small but significant positive relationships with HC (r = .17) and BMI (r = .11). HC and BMI accounted for 1.9% of the variance in non-verbal intelligence, over and above the contribution of grade and sex. Conclusions BMI-for-age growth curves of Zambian children follow observed worldwide developmental trajectories. The positive relationships between BMI and intelligence underscore the importance of providing adequate nutritional and physical growth opportunities for children worldwide and in sub-Saharan Africa in particular. Directions for future studies are discussed with regard to maximizing the cognitive potential of all rural African children. PMID:25217196

  5. Context, culture and (non-verbal) communication affect handover quality.

    Science.gov (United States)

    Frankel, Richard M; Flanagan, Mindy; Ebright, Patricia; Bergman, Alicia; O'Brien, Colleen M; Franks, Zamal; Allen, Andrew; Harris, Angela; Saleem, Jason J

    2012-12-01

    Transfers of care, also known as handovers, remain a substantial patient safety risk. Although research on handovers has been done since the 1980s, the science is incomplete. Surprisingly few interventions have been rigorously evaluated and, of those that have, few have resulted in long-term positive change. Researchers, both in medicine and other high reliability industries, agree that face-to-face handovers are the most reliable. It is not clear, however, what the term face-to-face means in actual practice. We studied the use of non-verbal behaviours, including gesture, posture, bodily orientation, facial expression, eye contact and physical distance, in the delivery of information during face-to-face handovers. To address this question and study the role of non-verbal behaviour on the quality and accuracy of handovers, we videotaped 52 nursing, medicine and surgery handovers covering 238 patients. Videotapes were analysed using immersion/crystallisation methods of qualitative data analysis. A team of six researchers met weekly for 18 months to view videos together using a consensus-building approach. Consensus was achieved on verbal, non-verbal, and physical themes and patterns observed in the data. We observed four patterns of non-verbal behaviour (NVB) during handovers: (1) joint focus of attention; (2) 'the poker hand'; (3) parallel play and (4) kerbside consultation. In terms of safety, joint focus of attention was deemed to have the best potential for high quality and reliability; however, it occurred infrequently, creating opportunities for education and improvement. Attention to patterns of NVB in face-to-face handovers coupled with education and practice can improve quality and reliability.

  6. Prosody Predicts Contest Outcome in Non-Verbal Dialogs.

    Science.gov (United States)

    Dreiss, Amélie N; Chatelain, Philippe G; Roulin, Alexandre; Richner, Heinz

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.

  7. Getting the Message Across; Non-Verbal Communication in the Classroom.

    Science.gov (United States)

    Levy, Jack

    This handbook presents selected theories, activities, and resources which can be utilized by educators in the area of non-verbal communication. Particular attention is given to the use of non-verbal communication in a cross-cultural context. Categories of non-verbal communication such as proxemics, haptics, kinesics, smiling, sound, clothing, and…

  8. Cross-cultural Differences of Stereotypes about Non-verbal Communication of Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2011-09-01

    The article deals with peculiarities of non-verbal communication as a factor in cross-cultural communication and the adaptation of representatives of different cultures. The possibility of studying ethnic stereotypes concerning non-verbal communication is considered. The results of empirical research on stereotypes about the non-verbal communication of Russian and Chinese students are presented.

  9. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before bistable decision? We argue that this is not the case, as the interference had slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability, especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  10. Anatomical Correlates of Non-Verbal Perception in Dementia Patients

    Directory of Open Access Journals (Sweden)

    Pin-Hsuan Lin

    2016-08-01

    Purpose: Patients with dementia who have dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis for highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer’s dementia (AD), 15 with behavioral variant fronto-temporal dementia (bv-FTD) and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptual tasks were included to test the ability to compare pitch changes and scale-violated melodies, and to name environmental sounds and associate them with pictures. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to compare groups and to correlate volumetric measures with task scores. Results: The SD group scored the lowest among the 3 groups in the pitch and scale-violated melody tasks. In the environmental sound test, the SD group was also impaired both in naming and in associating sounds with pictures. The AD and bv-FTD groups, compared with the controls, showed no differences in any test. VBM with task-score correlation showed that atrophy in the right supra-marginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody or environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits.

  11. Drama to promote non-verbal communication skills.

    Science.gov (United States)

    Kelly, Martina; Nixon, Lara; Broadfoot, Kirsten; Hofmeister, Marianna; Dornan, Tim

    2018-05-23

    Non-verbal communication skills (NVCS) help physicians to deliver relationship-centred care, and the effective use of NVCS is associated with improved patient satisfaction, better use of health services and high-quality clinical care. In contrast to verbal communication skills, NVCS training is underdeveloped in communication curricula for the health care professions. One of the challenges in teaching NVCS is their tacit nature. In this study, we evaluated drama exercises to raise awareness of NVCS by making familiar activities 'strange'. Workshops based on drama exercises were designed to heighten an awareness of sight, hearing, touch and proxemics in non-verbal communication. These were conducted at eight medical education conferences, held between 2014 and 2016, and were open to all conference participants. Workshops were evaluated by recording narrative data generated during the workshops and an open-ended questionnaire following the workshop. Data were analysed qualitatively, using thematic analysis. One hundred and twelve participants attended workshops, 73 (65%) of whom completed an evaluation form: 56 physicians, nine medical students and eight non-physician faculty staff. Two themes were described: an increased awareness of NVCS and the importance of NVCS in relationship building. Drama exercises enabled participants to experience NVCS, such as sight, sound, proxemics and touch, in novel ways. Participants reflected on how NVCS contribute to developing trust and building relationships in clinical practice. Drama-based exercises elucidate the tacit nature of NVCS and require further evaluation in formal educational settings. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  12. Non-verbal Persuasion and Communication in an Affective Agent

    DEFF Research Database (Denmark)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining the critical role of non-verbal behaviour during face-to-face communication. In this chapter we restrict the discussion to body language. We also consider embodied virtual agents. As is the case with humans, there are a number of fundamental factors to be considered when constructing persuasive agents...

  13. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  14. Verbal and non-verbal behaviour and patient perception of communication in primary care: an observational study.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Gashi, Shkelzen; Bikker, Annemieke; Mercer, Stewart

    2015-06-01

    Few studies have assessed the importance of a broad range of verbal and non-verbal consultation behaviours. To explore the relationship of observer ratings of behaviours of videotaped consultations with patients' perceptions. Observational study in general practices close to Southampton, Southern England. Verbal and non-verbal behaviour was rated by independent observers blind to outcome. Patients completed the Medical Interview Satisfaction Scale (MISS; primary outcome) and questionnaires addressing other communication domains. In total, 275/360 consultations from 25 GPs had useable videotapes. Higher MISS scores were associated with slight forward lean (a 0.02 increase for each degree of lean, 95% confidence interval [CI] = 0.002 to 0.03), the number of gestures (0.08, 95% CI = 0.01 to 0.15), 'back-channelling' (for example, saying 'mmm') (0.11, 95% CI = 0.02 to 0.2), and social talk (0.29, 95% CI = 0.4 to 0.54). Starting the consultation with professional coolness ('aloof') was helpful and optimism unhelpful. Finishing with non-verbal 'cut-offs' (for example, looking away), being professionally cool ('aloof'), or patronising ('infantilising') resulted in poorer ratings. Physical contact was also important, but not traditional verbal communication. These exploratory results require confirmation, but suggest that patients may be responding to several non-verbal behaviours and non-specific verbal behaviours, such as social talk and back-channelling, more than traditional verbal behaviours. A changing consultation dynamic may also help, from professional 'coolness' at the beginning of the consultation to becoming warmer and avoiding non-verbal cut-offs at the end. © British Journal of General Practice 2015.

  15. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    Science.gov (United States)

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs), along with behavioral language testing, were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices in LLI children after training.

  16. Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis.

    Science.gov (United States)

    Thompson, Ashley E; Voyer, Daniel

    2014-01-01

    The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d=0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
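
    As a rough illustration of how a pooled effect such as d = 0.19 is obtained, the Python sketch below implements a basic DerSimonian-Laird random-effects pool. This is a simpler model than the multilevel meta-analysis the authors actually fitted, and the per-study effect sizes and variances are invented for the example.

      import numpy as np

      def dersimonian_laird(d, v):
          # Random-effects pooled effect size with a 95% confidence interval.
          d, v = np.asarray(d, float), np.asarray(v, float)
          w = 1.0 / v                                  # fixed-effect weights
          d_fixed = np.sum(w * d) / np.sum(w)
          q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(d) - 1)) / c)      # between-study variance
          w_star = 1.0 / (v + tau2)                    # random-effects weights
          pooled = np.sum(w_star * d) / np.sum(w_star)
          se = np.sqrt(1.0 / np.sum(w_star))
          return pooled, pooled - 1.96 * se, pooled + 1.96 * se

      # Invented per-study standardized mean differences and sampling variances.
      d = [0.25, 0.10, 0.32, 0.05, 0.21]
      v = [0.020, 0.015, 0.030, 0.010, 0.025]
      print(dersimonian_laird(d, v))  # pooled d with 95% CI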

  17. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Science.gov (United States)

    Fargier, Raphaël; Laganaro, Marina

    2016-01-01

    Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  18. On the embedded cognition of non-verbal narratives

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio; Baceviciute, Sarune

    2014-01-01

    Acknowledging that narratives are an important resource in human communication and cognition, the focus of this article is on the cognitive aspects of involvement with visual and auditory non-verbal narratives, particularly in relation to the newest immersive media and digital interactive representational technologies. We consider three relevant trends in narrative studies that have emerged in the 60 years of cognitive and digital revolution. The issue at hand could have implications for developmental psychology, pedagogics, cognitive science, cognitive psychology, ethology and evolutionary studies of language. In particular, it is of great importance for narratology in relation to interactive media and new representational technologies. Therefore we outline a research agenda for a bio-cognitive semiotic interdisciplinary investigation of how people understand, react to, and interact with narratives...

  1. The role of interaction of verbal and non-verbal means of communication in different types of discourse

    OpenAIRE

    Orlova M. A.

    2010-01-01

    Communication relies on verbal and non-verbal interaction. To be most effective, group members need to improve verbal and non-verbal communication. Non-verbal communication fulfills functions within groups that are sometimes difficult to communicate verbally. But interpreting non-verbal messages requires a great deal of skill because multiple meanings abound in these messages.

  2. The impact of the teachers' non-verbal communication on success in teaching

    OpenAIRE

    Bambaeeroo, Fatemeh; Shokrpour, Nasrin

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and ...

  3. Non-verbal communication of compassion: measuring psychophysiologic effects.

    Science.gov (United States)

    Kemper, Kathi J; Shaltout, Hossam A

    2011-12-20

    Calm, compassionate clinicians comfort others. To evaluate the direct psychophysiologic benefits of non-verbal communication of compassion (NVCC), it is important to minimize the effect of subjects' expectation. This preliminary study was designed to a) test the feasibility of two strategies for maintaining subject blinding to non-verbal communication of compassion (NVCC), and b) determine whether blinded subjects would experience psychophysiologic effects from NVCC. Subjects were healthy volunteers who were told the study was evaluating the effect of time and touch on the autonomic nervous system. The practitioner had more than 10 years' experience with loving-kindness meditation (LKM), a form of NVCC. Subjects completed 10-point visual analog scales (VAS) for stress, relaxation, and peacefulness before and after LKM. To assess physiologic effects, practitioners and subjects wore cardiorespiratory monitors to assess respiratory rate (RR), heart rate (HR) and heart rate variability (HRV) throughout the four 10-minute study periods: Baseline (both practitioner and subjects read neutral material); non-tactile LKM (subjects read while the practitioner practiced LKM while pretending to read); tactile LKM (subjects rested while the practitioner practiced LKM while lightly touching the subject on arms, shoulders, hands, feet, and legs); Post-Intervention Rest (subjects rested; the practitioner read). To assess blinding, subjects were asked after the interventions what the practitioner was doing during each period (reading, touch, or something else). Subjects' mean age was 43.6 years; all were women. Blinding was maintained, and the practitioner was able to maintain meditation during both the tactile and non-tactile LKM interventions, as reflected in significantly reduced RR. Despite blinding, subjects' VAS scores improved from baseline to post-intervention for stress (5.5 vs. 2.2), relaxation (3.8 vs. 8.8) and peacefulness (3.8 vs. 9.0) for both tactile and non-tactile LKM. It is possible to test the…

  4. A qualitative study on non-verbal sensitivity in nursing students.

    Science.gov (United States)

    Chan, Zenobia C Y

    2013-07-01

    To explore nursing students' perception of the meanings and roles of non-verbal communication and sensitivity. It also attempts to understand how different factors influence their non-verbal communication style. The importance of non-verbal communication in the health arena lies in the need for good communication for efficient healthcare delivery. Understanding nursing students' non-verbal communication with patients and the influential factors is essential to prepare them for field work in the future. Qualitative approach based on 16 in-depth interviews. Sixteen nursing students from the Master of Nursing and the Year 3 Bachelor of Nursing program were interviewed. Major points in the recorded interviews were marked down for content analysis. Three main themes were developed: (1) understanding students' non-verbal communication, which shows how nursing students value and experience non-verbal communication in the nursing context; (2) factors that influence the expression of non-verbal cues, which reveals the effect of patients' demographic background (gender, age, social status and educational level) and participants' characteristics (character, age, voice and appearance); and (3) metaphors of non-verbal communication, which is further divided into four subthemes: providing assistance, individualisation, dropping hints and promoting interaction. Learning about students' non-verbal communication experiences in the clinical setting allowed us to understand their use of non-verbal communication and sensitivity, as well as to understand areas that may need further improvement. The experiences and perceptions revealed by the nursing students could provoke nurses to reconsider the effects of the different factors suggested in this study. The results might also help students and nurses to learn and ponder their missing gap, leading them to rethink, train and pay more attention to their non-verbal communication style and sensitivity. © 2013 John Wiley & Sons Ltd.

  5. Effects of proactive interference on non-verbal working memory.

    Science.gov (United States)

    Cyr, Marilyn; Nee, Derek E; Nelson, Eric; Senger, Thea; Jonides, John; Malapani, Chara

    2017-02-01

    Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using objects with both high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., low-nameability condition). We also found that the addition of distinctive features (color, verbal label) increased performance in rejecting intrusion probes, most likely through an increase in discriminability between content-context bindings in WM.

  6. Culture and Social Relationship as Factors of Affecting Communicative Non-Verbal Behaviors

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to build a bridge between social relationship and cultural variation to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of an agent's non-verbal behaviors in human-agent interaction.

  7. Oncologists’ non-verbal behavior and analog patients’ recall of information

    NARCIS (Netherlands)

    Hillen, M.A.; de Haes, H.C.J.M.; van Tienhoven, G.; van Laarhoven, H.W.M.; van Weert, J.C.M.; Vermeulen, D.M.; Smets, E.M.A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist’s non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture and smiling, on patients’ recall of information and perceived friendliness of the oncologist.

  8. Oncologists' non-verbal behavior and analog patients' recall of information

    NARCIS (Netherlands)

    Hillen, Marij A.; de Haes, Hanneke C. J. M.; van Tienhoven, Geertjan; van Laarhoven, Hanneke W. M.; van Weert, Julia C. M.; Vermeulen, Daniëlle M.; Smets, Ellen M. A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture and smiling, on patients' recall of information and perceived friendliness of the oncologist.

  9. Virtual Chironomia: A Multimodal Study of Verbal and Non-Verbal Communication in a Virtual World

    Science.gov (United States)

    Verhulsdonck, Gustav

    2010-01-01

    This mixed methods study examined the various aspects of multimodal use of non-verbal communication in virtual worlds during dyadic negotiations. Quantitative analysis uncovered a treatment effect whereby people with more rhetorical certainty used more neutral non-verbal communication, whereas people who were rhetorically less certain used more…

  10. Cross-cultural features of gestures in non-verbal communication

    Directory of Open Access Journals (Sweden)

    Chebotariova N. A.

    2017-09-01

    Full Text Available This article is devoted to the analysis of the concept of non-verbal communication and the ways of expressing it. Gesticulation is studied in detail, as it is the main element of non-verbal communication and has different characteristics in various countries of the world.

  11. Non-verbal communication in meetings of psychiatrists and patients with schizophrenia.

    Science.gov (United States)

    Lavelle, M; Dimic, S; Wildgrube, C; McCabe, R; Priebe, S

    2015-03-01

    Recent evidence found that patients with schizophrenia display non-verbal behaviour designed to avoid social engagement during the opening moments of their meetings with psychiatrists. This study aimed to replicate, and build on, this finding, assessing the non-verbal behaviour of patients and psychiatrists during meetings, exploring changes over time and its association with patients' symptoms and the quality of the therapeutic relationship. Forty videotaped routine out-patient consultations involving patients with schizophrenia were analysed. Non-verbal behaviour of patients and psychiatrists was assessed during three fixed, 2-min intervals using a modified Ethological Coding System for Interviews. Symptoms, satisfaction with communication and the quality of the therapeutic relationship were also measured. Over time, patients' non-verbal behaviour remained stable, whilst psychiatrists' flight behaviour decreased. Patients formed two groups based on their non-verbal profiles, one group (n = 25) displaying pro-social behaviour, inviting interaction, and a second (n = 15) displaying flight behaviour, avoiding interaction. Psychiatrists interacting with pro-social patients displayed more pro-social behaviours, and their patients were more satisfied with communication. Patients' non-verbal behaviour during routine psychiatric consultations remains unchanged, and is linked to both their psychiatrist's non-verbal behaviour and the quality of the therapeutic relationship. © 2014 The Authors. Acta Psychiatrica Scandinavica Published by John Wiley & Sons Ltd.

  12. [Non-verbal communication of patients submitted to heart surgery: from awaking after anesthesia to extubation].

    Science.gov (United States)

    Werlang, Sueli da Cruz; Azzolin, Karina; Moraes, Maria Antonieta; de Souza, Emiliane Nogueira

    2008-12-01

    Preoperative orientation is an essential tool for patient communication after surgery. This study had the objective of evaluating the non-verbal communication of patients submitted to cardiac surgery from awakening from anesthesia until extubation, after having received preoperative orientation from nurses. A quantitative cross-sectional study was developed in a reference hospital of the state of Rio Grande do Sul, Brazil, from March to July 2006. Data were collected in the pre- and postoperative periods. A questionnaire to evaluate non-verbal communication upon awakening from sedation was applied to a sample of 100 patients. Statistical analysis included Student's t, Wilcoxon, and Mann-Whitney tests. Most of the patients responded satisfactorily to the non-verbal communication strategies as instructed in the preoperative orientation. Thus, non-verbal communication based on preoperative orientation was helpful during the awakening period.

  13. Parents' and Physiotherapists' Recognition of Non-Verbal Communication of Pain in Individuals with Cerebral Palsy.

    Science.gov (United States)

    Riquelme, Inmaculada; Pades Jiménez, Antonia; Montoya, Pedro

    2017-08-29

    Pain assessment is difficult in individuals with cerebral palsy (CP). This is of particular relevance in children with communication difficulties, for whom non-verbal pain behaviors could be essential for appropriate pain recognition. Parents are considered good proxies in the recognition of pain in their children; however, health professionals also need a good understanding of their patients' pain experience. This study aims at analyzing the agreement between parents' and physiotherapists' assessments of verbal and non-verbal pain behaviors in individuals with CP. A written survey about the pain characteristics and non-verbal pain expression of 96 persons with CP (45 classified as communicative and 51 as non-communicative individuals) was performed. Parents and physiotherapists displayed high agreement in their estimations of the presence of chronic pain, healthcare seeking, pain intensity and pain interference, as well as in non-verbal pain behaviors. Physiotherapists and parents can recognize pain behaviors in individuals with CP regardless of communication disabilities.

  14. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment.

    Science.gov (United States)

    de Sousa Paiva, Simone; Galvão, Marli Teresinha Gimeniz; Pagliuca, Lorita Marlena Freitag; de Almeida, Paulo César

    2010-01-01

    Non-verbal communication is predominant in the mother-child relation. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication such as: paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted the inference that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection can be a determining factor in the formation of the mother's strong attachment to her child after birth.

  15. Dissociation of neural correlates of verbal and non-verbal visual working memory with different delays

    Directory of Open Access Journals (Sweden)

    Endestad Tor

    2007-10-01

    Full Text Available Abstract Background Dorsolateral prefrontal cortex (DLPFC), posterior parietal cortex, and regions in the occipital cortex have been identified as neural sites for visual working memory (WM). The exact involvement of the DLPFC in verbal and non-verbal working memory processes, and how these processes depend on the time-span for retention, remains disputed. Methods We used functional MRI to explore the neural correlates of the delayed discrimination of Gabor stimuli differing in orientation. Twelve subjects were instructed to code the relative orientation either verbally or non-verbally, with memory delays of short (2 s) or long (8 s) duration. Results Blood-oxygen level dependent (BOLD) 3-Tesla fMRI revealed significantly more activity for the short verbal condition compared to the short non-verbal condition in bilateral superior temporal gyrus, insula and supramarginal gyrus. Activity in the long verbal condition was greater than in the long non-verbal condition in left language-associated areas (STG) and bilateral posterior parietal areas, including precuneus. Interestingly, right DLPFC and bilateral superior frontal gyrus were more active in the long non-verbal condition than in the long verbal condition. Conclusion The results point to a dissociation between the cortical sites involved in verbal and non-verbal WM for long and short delays. Right DLPFC seems to be engaged in non-verbal WM tasks, especially for long delays. Furthermore, the results indicate that even slightly different memory maintenance intervals engage largely differing networks, and that this novel finding may explain differing results in previous verbal/non-verbal WM studies.

  16. The impact of culture and education on non-verbal neuropsychological measurements: a critical review.

    Science.gov (United States)

    Rosselli, Mónica; Ardila, Alfredo

    2003-08-01

    Clinical neuropsychology has frequently considered visuospatial and non-verbal tests to be culturally and educationally fair or at least fairer than verbal tests. This paper reviews the cross-cultural differences in performance on visuoperceptual and visuoconstructional ability tasks and analyzes the impact of education and culture on non-verbal neuropsychological measurements. This paper compares: (1) non-verbal test performance among groups with different educational levels, and the same cultural background (inter-education intra-culture comparison); (2) the test performance among groups with the same educational level and different cultural backgrounds (intra-education inter-culture comparisons). Several studies have demonstrated a strong association between educational level and performance on common non-verbal neuropsychological tests. When neuropsychological test performance in different cultural groups is compared, significant differences are evident. Performance on non-verbal tests such as copying figures, drawing maps or listening to tones can be significantly influenced by the individual's culture. Arguments against the use of some current neuropsychological non-verbal instruments, procedures, and norms in the assessment of diverse educational and cultural groups are discussed and possible solutions to this problem are presented.

  17. The role of non-verbal behaviour in racial disparities in health care: implications and solutions.

    Science.gov (United States)

    Levine, Cynthia S; Ambady, Nalini

    2013-09-01

    People from racial minority backgrounds report less trust in their doctors and have poorer health outcomes. Although these deficiencies have multiple roots, one important set of explanations involves racial bias on the part of providers, which may be non-conscious, and minority patients' fears that they will be treated in a biased way. Here, we focus on one mechanism by which this bias may be communicated and reinforced: namely, non-verbal behaviour in the doctor-patient interaction. We review two lines of research on race and non-verbal behaviour: (i) the ways in which a patient's race can influence a doctor's non-verbal behaviour toward the patient, and (ii) the relative difficulty that doctors can have in accurately understanding the non-verbal communication of non-White patients. Further, we review research on the implications that both lines of work can have for the doctor-patient relationship and the patient's health. The research we review suggests that White doctors interacting with minority group patients are likely to behave and respond in ways that are associated with worse health outcomes. As doctors' disengaged non-verbal behaviour towards minority group patients and lower ability to read minority group patients' non-verbal behaviours may contribute to racial disparities in patients' satisfaction and health outcomes, solutions that target non-verbal behaviour may be effective. A number of strategies for such targeting are discussed. © 2013 John Wiley & Sons Ltd.

  18. Evaluating verbal and non-verbal communication skills, in an ethnogeriatric OSCE.

    Science.gov (United States)

    Collins, Lauren G; Schrimmer, Anne; Diamond, James; Burke, Janice

    2011-05-01

    Communication during medical interviews plays a large role in patient adherence, satisfaction with care, and health outcomes. Both verbal and non-verbal communication (NVC) skills are central to the development of rapport between patients and healthcare professionals. The purpose of this study was to assess the role of non-verbal and verbal communication skills in evaluations by standardized patients during an ethnogeriatric Objective Structured Clinical Examination (OSCE). Interviews from 19 medical students, residents, and fellows in an ethnogeriatric OSCE were analyzed. Each interview was videotaped and evaluated on a 14-item verbal and an 8-item non-verbal communication checklist. The relationships between verbal and non-verbal communication skills and interview evaluations by standardized patients were examined using correlational analyses. Maintaining adequate facial expression (FE), using affirmative gestures (AG), and limiting both unpurposive movements (UM) and hand gestures (HG) had a significant positive effect on the perception of interview quality during this OSCE. Non-verbal communication skills played a role in the perception of overall interview quality as well as the perception of culturally competent communication. Incorporating formative and summative evaluation of both verbal and non-verbal communication skills may be a critical component of curricular innovations in ethnogeriatrics, such as the OSCE. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Patients' perceptions of GP non-verbal communication: a qualitative study.

    Science.gov (United States)

    Marcinowicz, Ludmila; Konstantynowicz, Jerzy; Godlewski, Cezary

    2010-02-01

    During doctor-patient interactions, many messages are transmitted without words, through non-verbal communication. To elucidate the types of non-verbal behaviours perceived by patients interacting with family GPs and to determine which cues are perceived most frequently. In-depth interviews with patients of family GPs. Nine family practices in different regions of Poland. At each practice site, interviews were performed with four patients who were scheduled consecutively to see their family doctor. Twenty-four of 36 studied patients spontaneously perceived non-verbal behaviours of the family GP during patient-doctor encounters. They reported a total of 48 non-verbal cues. The most frequent features were tone of voice, eye contact, and facial expressions. Less frequent were examination room characteristics, touch, interpersonal distance, GP clothing, gestures, and posture. Non-verbal communication is an important factor by which patients spontaneously describe and evaluate their interactions with a GP. Family GPs should be trained to better understand and monitor their own non-verbal behaviours towards patients.

  20. Condom use: exploring verbal and non-verbal communication strategies among Latino and African American men and women.

    Science.gov (United States)

    Zukoski, Ann P; Harvey, S Marie; Branch, Meredith

    2009-08-01

    A growing body of literature provides evidence of a link between communication with sexual partners and safer sexual practices, including condom use. More research is needed that explores the dynamics of condom communication including gender differences in initiation, and types of communication strategies. The overall objective of this study was to explore condom use and the dynamics surrounding condom communication in two distinct community-based samples of African American and Latino heterosexual couples at increased risk for HIV. Based on 122 in-depth interviews, 80% of women and 74% of men reported ever using a condom with their primary partner. Of those who reported ever using a condom with their current partner, the majority indicated that condom use was initiated jointly by men and women. In addition, about one-third of the participants reported that the female partner took the lead and let her male partner know she wanted to use a condom. A sixth of the sample reported that men initiated use. Although over half of the respondents used bilateral verbal strategies (reminding, asking and persuading) to initiate condom use, one-fourth used unilateral verbal strategies (commanding and threatening to withhold sex). A smaller number reported using non-verbal strategies involving condoms themselves (e.g. putting a condom on or getting condoms). The results suggest that interventions designed to improve condom use may need to include both members of a sexual dyad and focus on improving verbal and non-verbal communication skills of individuals and couples.

  1. Non-verbal communication in severe aphasia: influence of aphasia, apraxia, or semantic processing?

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2012-09-01

    Patients suffering from severe aphasia have to rely on non-verbal means of communication to convey a message. However, to date it is not clear which patients are able to do so. Clinical experience indicates that some patients use non-verbal communication strategies like gesturing very efficiently, whereas others fail to transmit semantic content by non-verbal means. Concerns have been expressed that limb apraxia would affect the production of communicative gestures. Research investigating if and how apraxia influences the production of communicative gestures has led to contradictory outcomes. The purpose of this study was to investigate the impact of limb apraxia on spontaneous gesturing. Further, linguistic and non-verbal semantic processing abilities were explored as potential factors that might influence non-verbal expression in aphasic patients. Twenty-four aphasic patients with highly limited verbal output were asked to retell short video-clips. The narrations were videotaped. Gestural communication was analyzed in two ways. In the first part of the study, we used a form-based approach. Physiological and kinetic aspects of hand movements were transcribed with a notation system for sign languages. We determined the formal diversity of the hand gestures as an indicator of the potential richness of the transmitted information. In the second part of the study, the comprehensibility of the patients' gestural communication was evaluated by naive raters. The raters were familiarized with the model video-clips and shown the recordings of the patients' retelling without sound. They were asked to indicate, for each narration, which story was being told and which aspects of the stories they recognized. The results indicate that non-verbal faculties are the most important prerequisites for the production of hand gestures. Whereas results on standardized aphasia testing did not correlate with any gestural indices, non-verbal semantic processing abilities predicted the formal diversity of the patients' hand gestures.

  2. Non-Verbal Communication Models in Sports and Ballet (Modelo de comunicación no verbal en deporte y ballet)

    Directory of Open Access Journals (Sweden)

    Gloria Vallejo

    2010-12-01

    Full Text Available This study analyzes the communication model generated among professional soccer trainers, artistic gymnastics trainers, and folkloric ballet instructors, taking as a reference the dynamic body language typical of the specialized communication of sportspeople and dancers, in which non-verbal language is prominent. Non-verbal language was observed in both psychomotor and sociomotor practices in order to identify and characterize relations between different concepts and their corresponding gestural representation. This made it possible to generate a communication model that takes into account the non-verbal aspects of specialized communicative contexts. The results indicate that the non-verbal language of trainers and instructors occasionally takes the place of verbal language when the latter proves insufficient or inappropriate for describing a highly precise motor action, owing to distance or acoustic interference. Among the ballet instructors, a generalized way of directing rehearsals was found, using rhythmic counts clapped with the palms or beaten with the feet. The paralinguistic components of the various speech acts also stand out, especially with regard to intonation, duration and intensity.

  3. The impact of the teachers’ non-verbal communication on success in teaching

    Directory of Open Access Journals (Sweden)

    FATEMEH BAMBAEEROO

    2017-04-01

    Full Text Available Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers' non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and the teachers' use of non-verbal communication and its impact on success in teaching. Methods: In keeping with the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed a strong relationship among the quality, amount and method of teachers' use of non-verbal communication while teaching. Based on the findings of the studies reviewed, the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Under non-verbal communication, some other patterns were used; for example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students' learning and academic success. The teachers' attention to the students' non-verbal reactions, and arranging the syllabus considering the students' mood and readiness, have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay more attention to non-verbal than verbal messages.

  4. The impact of the teachers' non-verbal communication on success in teaching.

    Science.gov (United States)

    Bambaeeroo, Fatemeh; Shokrpour, Nasrin

    2017-04-01

    Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers' non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and the teachers' use of non-verbal communication and its impact on success in teaching. In keeping with the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. The results of this review revealed a strong relationship among the quality, amount and method of teachers' use of non-verbal communication while teaching. Based on the findings of the studies reviewed, the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Under non-verbal communication, some other patterns were used; for example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students' learning and academic success. The teachers' attention to the students' non-verbal reactions, and arranging the syllabus considering the students' mood and readiness, have been emphasized in the studies reviewed. It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay more attention to non-verbal than verbal messages.

  5. The impact of the teachers’ non-verbal communication on success in teaching

    Science.gov (United States)

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers' non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and the teachers' use of non-verbal communication and its impact on success in teaching. Methods: In keeping with the research method, i.e. a review article, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed a strong relationship among the quality, amount and method of teachers' use of non-verbal communication while teaching. Based on the findings of the studies reviewed, the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Under non-verbal communication, some other patterns were used; for example, emotive, team work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students' learning and academic success. The teachers' attention to the students' non-verbal reactions, and arranging the syllabus considering the students' mood and readiness, have been emphasized in the studies reviewed. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that we push him toward the non-verbal message and ask him to pay more attention to non-verbal than verbal messages.

  6. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    in a manner that allowed the subjective audiovisual evaluation of loudspeakers under controlled conditions. Additionally, unimodal audio and visual evaluations were used as a baseline for comparison. The same procedure was applied in the investigation of the validity of less than optimal stimuli presentations...

  7. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in the areas of medical, dental, nursing and allied health, and veterinary medicine, as well as undergraduate and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  8. Negative Symptoms and Avoidance of Social Interaction: A Study of Non-Verbal Behaviour.

    Science.gov (United States)

    Worswick, Elizabeth; Dimic, Sara; Wildgrube, Christiane; Priebe, Stefan

    2018-01-01

    Non-verbal behaviour is fundamental to social interaction. Patients with schizophrenia display an expressivity deficit of non-verbal behaviour, exhibiting behaviour that differs from both healthy subjects and patients with different psychiatric diagnoses. The present study aimed to explore the association between non-verbal behaviour and symptom domains, overcoming methodological shortcomings of previous studies. Standardised interviews with 63 outpatients diagnosed with schizophrenia were videotaped. Symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS), the Positive and Negative Syndrome Scale (PANSS) and the Calgary Depression Scale. Independent raters later analysed the videos for non-verbal behaviour, using a modified version of the Ethological Coding System for Interviews (ECSI). Patients with a higher level of negative symptoms displayed significantly fewer prosocial (e.g., nodding and smiling), gesture, and displacement behaviours (e.g., fumbling), but significantly more flight behaviours (e.g., looking away, freezing). No gender differences were found, and these associations held true when adjusted for antipsychotic medication dosage. Negative symptoms are associated with both a lower level of actively engaging non-verbal behaviour and an increased active avoidance of social contact. Future research should aim to identify the mechanisms behind flight behaviour, with implications for the development of treatments to improve social functioning. © 2017 S. Karger AG, Basel.

  9. Parts of Speech in Non-typical Function: (A)symmetrical Encoding of Non-verbal Predicates in Erzya

    Directory of Open Access Journals (Sweden)

    Rigina Turunen

    2011-01-01

    Full Text Available Erzya non-verbal conjugation refers to symmetric paradigms in which non-verbal predicates behave morphosyntactically in a similar way to verbal predicates. Notably, though, non-verbal conjugational paradigms are asymmetric, which is seen as an outcome of paradigmatic neutralisation in less frequent/less typical contexts. For non-verbal predicates it is not obligatory to display the same amount of behavioural potential as it is for verbal predicates, and the lexical class of non-verbal predicate operates in such a way that adjectival predicates are more likely to be conjugated than nominals. Further, besides symmetric paradigms and constructions, in Erzya there are non-verbal predicate constructions which display a more overt structural encoding than do verbal ones, namely, copula constructions. Complexity in the domain of non-verbal predication in Erzya decreases the symmetry of the paradigms. Complexity increases in asymmetric constructions, as well as in paradigmatic neutralisation when non-verbal predicates cannot be inflected in all the tenses and moods occurring in verbal predication. The results would be the reverse if we were to measure complexity in terms of the morphological structure. The asymmetric features in non-verbal predication are motivated language-externally, because non-verbal predicates refer to states and occur less frequently as predicates than verbal categories. The symmetry of the paradigms and constructions is motivated language-internally: a grammatical system with fewer rules is economical.

  10. Non-verbal Communication in a Neonatal Intensive Care Unit: A Video Audit Using Non-verbal Immediacy Scale (NIS-O).

    Science.gov (United States)

    Nimbalkar, Somashekhar Marutirao; Raval, Himalaya; Bansal, Satvik Chaitanya; Pandya, Utkarsh; Pathak, Ajay

    2018-05-03

    Effective communication with parents is a very important skill for pediatricians, especially in a neonatal setup. The authors analyzed the non-verbal communication of medical caregivers during counseling sessions. Recorded videos of counseling sessions from March-April 2016 were audited. Counseling episodes were scored using the Non-verbal Immediacy Scale Observer Report (NIS-O). A total of 150 videos of counseling sessions were audited. The mean (SD) total score on the NIS-O was 78.96 (7.07). Female-counseled sessions had a significantly higher proportion of low scores, pointing to a need to strengthen non-verbal communication skills in the neonatal unit. This study lays down a template on which other neonatal intensive care units (NICUs) can carry out gap-defining audits.

  11. Phenomenology of non-verbal communication as a representation of sports activities

    Directory of Open Access Journals (Sweden)

    Liubov Karpets

    2018-04-01

    Full Text Available In the professional language of sports, priority falls to such non-verbal communication as body language. Purpose: to identify the main aspects of non-verbal communication as a representation of sports activities. Material & Methods: members of sports teams and individual athletes participated in the study, representing sports including basketball, handball, volleyball, football, hockey, and bodybuilding. Results: the research revealed that in sports activities such non-verbal communication as gestures, facial expressions, physique, etc. overlap, and, as a consequence, the position "everything is language" (Lyotard) is embodied. Conclusions: non-verbal communication is one of the most significant forms of communication in sports. Additional means of communication through the "language" of the body help athletes towards self-realization and self-determination.

  12. The Effects of Verbal and Non-Verbal Features on the Reception of DRTV Commercials

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2016-12-01

    Full Text Available Analyses of consumer response are important for successful advertising as they help advertisers to find new, original and successful ways of persuasion. Successful advertisements have to boost the product's benefits, but they also have to appeal to consumers' emotions. In TV advertisements, this is done by means of verbal and non-verbal strategies. The paper presents the results of an empirical investigation whose purpose was to examine the viewers' emotional responses to a DRTV commercial induced by different verbal and non-verbal features, the amount of credibility and persuasiveness of the commercial, and its general acceptability. Our findings indicate that (1) an overload of the same verbal and non-verbal information decreases persuasion; and (2) highly marked prosodic delivery is either exaggerated or funny, while the speaker is perceived as annoying.

  13. Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors

    Science.gov (United States)

    Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias

    The goal of this paper is to build a bridge between social relationship and cultural variation to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of an agent's non-verbal behaviors in human-agent interaction (HAI). As the first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social factors with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.
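
    A rough sketch of what such a parameter-based socio-cultural model can look like in code is given below; the culture labels, behavior parameters, and numeric weights are illustrative assumptions for exposition, not values derived from the paper's corpus analysis.

        # Illustrative parameter-based socio-cultural model: cultural background
        # and social relationship jointly determine the expressive parameters
        # that would drive an agent's non-verbal behavior. All labels and
        # numbers are hypothetical placeholders.
        CULTURE_BASELINES = {
            "culture_A": {"posture_openness": 0.6, "gesture_rate": 0.7},
            "culture_B": {"posture_openness": 0.4, "gesture_rate": 0.5},
        }
        RELATIONSHIP_OFFSETS = {
            "friend":   {"posture_openness": +0.2, "gesture_rate": +0.1},
            "stranger": {"posture_openness": -0.1, "gesture_rate": -0.2},
        }

        def predict_parameters(culture: str, relationship: str) -> dict:
            """Combine a cultural baseline with a relationship offset and
            clamp each expressive parameter to the range [0, 1]."""
            base = CULTURE_BASELINES[culture]
            offset = RELATIONSHIP_OFFSETS[relationship]
            return {k: max(0.0, min(1.0, base[k] + offset[k])) for k in base}

        print(predict_parameters("culture_A", "stranger"))

    An agent's animation layer could then map a parameter such as posture_openness onto concrete postures; calibrating such baselines and offsets is the kind of task the comparative corpus analysis serves.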

  14. Persistent non-verbal memory impairment in remitted major depression - caused by encoding deficits?

    Science.gov (United States)

    Behnken, Andreas; Schöning, Sonja; Gerss, Joachim; Konrad, Carsten; de Jong-Meyer, Renate; Zwanzger, Peter; Arolt, Volker

    2010-04-01

    While neuropsychological impairments are well described in acute phases of major depressive disorder (MDD), little is known about the neuropsychological profile in remission. There is evidence for episodic memory impairments in both acutely depressed and remitted patients with MDD. Learning and memory depend on individuals' ability to organize information during learning. This study investigates non-verbal memory functions in remitted MDD and whether non-verbal memory performance is mediated by organizational strategies whilst learning. 30 well-characterized, fully remitted individuals with unipolar MDD and 30 healthy controls matched in age, sex and education were investigated. Non-verbal learning and memory were measured by the Rey-Osterrieth Complex Figure Test (RCFT). The RCFT provides measures of planning, organizational skills, perceptual and non-verbal memory functions. For assessing the mediating effects of organizational strategies, we used the Savage Organizational Score. Compared to healthy controls, participants with remitted MDD showed greater deficits in non-verbal memory function. Moreover, participants with remitted MDD demonstrated difficulties in organizing non-verbal information appropriately during learning. In contrast, no impairments regarding visual-spatial functions in remitted MDD were observed. Except for one patient, all the others were taking psychopharmacological medication. Neuropsychological function was investigated solely in the remitted phase of MDD. Individuals with MDD in remission showed persistent non-verbal memory impairments, modulated by a deficient use of organizational strategies during encoding. Therefore, our results strongly argue for additional therapeutic interventions in order to improve these remaining deficits in cognitive function. Copyright 2009 Elsevier B.V. All rights reserved.

  15. Executive functioning and non-verbal intelligence as predictors of bullying in early elementary school

    NARCIS (Netherlands)

    Verlinden, Marina; Veenstra, René; Ghassabian, Akhgar; Jansen, P.W.; Hofman, Albert; Jaddoe, Vincent W. V.; Verhulst, F.C.; Tiemeier, Henning

    Executive function and intelligence are negatively associated with aggression, yet the role of executive function has rarely been examined in the context of school bullying. We studied whether different domains of executive function and non-verbal intelligence are associated with bullying.

  16. Toward a digitally mediated, transgenerational negotiation of verbal and non-verbal concepts in daycare

    DEFF Research Database (Denmark)

    Chimirri, Niklas Alexander

    Whether an adult researcher’s research problem and her/his conceptual knowledge of the child-adult-digital media interaction are able to do justice to what the children actually intend to communicate about their experiences and actions, both verbally and non-verbally, by and large remains little explored...

  17. “Communication by impact” and other forms of non-verbal ...

    African Journals Online (AJOL)

    This article aims to review the importance, place and especially the emotional impact of non-verbal communication in psychiatry. The paper argues that while biological psychiatry is in the ascendency with increasing discoveries being made about the functioning of the brain and psycho-pharmacology, it is important to try ...

  18. Development of non-verbal intellectual capacity in school-age children with cerebral palsy

    NARCIS (Netherlands)

    Smits, D. W.; Ketelaar, M.; Gorter, J. W.; van Schie, P. E.; Becher, J. G.; Lindeman, E.; Jongmans, M. J.

    Background Children with cerebral palsy (CP) are at greater risk of limited intellectual development than typically developing children. Little information is available about which children with CP are most at risk. This study aimed to describe the development of non-verbal intellectual capacity of school-age children with CP.

  19. Presentation Trainer: a toolkit for learning non-verbal public speaking skills

    NARCIS (Netherlands)

    Schneider, Jan; Börner, Dirk; Van Rosmalen, Peter; Specht, Marcus

    2014-01-01

    The paper presents and outlines the demonstration of Presentation Trainer, a prototype that works as a public speaking instructor. It tracks and analyses the body posture, movements and voice of the user in order to give instructional feedback on non-verbal communication skills.

  20. Non-verbal communication between primary care physicians and older patients: how does race matter?

    Science.gov (United States)

    Stepanikova, Irena; Zhang, Qian; Wieland, Darryl; Eleazer, G Paul; Stewart, Thomas

    2012-05-01

    Non-verbal communication is an important aspect of the diagnostic and therapeutic process, especially with older patients. It is unknown how non-verbal communication varies with physician and patient race. To examine the joint influence of physician race and patient race on non-verbal communication displayed by primary care physicians during medical interviews with patients 65 years or older. Video-recordings of visits of 209 patients 65 years old or older to 30 primary care physicians at three clinics located in the Midwest and Southwest. Duration of physicians' open body position, eye contact, smile, and non-task touch, coded using an adaptation of the Nonverbal Communication in Doctor-Elderly Patient Transactions form. African American physicians with African American patients used more open body position, smile, and touch, compared to the average across other dyads (adjusted mean difference for open body position = 16.55). Race thus shapes non-verbal communication with older patients. Its influence is best understood when physician race and patient race are considered jointly.

  1. Interactive use of communication by verbal and non-verbal autistic children.

    Science.gov (United States)

    Amato, Cibelle Albuquerque de la Higuera; Fernandes, Fernanda Dreux Miranda

    2010-01-01

    Communication of autistic children. To assess the communication functionality of verbal and non-verbal children of the autistic spectrum and to identify possible associations amongst the groups. Subjects were 20 children of the autistic spectrum divided into two groups: V, with 10 verbal children, and NV, with 10 non-verbal children, with ages varying between 2y10m and 10y6m. All subjects were video recorded during 30 minutes of spontaneous interaction with their mothers. The samples were analyzed according to the functional communicative profile, and comparisons within and between groups were conducted. Data referring to the occupation of communicative space suggest that there is an even balance between each child and his mother. The number of communicative acts per minute shows a clear difference between verbal and non-verbal children. Both verbal and non-verbal children mostly use gestural communicative means in their interactions. Data about the use of interpersonal communicative functions point to the autistic children's great interactive impairment. The characterization of the functional communicative profile proposed in this study confirmed the autistic children's difficulties with interpersonal communication and showed that these difficulties do not depend on the preferred communicative means.

  2. An executable model of the interaction between verbal and non-verbal communication.

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations.

  3. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    Science.gov (United States)

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  4. Non-Verbal Communication Training: An Avenue for University Professionalizing Programs?

    Science.gov (United States)

    Gazaille, Mariane

    2011-01-01

    In accordance with today's workplace expectations, many university programs identify the ability to communicate as a crucial asset for future professionals. Yet, if the teaching of verbal communication is clearly identifiable in most university programs, the same cannot be said of non-verbal communication (NVC). Knowing the importance of the…

  5. An Executable Model of the Interaction between Verbal and Non-Verbal Communication

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.; Dignum, F.; Greaves, M.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations.

  6. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviews the subjects following one of two scripted scenarios: in one scenario the actor shows minimal engagement with the subject; the second includes active listening by the doctor and attentiveness to the subject. We analyze the cross-correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
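
    The core synchrony computation described here lends itself to a short sketch. The fragment below is a minimal illustration, assuming frame-wise motion magnitudes per person as input; the function names and toy data are assumptions for exposition, not the authors' published pipeline. It cross-correlates the total kinetic energy of the two people at a range of frame lags, the quantity from which synchrony and followership indices can be read off.

        import numpy as np

        def kinetic_energy_series(motion_magnitudes):
            # Total kinetic energy per frame, taken as proportional to the
            # sum of squared motion magnitudes for one person.
            return (motion_magnitudes ** 2).sum(axis=1)

        def lagged_cross_correlation(e1, e2, max_lag):
            # Pearson correlation corr(e1[t], e2[t + lag]) for lags
            # -max_lag..max_lag; a peak at a positive lag suggests that
            # person 2's motion follows person 1's (person 1 leads).
            lags = np.arange(-max_lag, max_lag + 1)
            corrs = []
            for lag in lags:
                if lag > 0:
                    c = np.corrcoef(e1[:-lag], e2[lag:])[0, 1]
                elif lag < 0:
                    c = np.corrcoef(e1[-lag:], e2[:lag])[0, 1]
                else:
                    c = np.corrcoef(e1, e2)[0, 1]
                corrs.append(c)
            return lags, np.array(corrs)

        # Toy usage: person 2 echoes person 1's motion five frames later.
        rng = np.random.default_rng(0)
        motion1 = rng.random((200, 50))        # 200 frames, 50 motion values
        motion2 = np.roll(motion1, 5, axis=0)  # delayed copy = followership
        e1 = kinetic_energy_series(motion1)
        e2 = kinetic_energy_series(motion2)
        lags, corrs = lagged_cross_correlation(e1, e2, max_lag=10)
        print("peak lag (frames):", lags[np.argmax(corrs)])  # expect 5

    The position of the peak in the lag profile indicates who leads, and its height indicates how strongly the pair is entrained.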

  7. Quality Matters! Differences between Expressive and Receptive Non-Verbal Communication Skills in Adolescents with ASD

    Science.gov (United States)

    Grossman, Ruth B.; Tager-Flusberg, Helen

    2012-01-01

    We analyzed several studies of non-verbal communication (prosody and facial expressions) completed in our lab and conducted a secondary analysis to compare performance on receptive vs. expressive tasks by adolescents with ASD and their typically developing peers. Results show a significant between-group difference for the aggregate score of…

  8. Interpersonal Interactions in Instrumental Lessons: Teacher/Student Verbal and Non-Verbal Behaviours

    Science.gov (United States)

    Zhukov, Katie

    2013-01-01

    This study examined verbal and non-verbal teacher/student interpersonal interactions in higher education instrumental music lessons. Twenty-four lessons were videotaped and teacher/student behaviours were analysed using a researcher-designed instrument. The findings indicate predominance of student and teacher joke among the verbal behaviours with…

  9. The Introduction of Non-Verbal Communication in Greek Education: A Literature Review

    Science.gov (United States)

    Stamatis, Panagiotis J.

    2012-01-01

    Introduction: The introductory part of this paper underlines the research interest of the educational community in the issue of non-verbal communication in education. The question of introducing this scientific field into Greek education is addressed within the context of this research, which covers many aspects. Method: The paper essentially…

  10. Effect of interaction with clowns on vital signs and non-verbal communication of hospitalized children.

    Science.gov (United States)

    Alcântara, Pauline Lima; Wogel, Ariane Zonho; Rossi, Maria Isabela Lobo; Neves, Isabela Rodrigues; Sabates, Ana Llonch; Puggina, Ana Cláudia

    2016-12-01

    Compare the non-verbal communication of children before and during interaction with clowns, and compare their vital signs before and after this interaction. Uncontrolled, cross-sectional, quantitative intervention study with children admitted to a public university hospital. The intervention was performed by medical students dressed as clowns and included magic tricks, juggling, singing with the children, making soap bubbles and comedic performances. The intervention time was 20 minutes. Vital signs were assessed in two measurements with an interval of one minute immediately before and after the interaction. Non-verbal communication was observed before and during the interaction using the Non-Verbal Communication Template Chart, a tool in which non-verbal behaviors are assessed as effective or ineffective in the interactions. The sample consisted of 41 children with a mean age of 7.6±2.7 years; most were aged 7 to 11 years (n=23; 56%) and were males (n=26; 63.4%). There was a statistically significant difference in systolic and diastolic blood pressure, pain and non-verbal behavior of the children with the intervention. Systolic and diastolic blood pressure increased and pain scales showed decreased scores. The playful interaction with clowns can be a therapeutic resource to minimize the effects of the stressful environment during the intervention, improve the children's emotional state and reduce the perception of pain. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  11. Verbal and Non-Verbal Communication and Coordination in Mission Control

    Science.gov (United States)

    Vinkhuyzen, Erik; Norvig, Peter (Technical Monitor)

    1998-01-01

    In this talk I will present some video-materials gathered in Mission Control during simulations. The focus of the presentation will be on verbal and non-verbal communication between the officers in the front and backroom, especially the practices that have evolved around a peculiar communications technology called voice loops.

  12. Trauma team leaders' non-verbal communication: video registration during trauma team training.

    Science.gov (United States)

    Härgestam, Maria; Hultin, Magnus; Brulin, Christine; Jacobsson, Maritha

    2016-03-25

    There is widespread consensus on the importance of safe and secure communication in healthcare, especially in trauma care where time is a limiting factor. Although non-verbal communication has an impact on communication between individuals, there is only limited knowledge of how trauma team leaders communicate. The purpose of this study was to investigate how trauma team members are positioned in the emergency room, and how leaders communicate in terms of gaze direction, vocal nuances, and gestures during trauma team training. Eighteen trauma teams were audio and video recorded during trauma team training in the emergency department of a hospital in northern Sweden. Quantitative content analysis was used to categorize the team members' positions and the leaders' non-verbal communication: gaze direction, vocal nuances, and gestures. The quantitative data were interpreted in relation to the specific context. Time sequences of the leaders' gaze direction, speech time, and gestures were identified separately and registered as time (seconds) and proportions (%) of the total training time. The team leaders who gained control over the most important area in the emergency room, the "inner circle", positioned themselves as heads of the team, using gaze direction, gestures, vocal nuances, and verbal commands that solidified their verbal message. Changes in position required both attention and collaboration. Leaders who spoke in a hesitant voice, or were silent, expressed ambiguity in their non-verbal communication, and other team members took over the leader's tasks. In teams where the leader had control over the inner circle, the members seemed to have an awareness of each other's roles and tasks, knowing when in time and where in space these tasks needed to be executed. Deviations in the leaders' communication increased the ambiguity in the communication, which had consequences for the teamwork. Communication cannot be taken for granted; it needs to be practiced.
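
    As a concrete illustration of this kind of time-budget bookkeeping, the sketch below (hypothetical segment data and behavior labels, not the study's coding instrument) turns a list of coded time segments into seconds and proportions of the total training time per behavior.

        from collections import defaultdict

        # Coded segments (behavior, start_s, end_s), as a video annotation
        # tool might export them; the values here are made up for illustration.
        segments = [
            ("gaze_at_team", 0.0, 12.5),
            ("speech", 3.0, 9.0),
            ("gesture", 10.0, 14.0),
            ("gaze_at_team", 20.0, 31.0),
        ]
        total_training_time = 60.0  # seconds of analysed video

        # Sum the duration of every segment per behavior category.
        durations = defaultdict(float)
        for behavior, start, end in segments:
            durations[behavior] += end - start

        for behavior, secs in sorted(durations.items()):
            share = 100.0 * secs / total_training_time
            print(f"{behavior}: {secs:.1f} s ({share:.1f} % of training time)")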

  13. Multi-level prediction of short-term outcome of depression: non-verbal interpersonal processes, cognitions and personality traits

    NARCIS (Netherlands)

    Geerts, E; Bouhuys, N

    1998-01-01

    It was hypothesized that personality factors determine the short-term outcome of depression, and that they may do this via non-verbal interpersonal interactions and via cognitive interpretations of non-verbal behaviour. Twenty-six hospitalized depressed patients entered the study.

  14. Improvising Non-verbally for Learning the Spoken Language (Improviser non verbalement pour l’apprentissage de la langue parlée)

    Directory of Open Access Journals (Sweden)

    Francine Chaîné

    2015-04-01

    Full Text Available A reflective text on the practice of improvisation in a school context with a view to learning the spoken language. One might think that verbal improvisation is the means par excellence for language learning, but experience has led us to discover the richness of non-verbal improvisation, followed by spoken reflection on the practice, as a privileged means. The article is illustrated with a non-verbal improvisation workshop aimed at children or adolescents.

  15. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction, and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis of both recent and future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-looking…

  16. Oncologists' non-verbal behavior and analog patients' recall of information.

    Science.gov (United States)

    Hillen, Marij A; de Haes, Hanneke C J M; van Tienhoven, Geertjan; van Laarhoven, Hanneke W M; van Weert, Julia C M; Vermeulen, Daniëlle M; Smets, Ellen M A

    2016-06-01

Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture and smiling, on patients' recall of information and perceived friendliness of the oncologist. Moreover, the influence of patient characteristics on recall was examined, both directly and as a moderator of non-verbal communication. Material and methods Non-verbal communication of an oncologist was experimentally varied using video vignettes. In total 194 breast cancer patients/survivors and healthy women participated as 'analog patients', viewing a randomly selected video version while imagining themselves in the role of the patient. Directly after viewing, they evaluated the oncologist. Between 24 and 48 hours later, participants' passive recall, i.e. recognition, and free recall of information provided by the oncologist were assessed. Results Participants' recognition was higher if the oncologist maintained more consistent eye contact (β = 0.17). More eye contact and smiling led to a perception of the oncologist as more friendly. Body posture and smiling did not significantly influence recall. Older age predicted significantly worse recognition (β = -0.28) and free recall (β = -0.34) of information. Conclusion Oncologists may be able to facilitate their patients' recall functioning through consistent eye contact. This seems particularly relevant for older patients, whose recall is significantly worse. These findings can be used in training, focused on how to maintain eye contact while managing computer tasks.

  17. Shall we use non-verbal fluency in schizophrenia? A pilot study.

    Science.gov (United States)

    Rinaldi, Romina; Trappeniers, Julie; Lefebvre, Laurent

    2014-05-30

Over the last few years, numerous studies have attempted to explain fluency impairments in people with schizophrenia, leading to heterogeneous results. This could notably be due to the fact that fluency is often used in its verbal form, where semantic dimensions are implied. In order to gain an in-depth understanding of fluency deficits, a non-verbal fluency task - the Five-Point Test (5PT) - was proposed to 24 patients with schizophrenia and to 24 healthy subjects matched in terms of age, gender and schooling. The 5PT involves producing as many abstract figures as possible within 1 min by connecting points with straight lines. All subjects also completed the Frontal Assessment Battery (FAB), while those with schizophrenia were further assessed using the Positive and Negative Syndrome Scale (PANSS). Results show that the 5PT evaluation differentiates patients from healthy subjects with regard to the number of figures produced. Patients' results also suggest that the number of figures produced is linked to the "overall executive functioning" and to some inhibition components. Although this study is a first step in the non-verbal fluency research field, we believe that experimental psychopathology could benefit from investigations of non-verbal fluency. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds. These tests evaluated auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL were found to present with similar deficits in pitch retention, and in identification and short-term memorization of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  19. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

Full Text Available This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media have progressively taken on a more prominent role. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  20. Deaf children’s non-verbal working memory is impacted by their language experience

    Directory of Open Access Journals (Sweden)

    Chloe eMarshall

    2015-05-01

Full Text Available Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children's reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) involve deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth from Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n = 27), deaf native users of British Sign Language (BSL; n = 7), and deaf non-native signers (n = 19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and if language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality – spoken or signed – rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM tasks.

  1. Network structure underlying resolution of conflicting non-verbal and verbal social information.

    Science.gov (United States)

    Watanabe, Takamitsu; Yahata, Noriaki; Kawakubo, Yuki; Inoue, Hideyuki; Takano, Yosuke; Iwashiro, Norichika; Natsubori, Tatsunobu; Takao, Hidemasa; Sasaki, Hiroki; Gonoi, Wataru; Murakami, Mizuho; Katsura, Masaki; Kunimatsu, Akira; Abe, Osamu; Kasai, Kiyoto; Yamasue, Hidenori

    2014-06-01

    Social judgments often require resolution of incongruity in communication contents. Although previous studies revealed that such conflict resolution recruits brain regions including the medial prefrontal cortex (mPFC) and posterior inferior frontal gyrus (pIFG), functional relationships and networks among these regions remain unclear. In this functional magnetic resonance imaging study, we investigated the functional dissociation and networks by measuring human brain activity during resolving incongruity between verbal and non-verbal emotional contents. First, we found that the conflict resolutions biased by the non-verbal contents activated the posterior dorsal mPFC (post-dmPFC), bilateral anterior insula (AI) and right dorsal pIFG, whereas the resolutions biased by the verbal contents activated the bilateral ventral pIFG. In contrast, the anterior dmPFC (ant-dmPFC), bilateral superior temporal sulcus and fusiform gyrus were commonly involved in both of the resolutions. Second, we found that the post-dmPFC and right ventral pIFG were hub regions in networks underlying the non-verbal- and verbal-content-biased resolutions, respectively. Finally, we revealed that these resolution-type-specific networks were bridged by the ant-dmPFC, which was recruited for the conflict resolutions earlier than the two hub regions. These findings suggest that, in social conflict resolutions, the ant-dmPFC selectively recruits one of the resolution-type-specific networks through its interaction with resolution-type-specific hub regions. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  2. School effects on non-verbal intelligence and nutritional status in rural Zambia

    OpenAIRE

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2015-01-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and ...

  3. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    Science.gov (United States)

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds, were created expressly for the study. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of the inter-stimulus intervals (ISIs).

  4. Linguistic analysis of verbal and non-verbal communication in the operating room.

    Science.gov (United States)

    Moore, Alison; Butt, David; Ellis-Clarke, Jodie; Cartmill, John

    2010-12-01

    Surgery can be a triumph of co-operation, the procedure evolving as a result of joint action between multiple participants. The communication that mediates the joint action of surgery is conveyed by verbal but particularly by non-verbal signals. Competing priorities superimposed by surgical learning must also be negotiated within this context and this paper draws on techniques of systemic functional linguistics to observe and analyse the flow of information during such a phase of surgery. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.

  5. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication.

    Science.gov (United States)

    Symons, Ashley E; El-Deredy, Wael; Schwartze, Michael; Kotz, Sonja A

    2016-01-01

Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands may support communication within and between the underlying networks.

  6. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    Directory of Open Access Journals (Sweden)

    Mirna Lie Hosogi Senaha

Full Text Available Abstract Selective disturbances of semantic memory have attracted the interest of many investigators, and the question of the existence of single or multiple semantic systems remains a very controversial theme in the literature. Objectives: To discuss the question of multiple semantic systems based on a longitudinal study of a patient who progressed from fluent primary progressive aphasia to semantic dementia. Methods: A 66-year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three matched control individuals. Results: In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, preservation of the syntactic and phonological aspects of language, besides surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe and the semantic disturbance, initially limited to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions: Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system.

  7. Motor system contributions to verbal and non-verbal working memory

    Directory of Open Access Journals (Sweden)

    Diana A Liao

    2014-09-01

Full Text Available Working memory (WM) involves the ability to maintain and manipulate information held in mind. Neuroimaging studies have shown that secondary motor areas activate during WM for verbal content (e.g., words or letters), in the absence of primary motor area activation. This activation pattern may reflect an inner speech mechanism supporting online phonological rehearsal. Here, we examined the causal relationship between motor system activity and WM processing by using transcranial magnetic stimulation (TMS) to manipulate motor system activity during WM rehearsal. We tested WM performance for verbalizable (words and pseudowords) and non-verbalizable (Chinese characters) visual information. We predicted that disruption of motor circuits would specifically affect WM processing of verbalizable information. We found that TMS targeting motor cortex slowed response times on verbal WM trials with high (pseudoword) vs. low (real word) phonological load. However, non-verbal WM trials were also significantly slowed with motor TMS. WM performance was unaffected by sham stimulation or TMS over visual cortex. Self-reported use of motor strategy predicted the degree of motor stimulation disruption on WM performance. These results provide evidence of the motor system's contributions to verbal and non-verbal WM processing. We speculate that the motor system supports WM by creating motor traces consistent with the type of information being rehearsed during maintenance.

  8. [Non-verbal communication and executive function impairment after traumatic brain injury: a case report].

    Science.gov (United States)

    Sainson, C

    2007-05-01

Following post-traumatic impairment in executive function, failure to adjust to communication situations often creates major obstacles to social and professional reintegration. The analysis of pathological verbal communication has been based on clinical scales since the 1980s, but that of non-verbal elements has been neglected, although their importance should be acknowledged. The aim of this research was to study non-verbal aspects of communication in a case of executive-function impairment after traumatic brain injury. During the patient's conversation with an interlocutor, all non-verbal parameters - coverbal gestures, gaze, posture, proxemics and facial expressions - were studied in as ecological a way as possible, to closely approximate natural conversation conditions. Such an approach highlights the difficulties such patients experience in communicating, difficulties of a pragmatic kind that have so far been overlooked by traditional investigations, which mainly take into account the formal linguistic aspects of language. The analysis of the patient's conversation revealed non-verbal dysfunctions, not only on a pragmatic and interactional level but also in terms of enunciation. Moreover, interactional adjustment phenomena were noted in the interlocutor's behaviour. The two inseparable aspects of communication - verbal and non-verbal - should be equally assessed in patients with communication difficulties; highlighting distortions in each area might bring about an improvement in the rehabilitation of such people.

  9. How physician electronic health record screen sharing affects patient and doctor non-verbal communication in primary care.

    Science.gov (United States)

    Asan, Onur; Young, Henry N; Chewning, Betty; Montague, Enid

    2015-03-01

Use of electronic health records (EHRs) in primary-care exam rooms changes the dynamics of patient-physician interaction. This study examines and compares doctor-patient non-verbal communication (eye-gaze patterns) during primary care encounters for three different screen/information sharing groups: (1) active information sharing, (2) passive information sharing, and (3) technology withdrawal. Researchers video recorded 100 primary-care visits and coded the direction and duration of doctor and patient gaze. Descriptive statistics compared the length of gaze patterns as a percentage of visit length. Lag sequential analysis determined whether physician eye-gaze influenced patient eye gaze, and vice versa, and examined variations across groups. Significant differences were found in duration of gaze across groups. Lag sequential analysis found significant associations between several gaze patterns. Some, such as DGP-PGD ("doctor gaze patient" followed by "patient gaze doctor") were significant for all groups. Others, such as DGT-PGU ("doctor gaze technology" followed by "patient gaze unknown") were unique to one group. Some technology use styles (active information sharing) seem to create more patient engagement, while others (passive information sharing) lead to patient disengagement. Doctors can engage patients in communication by using EHRs in the visits. EHR training and design should facilitate this. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
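
    For readers unfamiliar with lag sequential analysis, its core computation is a table of conditional probabilities of one coded behavior following another at lag 1. Below is a minimal sketch; the gaze codes in the example stream (DGP, PGD, DGT, PGU) follow the abstract, but the stream itself is hypothetical and the study's actual coding and significance testing are more elaborate.

```python
from collections import Counter

def lag1_transitions(events):
    """Conditional probability of each consequent code given its antecedent,
    estimated from lag-1 (adjacent) pairs in a coded event stream."""
    pairs = Counter(zip(events, events[1:]))
    antecedents = Counter(events[:-1])
    return {(a, c): n / antecedents[a] for (a, c), n in pairs.items()}

# Hypothetical gaze stream from one visit
stream = ["DGP", "PGD", "DGT", "PGU", "DGP", "PGD", "DGP", "PGD"]
for (a, c), p in sorted(lag1_transitions(stream).items()):
    print(f"P({c} | {a}) = {p:.2f}")
```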

  10. Linking social cognition with social interaction: Non-verbal expressivity, social competence and "mentalising" in patients with schizophrenia spectrum disorders

    Directory of Open Access Journals (Sweden)

    Lehmkämper Caroline

    2009-01-01

    Full Text Available Abstract Background Research has shown that patients with schizophrenia spectrum disorders (SSD can be distinguished from controls on the basis of their non-verbal expression. For example, patients with SSD use facial expressions less than normals to invite and sustain social interaction. Here, we sought to examine whether non-verbal expressivity in patients corresponds with their impoverished social competence and neurocognition. Method Fifty patients with SSD were videotaped during interviews. Non-verbal expressivity was evaluated using the Ethological Coding System for Interviews (ECSI. Social competence was measured using the Social Behaviour Scale and psychopathology was rated using the Positive and Negative Symptom Scale. Neurocognitive variables included measures of IQ, executive functioning, and two mentalising tasks, which tapped into the ability to appreciate mental states of story characters. Results Non-verbal expressivity was reduced in patients relative to controls. Lack of "prosocial" nonverbal signals was associated with poor social competence and, partially, with impaired understanding of others' minds, but not with non-social cognition or medication. Conclusion This is the first study to link deficits in non-verbal expressivity to levels of social skills and awareness of others' thoughts and intentions in patients with SSD.

  11. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    Science.gov (United States)

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, 2012), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.
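
    Burstiness of an event train is commonly summarized by the Goh-Barabási coefficient B = (σ − μ)/(σ + μ) of the inter-event intervals, where B is near −1 for a perfectly regular train, near 0 for a Poisson process, and approaches +1 for highly bursty behavior. A sketch of that estimator follows; the paper may use a refined variant, and the onset times are hypothetical.

```python
import numpy as np

def burstiness(event_times):
    """Goh-Barabasi burstiness B = (sd - mean) / (sd + mean) of the
    inter-event intervals: -1 = regular, 0 = Poisson, toward +1 = bursty."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

# Hypothetical gesture onsets (seconds) within one dialog
print(round(burstiness([0.1, 0.2, 0.3, 2.9, 3.0, 3.1, 9.5]), 2))
```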

  12. Non-verbal communication of the residents living in homes for the older people in Slovenia.

    Science.gov (United States)

    Zaletel, Marija; Kovacev, Asja Nina; Sustersic, Olga; Kragelj, Lijana Zaletel

    2010-09-01

Aging of the population is a growing problem in all developed societies. Older people need more health and social services, and their quality of life in such facilities is becoming more and more important. The study aimed at determining the characteristics of non-verbal communication of older people living in old people's homes (OPH). The sample consisted of 267 residents of the OPH, aged 65-96 years, and 267 caregivers from twenty-seven randomly selected OPH. Three types of non-verbal communication were observed and analysed using univariate and multivariate statistical methods. In facial expressions and head movements, about 75% of the older people looked into the eyes of their caregivers, and about 60% looked around, while laughing or pressing the lips together was rarely noticed. The differences between genders were not statistically significant, while statistically significant differences among age groups were observed in dropping the eyes (p = 0.004) and smiling (p = 0.008). In hand gestures and trunk movements, the majority of older people most often moved forwards and clenched their fingers, while they most rarely stroked and caressed their caregivers. The differences between genders were statistically significant in leaning on the table (p = 0.001) and changing position on the chair (p = 0.013). Statistically significant differences among age groups were registered in leaning forwards (p = 0.006) and pointing to the others (p = 0.036). In different modes of speaking and paralinguistic signs, almost 75% of the older people spoke normally and about 70% kept silent, while they rarely quarrelled. The present study showed that older people in OPH in Slovenia communicated significantly less frequently with hand gestures and trunk movements than with facial expressions and head movements or different modes of speaking.

  13. Achieving visibility? Use of non-verbal communication in interactions between patients and pharmacists who do not share a common language.

    Science.gov (United States)

    Stevenson, Fiona

    2014-06-01

    Despite the seemingly insatiable interest in healthcare professional-patient communication, less attention has been paid to the use of non-verbal communication in medical consultations. This article considers pharmacists' and patients' use of non-verbal communication to interact directly in consultations in which they do not share a common language. In total, 12 video-recorded, interpreted pharmacy consultations concerned with a newly prescribed medication or a change in medication were analysed in detail. The analysis focused on instances of direct communication initiated by either the patient or the pharmacist, despite the presence of a multilingual pharmacy assistant acting as an interpreter. Direct communication was shown to occur through (i) the demonstration of a medical device, (ii) the indication of relevant body parts and (iii) the use of limited English. These connections worked to make patients and pharmacists visible to each other and thus to maintain a sense of mutual involvement in consultations within which patients and pharmacists could enact professionally and socially appropriate roles. In a multicultural society this work is important in understanding the dynamics involved in consultations in situations in which language is not shared and thus in considering the development of future research and policy. © 2014 The Author. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for SHIL (SHIL).

  14. Patterns of non-verbal social interactions within intensive mathematics intervention contexts

    Science.gov (United States)

    Thomas, Jonathan Norris; Harkness, Shelly Sheats

    2016-06-01

This study examined the non-verbal patterns of interaction within an intensive mathematics intervention context. Specifically, the authors draw on a social constructivist worldview to examine a teacher's use of gesture in this setting. The teacher conducted a series of longitudinal teaching experiments with a small number of young, school-age children in the context of early arithmetic development. From these experiments, the authors gathered extensive video records of teaching practice and, from an inductive analysis of these records, identified three distinct patterns of teacher gesture: behavior eliciting, behavior suggesting, and behavior replicating. Awareness of their potential to influence students via gesture may prompt teachers to more closely attend to their own interactions with mathematical tools and take these teacher interactions into consideration when forming interpretations of students' cognition.

  15. Judging the urgency of non-verbal auditory alarms: a case study.

    Science.gov (United States)

    Arrabito, G Robert; Mondor, Todd; Kent, Kimberley

    2004-06-22

When designed correctly, non-verbal auditory alarms can convey different levels of urgency to the aircrew, and thereby permit the operator to establish the appropriate level of priority to address the alarmed condition. The conveyed level of urgency of five non-verbal auditory alarms presently used in the Canadian Forces CH-146 Griffon helicopter was investigated. Pilots of the CH-146 Griffon helicopter and non-pilots rated the perceived urgency of the signals using a rating scale. The pilots also ranked the urgency of the alarms in a post-experiment questionnaire to reflect their assessment of the actual situation that triggers the alarms. The results of this investigation revealed that participants' ratings of perceived urgency appear to be based on the acoustic properties of the alarms, which are known to affect the listener's perceived level of urgency. Although for 28% of the pilots the mapping of perceived urgency to the urgency of their perception of the triggering situation was statistically significant for three of the five alarms, the overall data suggest that the triggering situations are not adequately conveyed by the acoustic parameters inherent in the alarms. The pilots' judgement of the triggering situation was intended as a means of evaluating the reliability of the alerting system. These data will subsequently be discussed with respect to proposed enhancements in alerting systems as they relate to the problem of phase of flight. These results call for more serious consideration of incorporating situational awareness in the design and assignment of auditory alarms in aircraft.

  16. Individual differences in non-verbal number acuity correlate with maths achievement.

    Science.gov (United States)

    Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa

    2008-10-02

    Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.

  17. Non-verbal emotion communication training induces specific changes in brain function and structure.

    Science.gov (United States)

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure.

  18. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  19. Contextual analysis of human non-verbal guide behaviors to inform the development of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    This paper reports the first step in a series of studies to design the interaction behaviors of an outdoor robotic guide. We describe and report the use case development carried out to identify effective human tour guide behaviors. In this paper we focus on non-verbal communication cues in gaze,

  20. Treating depressive symptoms in psychosis : A Network Meta-Analysis on the Effects of Non-Verbal Therapies

    NARCIS (Netherlands)

    Steenhuis, L. A.; Nauta, M. H.; Bockting, C. L. H.; Pijnenborg, G. H. M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  1. The similar effects of verbal and non-verbal intervening tasks on word recall in an elderly population.

    Science.gov (United States)

    Williams, B R; Sullivan, S K; Morra, L F; Williams, J R; Donovick, P J

    2014-01-01

Vulnerability to retroactive interference has been shown to increase with cognitive aging. Consistent with the findings of the memory and aging literature, the authors of the California Verbal Learning Test-II (CVLT-II) suggest that a non-verbal task be administered during the test's delay interval to minimize the effects of retroactive interference on delayed recall. The goal of the present study was to determine the extent to which retroactive interference caused by non-verbal and verbal intervening tasks affects recall of verbal information in non-demented older adults. The effects of retroactive interference on recall of words during Long-Delay recall on the CVLT-II were evaluated. Participants included 85 adults age 60 and older. During a 20-minute delay interval on the CVLT-II, participants received either a verbal (WAIS-III Vocabulary or Peabody Picture Vocabulary Test-IIIB) or non-verbal (Raven's Standard Progressive Matrices or WAIS-III Block Design) intervening task. As in previous research with young adults (Williams & Donovick, 2008), older adults recalled the same number of words across all groups, regardless of the type of intervening task. These findings suggest that the administration of verbal intervening tasks during the CVLT-II does not elicit more retroactive interference than non-verbal intervening tasks, and thus verbal tasks need not be avoided during the delay interval of the CVLT-II.

  3. Adults with Asperger Syndrome with and without a Cognitive Profile Associated with "Non-Verbal Learning Disability." A Brief Report

    Science.gov (United States)

    Nyden, Agneta; Niklasson, Lena; Stahlberg, Ola; Anckarsater, Henrik; Dahlgren-Sandberg, Annika; Wentz, Elisabet; Rastam, Maria

    2010-01-01

    Asperger syndrome (AS) and non-verbal learning disability (NLD) are both characterized by impairments in motor coordination, visuo-perceptual abilities, pragmatics and comprehension of language and social understanding. NLD is also defined as a learning disorder affecting functions in the right cerebral hemisphere. The present study investigates…

  4. Near Real-Time Comprehension Classification with Artificial Neural Networks: Decoding e-Learner Non-Verbal Behavior

    Science.gov (United States)

    Holmes, Mike; Latham, Annabel; Crockett, Keeley; O'Shea, James D.

    2018-01-01

    Comprehension is an important cognitive state for learning. Human tutors recognize comprehension and non-comprehension states by interpreting learner non-verbal behavior (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffold in the context of perceived learner comprehension. Near real-time…

  5. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    Science.gov (United States)

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  6. Role of Auditory Non-Verbal Working Memory in Sentence Repetition for Bilingual Children with Primary Language Impairment

    Science.gov (United States)

    Ebert, Kerry Danahy

    2014-01-01

    Background: Sentence repetition performance is attracting increasing interest as a valuable clinical marker for primary (or specific) language impairment (LI) in both monolingual and bilingual populations. Multiple aspects of memory appear to contribute to sentence repetition performance, but non-verbal memory has not yet been considered. Aims: To…

  7. The Efficiency of Peer Teaching of Developing Non Verbal Communication to Children with Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    Alshurman, Wael; Alsreaa, Ihsani

    2015-01-01

This study aimed at identifying the efficiency of peer teaching in developing non-verbal communication in children with autism spectrum disorder (ASD). The study was carried out on a sample of 10 children with autism spectrum disorder (ASD), diagnosed according to the criteria adopted at the Al-taif qualification center in 2013 in The…

  8. The use of virtual characters to assess and train non-verbal communication in high-functioning autism.

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of "transformed social interactions." This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA.

  9. Measuring Verbal and Non-Verbal Communication in Aphasia: Reliability, Validity, and Sensitivity to Change of the Scenario Test

    Science.gov (United States)

    van der Meulen, Ineke; van de Sandt-Koenderman, W. Mieke E.; Duivenvoorden, Hugo J.; Ribbers, Gerard M.

    2010-01-01

    Background: This study explores the psychometric qualities of the Scenario Test, a new test to assess daily-life communication in severe aphasia. The test is innovative in that it: (1) examines the effectiveness of verbal and non-verbal communication; and (2) assesses patients' communication in an interactive setting, with a supportive…

  10. School effects on non-verbal intelligence and nutritional status in rural Zambia.

    Science.gov (United States)

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E; Grigorenko, Elena L

    2016-02-01

This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and 75.9% of the between-school differences in NI and BMI, respectively. Grade-specific HLM yielded higher between-school variation of NI (41%) and BMI (14.6%) for students in grade 3 compared to grades 4 to 7. School factors showed a differential pattern of associations with NI and BMI across grades. The distance to a health post and teacher's teaching experience were the strongest predictors of NI (particularly in grades 4, 6 and 7); the presence of a preschool was linked to lower BMI in grades 4 to 6. Implications for improving access and quality of education in rural Zambia are discussed.
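
    The reported percentages of variance "conditioned by differences between schools" correspond to intraclass correlations from a null (intercept-only) random-intercept model: between-school variance divided by total variance. A hedged sketch of that computation using statsmodels, with hypothetical column names (`ni` for non-verbal intelligence, `school` for the grouping factor); the study's full HLM includes predictors beyond this null model.

```python
import statsmodels.formula.api as smf

def school_icc(df, outcome="ni", group="school"):
    """Intraclass correlation from a null random-intercept model:
    between-group variance / (between-group + residual variance)."""
    fit = smf.mixedlm(f"{outcome} ~ 1", df, groups=df[group]).fit()
    between = fit.cov_re.iloc[0, 0]  # variance of the random intercepts
    within = fit.scale               # residual (within-school) variance
    return between / (between + within)

# With the study's data, school_icc(df, "ni") would be about 0.235 and
# school_icc(df, "bmi") about 0.077, matching the reported 23.5% and 7.7%.
```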

  11. Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes.

    Science.gov (United States)

    Stoycheva, Polina; Tiippana, Kaisa

    2018-03-14

The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, due to contra-lateral innervations, this functional lateralization is reflected in a hand advantage during certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times. This suggests that haptic memory traces are more robust to decay when they are processed by the left hand-right hemisphere.
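
    The discriminability index d' from signal detection theory is the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch follows, using a standard log-linear correction to keep rates of exactly 0 or 1 finite; the correction actually used in the study is not specified, and the counts are hypothetical.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate); counts are corrected
    log-linearly so extreme rates of 0 or 1 remain finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one hand/retention-time condition
print(round(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38), 2))
```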

  12. Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.

    Science.gov (United States)

    Gillis, Randall L; Nilsen, Elizabeth S

    2017-06-01

    Listeners are exposed to inconsistencies in communication; for example, when speakers' words (i.e. verbal) are discrepant with their demonstrated emotions (i.e. non-verbal). Such inconsistencies introduce ambiguity, which may render a speaker to be a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

  13. Memory and comprehension deficits in spatial descriptions of children with non-verbal and reading disabilities.

    Science.gov (United States)

    Mammarella, Irene C; Meneghetti, Chiara; Pazzaglia, Francesca; Cornoldi, Cesare

    2014-01-01

    The present study investigated the difficulties encountered by children with non-verbal learning disability (NLD) and reading disability (RD) when processing spatial information derived from descriptions, based on the assumption that both groups should find it more difficult than matched controls, but for different reasons, i.e., due to a memory encoding difficulty in cases of RD and to spatial information comprehension problems in cases of NLD. Spatial descriptions from both survey and route perspectives were presented to 9-12-year-old children divided into three groups: NLD (N = 12); RD (N = 12), and typically developing controls (TD; N = 15); then participants completed a sentence verification task and a memory for locations task. The sentence verification task was presented in two conditions: in one the children could refer to the text while answering the questions (i.e., text present condition), and in the other the text was withdrawn (i.e., text absent condition). Results showed that the RD group benefited from the text present condition, but was impaired to the same extent as the NLD group in the text absent condition, suggesting that the NLD children's difficulty is due mainly to their poor comprehension of spatial descriptions, while the RD children's difficulty is due more to a memory encoding problem. These results are discussed in terms of their implications in the neuropsychological profiles of children with NLD or RD, and the processes involved in spatial descriptions.

  14. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how they can be used in specific (social, pedagogical, etc.) contexts and what their potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical perspective.

  15. Relationship of Non-Verbal Intelligence Materials as Catalyst for Academic Achievement and Peaceful Co-Existence among Secondary School Students in Nigeria

    Science.gov (United States)

    Sambo, Aminu

    2015-01-01

This paper examines students' performance on non-verbal intelligence tests relative to the academic achievement of selected secondary school students. Two hypotheses were formulated with a view to generating data for ease of analysis. Two non-verbal intelligence tests, viz. Raven's Standard Progressive Matrices (SPM) and AH4 Part II…

  16. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    OpenAIRE

    Anna S. Kobysheva; Viktoria A. Nakaeva

    2017-01-01

    The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  18. Computerized training of non-verbal reasoning and working memory in children with intellectual disability

    Directory of Open Access Journals (Sweden)

    Stina eSöderqvist

    2012-10-01

Full Text Available Children with intellectual disabilities show deficits in both reasoning ability and working memory (WM) that impact everyday functioning and academic achievement. In this study we investigated the feasibility of cognitive training for improving WM and non-verbal reasoning (NVR) ability in children with intellectual disability. Participants were randomized to a 5-week adaptive training program (intervention group) or a non-adaptive version of the program (active control group). Cognitive assessments were conducted prior to and directly after training, and one year later, to examine effects of the training. Improvements during training varied largely and amount of progress during training predicted transfer to WM and comprehension of instructions, with higher training progress being associated with greater transfer effects. The strongest predictors for training progress were found to be gender, co-morbidity and baseline capacity on verbal WM. In particular, females without an additional diagnosis and with higher baseline performance showed greater progress. No significant effects of training were observed at the one-year follow-up, suggesting that training should be more intense or repeated in order for effects to persist in children with intellectual disabilities. A major finding of this study is that cognitive training is feasible in children with intellectual disabilities and can help improve their cognitive capacities. However, a minimum cognitive capacity or training ability seems necessary for the training to be beneficial, with some individuals showing little improvement in performance. Future studies of cognitive training should take into consideration how inter-individual differences in training progress influence transfer effects and further investigate how baseline capacities predict training outcome.

  19. Incongruence between Verbal and Non-Verbal Information Enhances the Late Positive Potential.

    Science.gov (United States)

    Morioka, Shu; Osumi, Michihiro; Shiotani, Mayu; Nobusako, Satoshi; Maeoka, Hiroshi; Okada, Yohei; Hiyamizu, Makoto; Matsuo, Atsushi

    2016-01-01

Smooth social communication consists of both verbal and non-verbal information. However, when verbal information is incongruent with non-verbal information, it is unclear how observers judge the trustworthiness of the person presenting the incongruence, or which brain activities accompany such trustworthiness judgments. In the present study, we attempted to identify the impact of incongruence between verbal information and facial expression on the perceived value of trustworthiness and on brain activity, using event-related potentials (ERP). Combinations of verbal information [positive/negative] and facial expressions [smile/angry] were presented randomly on a computer screen to 17 healthy volunteers. The trustworthiness of the presented facial expression was evaluated by the amount of donation offered by the observer to the person depicted on the computer screen. In addition, the time required to judge the value of trustworthiness was recorded for each trial. Using electroencephalography, ERP were obtained by averaging the wave patterns recorded while the participants judged the value of trustworthiness. The amount of donation offered was significantly lower when the verbal information and facial expression were incongruent, particularly for [negative × smile]. The amplitude of the early posterior negativity (EPN) at the temporal lobe showed no significant difference between conditions. However, the amplitude of the late positive potential (LPP) at the parietal electrodes was higher for the incongruent condition [negative × smile] than for the congruent condition [positive × smile]. These results suggest that the LPP amplitude observed from the parietal cortex is involved in the processing of incongruence between verbal information and facial expression.
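
    As background, an ERP such as the LPP is obtained by cutting the continuous EEG into stimulus-locked epochs, baseline-correcting each epoch, and averaging over trials. The schematic sketch below illustrates this; the sampling rate, epoch window, and baseline are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def erp_average(eeg, onsets, sfreq=500, tmin=-0.2, tmax=0.8):
    """Average stimulus-locked epochs of a single channel into an ERP.
    eeg: 1-D signal; onsets: sample indices of stimulus onsets,
    assumed to lie far enough from the recording edges."""
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)  # pre-stimulus mean
    return (epochs - baseline).mean(axis=0)                 # mean over trials

# The LPP would then be quantified as the mean amplitude of this ERP in a
# late post-stimulus window (e.g., roughly 400-800 ms) at parietal electrodes.
```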

  20. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Full Text Available Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent work in the field of audiovisual speech and, more specifically, techniques developed to measure the level of correspondence between audio and visual speech. It surveys the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measures of correspondence between audio and visual speech. Finally, the use of a synchrony measure for biometric identity verification based on talking faces is evaluated on the BANCA database.
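
    Correspondence measures of the kind surveyed here often project audio and visual feature streams into a shared subspace and score their correlation there, for example with canonical correlation analysis. A minimal sketch under that assumption (the MFCC-like and lip-feature dimensions are placeholders, not the paper's front-ends):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(1)
        n_frames = 500

        # Placeholder per-frame features: e.g., MFCC-like audio coefficients
        # and lip-region visual coefficients (both assumed for illustration).
        audio_feats = rng.normal(size=(n_frames, 13))
        visual_feats = 0.5 * audio_feats[:, :8] + rng.normal(size=(n_frames, 8))

        cca = CCA(n_components=1)
        a_proj, v_proj = cca.fit_transform(audio_feats, visual_feats)

        # Synchrony score: correlation of the first canonical pair.
        sync = np.corrcoef(a_proj[:, 0], v_proj[:, 0])[0, 1]
        print(f"audio-visual synchrony score: {sync:.3f}")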

  1. Non-verbal communication between nurses and people with an intellectual disability: a review of the literature.

    Science.gov (United States)

    Martin, Anne-Marie; O'Connor-Fenelon, Maureen; Lyons, Rosemary

    2010-12-01

    This article critically synthesizes current literature regarding communication between nurses and people with an intellectual disability who communicate non-verbally. The unique context of communication between the intellectual disability nurse and people with intellectual disability and the review aims and strategies are outlined. Communication as a concept is explored in depth. Communication between the intellectual disability nurse and the person with an intellectual disability is then comprehensively examined in light of existing literature. Issues including knowledge of the person with intellectual disability, mismatch of communication ability, and knowledge of communication arose as predominant themes. A critical review of the importance of communication in nursing practice follows. The paucity of literature relating to intellectual disability nursing and non-verbal communication clearly indicates a need for research.

  2. The Use of Virtual Characters to Assess and Train Non-Verbal Communication in High-Functioning Autism

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Investigating the psychological processes and neural mechanisms underlying non-verbal communication in HFA thus allows a better understanding of this disorder and potentially enables the development of more effective forms of psychotherapy and training. However, research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review on the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of “transformed social interactions.” This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational, and therapeutic tool for HFA. PMID:25360098

  3. Randomised controlled trial of a brief intervention targeting predominantly non-verbal communication in general practice consultations.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Mercer, Stewart

    2015-06-01

    The impact of changing non-verbal consultation behaviours is unknown. This trial assessed the effect of brief physician training on improving predominantly non-verbal communication. It was a cluster randomised parallel group trial among adults aged ≥16 years attending general practices close to the study coordinating centres in Southampton. Sixteen GPs were randomised to no training, or to training consisting of a brief presentation of behaviours identified in a prior study (acronym KEPe Warm: demonstrating Knowledge of the patient; Encouraging [back-channelling by saying 'hmm', for example]; Physically engaging [touch, gestures, slight lean]; Warm-up: cool/professional initially, warming up, and avoiding distancing or non-verbal cut-offs at the end of the consultation) plus encouragement to reflect on videos of their consultations. Outcomes were the Medical Interview Satisfaction Scale (MISS) mean item score (1-7) and patients' perceptions of other domains of communication. Intervention participants scored higher on the MISS overall (0.23, 95% confidence interval [CI] = 0.06 to 0.41), with the largest changes in the distress-relief and perceived-relationship subscales. Significant improvement occurred in perceived communication/partnership (0.29, 95% CI = 0.09 to 0.49) and health promotion (0.26, 95% CI = 0.05 to 0.46). Non-significant improvements occurred in perceptions of a personal relationship, a positive approach, and understanding the effects of the illness on life. Brief training of GPs in predominantly non-verbal communication in the consultation, with reflection on consultation videotapes, improves patients' perceptions of satisfaction, distress, a partnership approach, and health promotion. © British Journal of General Practice 2015.

  4. Comparative Analysis of Verbal and Non-Verbal Mental Activity Components Regarding the Young People with Different Intellectual Levels

    Directory of Open Access Journals (Sweden)

    Y. M. Revenko

    2013-01-01

    Full Text Available The paper maintains that, in order to develop educational programs and technologies adequate to the different stages of students' growth and maturity, there is a need to explore the natural determinants of intellectual development as well as the students' individual qualities affecting the cognition process. The authors investigate differences in the manifestations of intellect with reference to gender and analyze the correlations between verbal and non-verbal components of boys' and girls' mental activity depending on their general intellectual potential. The research, carried out at the Siberian State Automobile Road Academy and focused on first-year students, demonstrates the absence of gender differences in students' general intellect levels; there are, however, some other regularities: male students of different intellectual levels show the same correlation coefficient between verbal and non-verbal intellect, while female students show this correlation only at the high intellect level. In conclusion, the authors emphasize the need for an integral approach to raising students' mental abilities that considers the close interrelation between verbal and non-verbal component development. Teaching materials should stimulate different mental qualities by differentiating the educational process to develop students' individual abilities.

  5. Maternal postpartum depressive symptoms predict delay in non-verbal communication in 14-month-old infants.

    Science.gov (United States)

    Kawai, Emiko; Takagai, Shu; Takei, Nori; Itoh, Hiroaki; Kanayama, Naohiro; Tsuchiya, Kenji J

    2017-02-01

    We investigated the potential relationship between maternal depressive symptoms during the postpartum period and non-verbal communication skills of infants at 14 months of age in a birth cohort study of 951 infants and assessed what factors may influence this association. Maternal depressive symptoms were measured using the Edinburgh Postnatal Depression Scale, and non-verbal communication skills were measured using the MacArthur-Bates Communicative Development Inventories, which include Early Gestures and Later Gestures domains. Infants whose mothers had a high level of depressive symptoms (13+ points) during both the first month postpartum and at 10 weeks were approximately 0.5 standard deviations below normal in Early Gestures scores and 0.5-0.7 standard deviations below normal in Later Gestures scores. These associations were independent of potential explanations, such as maternal depression/anxiety prior to birth, breastfeeding practices, and recent depressive symptoms among mothers. These findings indicate that infants whose mothers have postpartum depressive symptoms may be at increased risk of experiencing delay in non-verbal development. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object; meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or dynamic audiovisual facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding the attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity, and thus regulating information flows, from heteromodal areas to the brain areas encoding the attended features.
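
    The connectivity claim in this record amounts to comparing the coupling of heteromodal and feature-encoding region time series across attention conditions. A toy sketch with synthetic ROI time courses (seed-based Pearson correlation is one simple connectivity measure; the study's actual analysis is not specified in the abstract):

        import numpy as np

        rng = np.random.default_rng(2)
        n_vols = 200  # fMRI volumes per condition

        def roi_connectivity(seed, target):
            """Functional connectivity as Pearson correlation of ROI time series."""
            return np.corrcoef(seed, target)[0, 1]

        # Synthetic time courses: a heteromodal 'integration' ROI and a
        # feature-encoding ROI, with stronger coupling under attention.
        hetero_attend = rng.normal(size=n_vols)
        feature_attend = 0.7 * hetero_attend + rng.normal(size=n_vols)

        hetero_ignore = rng.normal(size=n_vols)
        feature_ignore = 0.2 * hetero_ignore + rng.normal(size=n_vols)

        print("attended :", roi_connectivity(hetero_attend, feature_attend))
        print("ignored  :", roi_connectivity(hetero_ignore, feature_ignore))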

  7. Frontal brain deactivation during a non-verbal cognitive judgement bias test in sheep.

    Science.gov (United States)

    Guldimann, Kathrin; Vögeli, Sabine; Wolf, Martin; Wechsler, Beat; Gygax, Lorenz

    2015-02-01

    Animal welfare concerns have raised interest in animal affective states. These states also play an important role in the proximate control of behaviour. Due to their potential to modulate short-term emotional reactions, one specific focus is on long-term affective states, that is, mood. Such states can be assessed using non-verbal cognitive judgement bias paradigms. Here, we conducted a spatial variant of such a test on 24 focal animals kept under either unpredictable, stimulus-poor or predictable, stimulus-rich housing conditions to induce differential mood states. Based on functional near-infrared spectroscopy, we measured haemodynamic frontal brain reactions during the 10 s in which the sheep could observe the configuration of the cognitive judgement bias trial before indicating their assessment by a go/no-go reaction. We used (generalised) mixed-effects models to evaluate the data. Sheep from the unpredictable, stimulus-poor housing conditions took longer and were less likely to reach the learning criterion, and reacted slightly more optimistically in the cognitive judgement bias test, than sheep from the predictable, stimulus-rich housing conditions. A frontal cortical increase in deoxy-haemoglobin [HHb] and a decrease in oxy-haemoglobin [O2Hb] were observed during the sheep's visual assessment of the test situation, indicating a frontal cortical brain deactivation. This deactivation was more pronounced with the negativity of the test situation, as reflected by the provenance of the sheep from the unpredictable, stimulus-poor housing conditions, the proximity of the cue to the negatively reinforced cue location, or the absence of a go reaction in the trial. It seems that (1) sheep from the unpredictable, stimulus-poor housing conditions, in comparison with sheep from the predictable, stimulus-rich conditions, dealt less easily with the test conditions rich in stimuli, and that (2) long-term housing conditions seemingly did not influence mood…

  8. Peculiarities of Stereotypes about Non-Verbal Communication and their Role in Cross-Cultural Interaction between Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2012-12-01

    Full Text Available The article analyzes the peculiarities of stereotypes about non-verbal communication formed in Russian and Chinese cultures. The results are presented of an experimental study of the role of ethnic auto- and hetero-stereotypes about non-verbal communication in cross-cultural interaction between Russian and Chinese students at the Peoples’ Friendship University of Russia.

  9. Prevalence of inter-hemispheric asymmetry in children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder.

    Science.gov (United States)

    Wajnsztejn, Alessandra Bernardes Caturani; Bianco, Bianca; Barbosa, Caio Parente

    2016-01-01

    To describe the clinical and epidemiological features of children and adolescents with an interdisciplinary diagnosis of non-verbal learning disorder and to investigate the prevalence of inter-hemispheric asymmetry in this population group. Cross-sectional study including children and adolescents referred for interdisciplinary assessment with learning difficulty complaints who were given an interdisciplinary diagnosis of non-verbal learning disorder. The following variables were included in the analysis: sex-related prevalence, educational system, initial presumptive diagnoses and their respective prevalence, overall non-verbal learning disorder prevalence, prevalence according to school year, age range at the time of assessment, major family complaints, presence of inter-hemispheric asymmetry, arithmetic deficits, visuoconstruction impairments, and major signs and symptoms of non-verbal learning disorder. Out of 810 medical records analyzed, 14 were from individuals who met the diagnostic criteria for non-verbal learning disorder, including the presence of inter-hemispheric asymmetry. Of these 14 patients, 8 were male. The high prevalence of inter-hemispheric asymmetry suggests this parameter can be used to predict or support the diagnosis of non-verbal learning disorder.

  10. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    Science.gov (United States)

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred from facial expressions, emotional prediction for neutral faces requires advanced judgment. The process by which brain neuronal responses to neutral faces cause emotional changes remains unknown. To address this problem, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To this end, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified the underlying cortical activities. The event-related potentials triggered by conditioned neutral faces and originating from the posterior temporal lobe changed significantly during late face processing (600-700 ms post-stimulus), rather than during early face processing activities such as the P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces using EEG signals: a classification method based on a support vector machine enables the straightforward classification of neutral faces that trigger specific individual emotions.
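
    The classification step mentioned at the end can be sketched with a linear support vector machine over trial-wise EEG features. The features, labels, and kernel below are illustrative assumptions, not the study's pipeline:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)

        # Placeholder features: trials x (channel/time-window amplitudes).
        # Label 1 = neutral face conditioned to a negative emotion, 0 = control.
        X = rng.normal(size=(120, 64))
        y = rng.integers(0, 2, size=120)
        X[y == 1, :8] += 0.8  # inject a weak class difference for illustration

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")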

  11. Audiovisual Script Writing.

    Science.gov (United States)

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  12. Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and Canadian listeners.

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, for which high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions, and their ratings were compared to those obtained from Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, angry, disgusted, and fearful vocalizations were rated as significantly less intense and less negative by Japanese listeners. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions.

  13. Cross-Cultural Differences in the Processing of Non-Verbal Affective Vocalizations by Japanese and Canadian Listeners

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, for which high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions, and their ratings were compared to those obtained from Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, angry, disgusted, and fearful vocalizations were rated as significantly less intense and less negative by Japanese listeners. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions. PMID:23516137

  14. "You can also save a life!": children's drawings as a non-verbal assessment of the impact of cardiopulmonary resuscitation training.

    Science.gov (United States)

    Petriş, Antoniu Octavian; Tatu-Chiţoiu, Gabriel; Cimpoeşu, Diana; Ionescu, Daniela Florentina; Pop, Călin; Oprea, Nadia; Ţînţ, Diana

    2017-04-01

    Drawings made by children during cardiopulmonary resuscitation (CPR) training in the special education week called "School otherwise" can be used as a non-verbal means of expression and communication to assess the impact of such training. We analyzed the questionnaires and drawings completed by 327 schoolchildren at different stages of education. After a brief overview of the basic life support (BLS) steps and after watching a video presenting the dynamic performance of the BLS sequence, subjects were asked to complete a questionnaire and make a drawing expressing the main CPR messages. Questionnaires were completed fully in 97.6% of cases and drawings were made in 90.2% of cases. Half of the subjects had already witnessed some kind of medical emergency, and 96.94% knew the correct "112" emergency phone number. The drawings were mostly single images (83.81%) rather than cartoon strips (16.18%). The main themes of the slogans were "Save a life!", "Help!", "Call 112!", and "Do not be indifferent/insensible/apathetic!". Through the interpretation of drawings, CPR trainers can use art as a way to build a better relationship with schoolchildren, to connect to their thoughts and feelings, and to obtain the highest-quality education.

  15. Do children with autism have a theory of mind? A non-verbal test of autism vs. specific language impairment.

    Science.gov (United States)

    Colle, Livia; Baron-Cohen, Simon; Hill, Jacqueline

    2007-04-01

    Children with autism show delays in the development of theory of mind. However, the sub-group of children with autism who have little or no language has gone untested, since false belief (FB) tests typically involve language. FB understanding has been reported to be intact in children with specific language impairment (SLI). This raises the possibility that a non-verbal FB test would distinguish children with autism from children with SLI. The present study tested two predictions: (1) FB understanding is to some extent independent of language ability; and (2) children with autism with low language levels show a specific impairment in theory of mind. Results confirmed both predictions and are discussed in terms of the role of language in the development of mindreading.

  16. Deficits in visual short-term memory binding in children at risk of non-verbal learning disabilities.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Pancera, Arianna; Galera, Cesar; Cornoldi, Cesare

    2015-01-01

    It has been hypothesized that children with learning disabilities have short-term memory (STM) problems especially when they must bind different types of information; however, this hypothesis has not been systematically tested. This study assessed visual STM for shapes and colors, and for the binding of shapes and colors, comparing a group of children (aged between 8 and 10 years) at risk of non-verbal learning disabilities (NLD) with a control group of children matched for general verbal abilities, age, gender, and socioeconomic level. Results revealed that the groups did not differ in retention of either shapes or colors, but children at risk of NLD were poorer than controls in memory for shape-color bindings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Mood As Cumulative Expectation Mismatch: A Test of Theory Based on Data from Non-verbal Cognitive Bias Tests

    Directory of Open Access Journals (Sweden)

    Camille M. C. Raoult

    2017-12-01

    Full Text Available Affective states are known to influence behavior and cognitive processes. To assess mood (moderately long-term affective states), the cognitive judgment bias test was developed and has been widely used in various animal species. However, little is known about how mood changes, how mood can be experimentally manipulated, and how mood then feeds back into cognitive judgment. A recent theory argues that mood reflects the cumulative impact of differences between obtained outcomes and expectations. Here, expectations refer to an established context; situations in which an established context fails to match an outcome are then perceived as mismatches of expectation and outcome. We take advantage of the large number of studies published on non-verbal cognitive bias tests in recent years (95 studies with a total of 162 independent tests) to test whether cumulative mismatch could indeed have led to the observed mood changes. Based on a criteria list, we assessed whether mismatch had occurred in the experimental procedure used to induce mood (mood induction mismatch) or in the context of the non-verbal cognitive bias procedure (testing mismatch). For the mood induction mismatch, we scored the mismatch between the subjects' potential expectations and the manipulations conducted to induce mood, whereas for the testing mismatch, we scored mismatches that may have occurred during the actual testing. We then investigated whether these two types of mismatch can predict the actual outcome of the cognitive bias study. The present evaluation shows that mood induction mismatch cannot well predict the success of a cognitive bias test. On the other hand, testing mismatch can modulate or even invert the expected outcome. We think cognitive bias studies should more specifically aim at creating expectation mismatch while inducing mood states, to test the cumulative mismatch theory more properly. Furthermore, testing mismatch should be avoided as much as possible.

  18. Venezuela: A New Audiovisual Experience

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    Full Text Available In 1986, the Universidad Simón Bolívar (USB) created the Fundación para el Desarrollo del Arte Audiovisual, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design, and photography of high artistic and technical quality, all without neglecting the theoretical and academic aspects of these disciplines.

  19. Hysteresis in audiovisual synchrony perception.

    Directory of Open Access Journals (Sweden)

    Jean-Rémy Martin

    Full Text Available The effect of stimulation history on the perception of a current event can yield two opposite effects, namely adaptation or hysteresis: the perception of the current event goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested whether perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic conditions, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective conditions, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception and have strong implications for the comparative study of hysteresis and adaptation phenomena.

  20. A comparison of processing load during non-verbal decision-making in two individuals with aphasia

    Directory of Open Access Journals (Sweden)

    Salima Suleman

    2015-05-01

    Full Text Available INTRODUCTION: A growing body of evidence suggests that people with aphasia (PWA) can have impairments of cognitive functions such as attention, working memory, and executive functions (1-5). Such cognitive impairments have been shown to negatively affect the decision-making (DM) abilities of adults with neurological damage (6,7). However, little is known about the DM abilities of PWA (8). Pupillometry is "the measurement of changes in pupil diameter" (9, p.1). Researchers have reported a positive relationship between processing load and phasic pupil size (i.e., as processing load increases, pupil size increases) (10). Thus pupillometry has the potential to be a useful tool for investigating processing load during DM in PWA. AIMS: The primary aim of this study was to establish the feasibility of using pupillometry during a non-verbal DM task with PWA. The secondary aim was to explore non-verbal DM performance in PWA and determine the relationship between DM performance and processing load using pupillometry. METHOD: DESIGN: A single-subject case-study design with two participants was used. PARTICIPANTS: Two adult males with anomic aphasia, matched for age and education, participated in this study. Both participants were independent, able to drive, and had legal autonomy. MEASURES: Performance on a DM task: we used a computerized risk-taking card game, the Iowa Gambling Task (IGT), as our non-verbal DM task (11). In the IGT, participants made 100 selections (via eye gaze) from four decks of cards presented on the computer screen, with the goal of maximizing their overall hypothetical monetary gain. Processing load: the EyeLink 1000+ eye-tracking system was used to collect pupil size measures while participants deliberated before each deck selection during the IGT. For this analysis, we calculated change in pupil size as a measure of processing load. RESULTS: P1 made increasingly advantageous decisions as the task progressed (Fig. 1).
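
    The change-in-pupil-size measure described under MEASURES is commonly computed as a baseline-corrected mean over the deliberation period. A minimal sketch (the sampling rate, baseline window, and blink handling are assumptions, not the study's reported settings):

        import numpy as np

        def phasic_pupil_change(trace, fs, baseline_s=0.5):
            """Mean pupil size during deliberation minus a pre-trial baseline.

            trace: pupil diameter samples for one deliberation period,
                   preceded by `baseline_s` seconds of baseline.
            """
            n_base = int(baseline_s * fs)
            baseline = np.nanmean(trace[:n_base])      # nan-safe: blinks as NaN
            deliberation = np.nanmean(trace[n_base:])
            return deliberation - baseline

        fs = 500  # an assumed sampling rate; the EyeLink 1000+ supports several
        rng = np.random.default_rng(4)
        trace = np.concatenate([rng.normal(3.0, 0.05, 250),    # baseline (mm)
                                rng.normal(3.2, 0.05, 1000)])  # deliberation
        print(f"pupil change: {phasic_pupil_change(trace, fs):.3f} mm")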

  1. Adverse Life Events and Emotional and Behavioral Problems in Adolescence: The Role of Non-Verbal Cognitive Ability and Negative Cognitive Errors

    Science.gov (United States)

    Flouri, Eirini; Panourgia, Constantina

    2011-01-01

    The aim of this study was to test whether negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the moderator effect of non-verbal cognitive ability on the association between adverse life events (life stress) and emotional and behavioral problems in adolescence. The sample consisted of 430…

  2. Referential Interactions of Turkish-Learning Children with Their Caregivers about Non-Absent Objects: Integration of Non-Verbal Devices and Prior Discourse

    Science.gov (United States)

    Ates, Beyza S.; Küntay, Aylin C.

    2018-01-01

    This paper examines the way children younger than two use non-verbal devices (i.e., deictic gestures and communicative functional acts) and pay attention to discourse status (i.e., prior mention vs. newness) of referents in interactions with caregivers. Data based on semi-naturalistic interactions with caregivers of four children, at ages 1;00,…

  3. Individual Differences in Verbal and Non-Verbal Affective Responses to Smells: Influence of Odor Label Across Cultures.

    Science.gov (United States)

    Ferdenzi, Camille; Joussain, Pauline; Digard, Bérengère; Luneau, Lucie; Djordjevic, Jelena; Bensafi, Moustafa

    2017-01-01

    Olfactory perception is highly variable from one person to another, as a function of individual and contextual factors. Here, we investigated the influence of 2 important factors of variation: culture and semantic information. More specifically, we tested whether cultural-specific knowledge and presence versus absence of odor names modulate odor perception, by measuring these effects in 2 populations differing in cultural background but not in language. Participants from France and Quebec, Canada, smelled 4 culture-specific and 2 non-specific odorants in 2 conditions: first without label, then with label. Their ratings of pleasantness, familiarity, edibility, and intensity were collected as well as their psychophysiological and olfactomotor responses. The results revealed significant effects of culture and semantic information, both at the verbal and non-verbal level. They also provided evidence that availability of semantic information reduced cultural differences. Semantic information had a unifying action on olfactory perception that overrode the influence of cultural background. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

    Full Text Available In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30), and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  5. Contrasting visual working memory for verbal and non-verbal material with multivariate analysis of fMRI

    Science.gov (United States)

    Habeck, Christian; Rakitin, Brian; Steffener, Jason; Stern, Yaakov

    2012-01-01

    We performed a delayed-item-recognition task to investigate the neural substrates of non-verbal visual working memory with event-related fMRI ('Shape task'). 25 young subjects (mean age: 24.0 years; SD = 3.8 years) were instructed to study a list of either 1, 2 or 3 unnamable nonsense line drawings for 3 seconds ('stimulus phase' or STIM). Subsequently, the screen went blank for 7 seconds ('retention phase' or RET) and then displayed a probe stimulus for 3 seconds, during which subjects indicated with a differential button press whether the probe was contained in the studied shape array or not ('probe phase' or PROBE). Ordinal Trend Canonical Variates Analysis (Habeck et al., 2005a) was performed to identify spatial covariance patterns that showed a monotonic increase in expression with memory load during all task phases. Reliable load-related patterns were identified in the stimulus and retention phases, comprising regions whose activity increased with memory load as well as mediofrontal and temporal regions whose activity decreased. Mean subject expression of both patterns across memory load during retention also correlated positively with recognition accuracy (dL) in the Shape task, pointing to a role for these patterns in rehearsal processes. Encoding processes, on the other hand, are critically dependent on the to-be-remembered material and seem to necessitate material-specific neural substrates. PMID:22652306

  6. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

    In the auditory modality, there has been considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  7. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    Science.gov (United States)

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, the performance of adults with dyslexia on non-verbal design fluency tasks has not been explored, but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic, and design fluency tasks were presented to adults with and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Selection of words for implementation of the Picture Exchange Communication System - PECS in non-verbal autistic children.

    Science.gov (United States)

    Ferreira, Carine; Bevilacqua, Monica; Ishihara, Mariana; Fiori, Aline; Armonia, Aline; Perissinoto, Jacy; Tamanaha, Ana Carina

    2017-03-09

    It is known that some autistic individuals are considered non-verbal, since they are unable to use verbal language and barely use gestures to compensate for the absence of speech. These individuals' ability to communicate may therefore benefit from the use of the Picture Exchange Communication System - PECS. The objective of this study was to identify the words most frequently used in the implementation of PECS in autistic children and, on a complementary basis, to analyze the correlation between the frequency of these words and the rate of maladaptive behaviors. This is a cross-sectional study. The sample was composed of 31 autistic children, 25 boys and 6 girls, aged 5 to 10 years. To identify the words most frequently used in the initial period of implementation of PECS, the Vocabulary Selection Worksheet was used, and to measure the rate of maladaptive behaviors, we applied the Autism Behavior Checklist (ABC). There was a significant prevalence of items in the category "food", followed by "activities" and "beverages". There was no correlation between the total number of items identified by the families and the rate of maladaptive behaviors. The categories of words most mentioned by the families could be identified, and it was confirmed that the level of maladaptive behaviors did not directly interfere with the preparation of the vocabulary selection worksheet for the children studied.

  9. The influence of non-verbal educational and therapeutic practices in autism spectrum disorder: the possibilities for physical education professionals

    Directory of Open Access Journals (Sweden)

    Adryelle Fabiane Campelo de Lima

    2017-09-01

    Full Text Available Individuals with autism spectrum disorder (ASD) have symptoms that begin in childhood and affect their ability to function in everyday life. Several types of practices exist to reduce and control the symptoms of ASD. This study therefore aims to analyze the contributions of the main pedagogical and therapeutic practices of non-verbal communication to the motivation, emotional stability, communication, and socialization of individuals with autism spectrum disorders, which may support the interventions of physical education professionals. The study was a systematic review conducted in electronic databases. Initially, 390 documents were identified. After reading and analyzing the titles, 109 documents were selected; after reading the abstracts, 53 were considered eligible; and finally 18 that fully satisfied the inclusion criteria were included. The results showed that the intervention programs are diverse and that the majority involve music therapy. This systematic review also showed that direct intervention by physical education professionals occurs only in psychomotricity.

  10. Plantilla 1: The audiovisual document: important elements

    OpenAIRE

    Alemany, Dolores

    2011-01-01

    The concept of the audiovisual document and of audiovisual documentation, with an in-depth treatment of the distinction between moving-image documentation (with the possible incorporation of sound) and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works, and audiovisual heritage according to Félix del Valle.

  11. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain, where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
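
    The first and third statistics reported here, a correlation between mouth-opening area and the acoustic envelope, with both streams modulated at 2-7 Hz, can be approximated as follows. The Hilbert envelope, the filter order, and the common 100 Hz feature rate are assumptions for illustration:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 100  # assumed common sampling rate for both feature streams (Hz)
        rng = np.random.default_rng(5)
        t = np.arange(0, 10, 1 / fs)

        # Synthetic stand-ins: a 4 Hz articulatory rhythm drives both the
        # mouth-opening area and the amplitude of a noise 'speech' carrier.
        rhythm = np.sin(2 * np.pi * 4 * t)
        mouth_area = rhythm + 0.3 * rng.normal(size=t.size)
        audio = (1.2 + rhythm) * rng.normal(size=t.size)

        # Amplitude envelope of the audio via the Hilbert transform.
        envelope = np.abs(hilbert(audio))

        # Band-pass both streams to 2-7 Hz before correlating.
        b, a = butter(4, [2, 7], btype="bandpass", fs=fs)
        mouth_bp = filtfilt(b, a, mouth_area)
        env_bp = filtfilt(b, a, envelope)

        r = np.corrcoef(mouth_bp, env_bp)[0, 1]
        print(f"mouth-area / envelope correlation (2-7 Hz): {r:.2f}")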

  12. What a Smile Means: Contextual Beliefs and Facial Emotion Expressions in a Non-verbal Zero-Sum Game.

    Science.gov (United States)

    Pádua Júnior, Fábio P; Prado, Paulo H M; Roeder, Scott S; Andrade, Eduardo B

    2016-01-01

    Research into the authenticity of facial emotion expressions often focuses on the physical properties of the face while paying little attention to the role of beliefs in emotion perception. Further, the literature most often investigates how people express a pre-determined emotion rather than what facial emotion expressions people strategically choose to express. To fill these gaps, this paper proposes a non-verbal zero-sum game - the Face X Game - to assess the role of contextual beliefs and strategic displays of facial emotion expression in interpersonal interactions. This new research paradigm was used in a series of three studies, in which two participants were asked to play the role of the sender (the individual expressing emotional information on his/her face) or the observer (the individual interpreting the meaning of that expression). Study 1 examines the outcome of the game with reference to the sex of the pair, where senders won more frequently when the pair comprised at least one female. Study 2 examines the strategic display of facial emotion expressions. The outcome of the game was again contingent upon the sex of the pair: among female pairs, senders won the game more frequently, replicating the pattern of results from Study 1. We also demonstrate that senders who strategically express an emotion incongruent with the valence of the event (e.g., smile after seeing a negative event) are able to mislead observers, who tend to hold a congruent belief about the meaning of the emotion expression. If sending an incongruent signal helps to explain why female senders win more frequently, it logically follows that female observers were more prone to hold a congruent, and therefore inaccurate, belief. This implies that while female senders are willing and/or capable of displaying fake smiles, paired female observers are not taking this into account. Study 3 investigates the role of contextual factors by manipulating female observers' beliefs. When prompted…

  13. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signals used in this test combine images compressed with the JPEG codec and sound samples compressed with MPEG-1 Layer III. The images and sounds have various contents. This simulates the real situation in which a subject listens to compressed music and watches compressed pictures without access to the original, i.e., uncompressed, signals.
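
    A simple rating method of this kind is typically aggregated into a mean opinion score (MOS) per stimulus with a confidence interval. A minimal sketch of that aggregation (the 1-5 scale and the ratings are made up):

        import numpy as np

        def mean_opinion_score(ratings):
            """MOS and a normal-approximation 95% confidence half-width."""
            r = np.asarray(ratings, dtype=float)
            mos = r.mean()
            half_width = 1.96 * r.std(ddof=1) / np.sqrt(r.size)
            return mos, half_width

        # Hypothetical 1-5 ratings from 15 subjects for one audiovisual stimulus.
        ratings = [4, 3, 4, 5, 3, 4, 4, 2, 3, 4, 5, 4, 3, 4, 4]
        mos, ci = mean_opinion_score(ratings)
        print(f"MOS = {mos:.2f} +/- {ci:.2f}")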

  14. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the…

  15. Bi-directional effects of depressed mood in the postnatal period on mother-infant non-verbal engagement with picture books.

    Science.gov (United States)

    Reissland, Nadja; Burt, Mike

    2010-12-01

    The purpose of the present study is to examine the bi-directional effects of maternal depressed mood in the postnatal period on maternal and infant non-verbal behaviors while looking at a picture book. Although it is acknowledged that non-verbal engagement with picture books in infancy plays an important role, the effect of maternal depressed mood on stimulating infants' interest in books is not known. Sixty-one mothers and their infants, 38 boys and 23 girls, were observed twice, approximately 3 months apart (first observation: mean age 6.8 months, range 3-11 months, 32 mothers with depressed mood; second observation: mean age 10.2 months, range 6-16 months, 17 mothers with depressed mood). There was a significant effect of depressed mood on negative behaviors: infants of mothers with depressed mood tended to push away and close books more often. Negative behaviors (pushing the book away or closing it on the part of the infant, and withholding the book and restraining the infant on the part of the mother) that were expressed during the first visit were more likely to be expressed during the second visit, and levels of negative behaviors by mother and infant were strongly related during each visit. Additionally, the pattern between visits suggests that maternal negative behavior may be the cause of infant negative behavior. These results are discussed in terms of the effects of maternal depressed mood on the bi-directional nature of non-verbal engagement between mother and child. Crown Copyright © 2010. Published by Elsevier Inc. All rights reserved.

  16. Exploring Children’s Peer Relationships through Verbal and Non-verbal Communication: A Qualitative Action Research Focused on Waldorf Pedagogy

    Directory of Open Access Journals (Sweden)

    Aida Milena Montenegro Mantilla

    2007-12-01

    Full Text Available This study analyzes the relationships that children around seven and eight years old establish in a classroom. It shows that peer relationships have a positive dimension with features such as the development of children’s creativity to communicate and modify norms. These features were found through an analysis of children’s verbal and non-verbal communication and an interdisciplinary view of children’s learning process from Rudolf Steiner, founder of Waldorf Pedagogy, and Jean Piaget and Lev Vygotsky, specialists in children’s cognitive and social dimensions. This research is an invitation to recognize children’s capacity to construct their own rules in peer relationships.

  17. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    Science.gov (United States)

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gredebäck, Gustaf

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  18. Copyright for audiovisual work and analysis of websites offering audiovisual works

    OpenAIRE

    Chrastecká, Nicolle

    2014-01-01

    This Bachelor's thesis deals with the matter of audiovisual piracy. It discusses the proposition that audiovisual piracy is caused not by wrong interpretation of the law but by a lack of competitiveness among websites with legal audiovisual content. The thesis questions the quality and sufficiency of legal interpretation in the matter of audiovisual piracy. It analyses the responsibility of website providers, providers of the illegal content, the responsibility of illegal cont...

  19. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.
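
    The speech-to-person association problem at the core of this model can be caricatured by assigning each localized speech source to the nearest visual track in the image plane. The sketch below deliberately replaces the paper's Bayesian fusion and binaural mapping with a nearest-neighbour rule, and all coordinates are invented:

        import numpy as np

        def associate(speech_xy, track_xy):
            """Assign each speech-source location to the nearest person track.

            speech_xy: (n_sources, 2) image-plane positions of localized speech.
            track_xy:  (n_people, 2) positions of visually tracked persons.
            Returns the track index for each speech source.
            """
            d = np.linalg.norm(speech_xy[:, None, :] - track_xy[None, :, :], axis=2)
            return d.argmin(axis=1)

        # One time slice: two active speech sources, three tracked people.
        speech_xy = np.array([[120.0, 80.0], [410.0, 95.0]])
        track_xy = np.array([[100.0, 90.0], [250.0, 85.0], [400.0, 100.0]])
        print(associate(speech_xy, track_xy))  # -> [0 2]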

  20. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  1. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...
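
    As a rough illustration of the coding step in such a framework, the sketch below runs a minimal 1-D matching pursuit that approximates a signal as a sparse sum of temporally shifted, unit-norm kernels. It is illustrative only: the paper's kernels are bimodal (audio plus video) and are themselves learned, which is not reproduced here.

    ```python
    import numpy as np

    def matching_pursuit(signal, kernels, n_atoms=5):
        """Greedy sparse coding with shift-invariant, unit-norm 1-D kernels."""
        residual = signal.astype(float).copy()
        atoms = []
        for _ in range(n_atoms):
            best = None
            for k, ker in enumerate(kernels):
                corr = np.correlate(residual, ker, mode='valid')  # all shifts
                t = int(np.abs(corr).argmax())
                if best is None or abs(corr[t]) > abs(best[2]):
                    best = (k, t, corr[t])
            k, t, coeff = best
            residual[t:t + len(kernels[k])] -= coeff * kernels[k]  # subtract atom
            atoms.append((k, t, coeff))
        return atoms, residual

    # Toy usage: recover two shifted occurrences of a Hanning-window kernel.
    ker = np.hanning(32); ker /= np.linalg.norm(ker)
    sig = np.zeros(256)
    sig[40:72] += 3.0 * ker
    sig[150:182] -= 2.0 * ker
    atoms, res = matching_pursuit(sig, [ker], n_atoms=2)
    print(atoms)  # approximately [(0, 40, 3.0), (0, 150, -2.0)]
    ```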

  2. Net neutrality and audiovisual services

    OpenAIRE

    van Eijk, N.; Nikoltchev, S.

    2011-01-01

    Net neutrality is high on the European agenda. New regulations for the communication sector provide a legal framework for net neutrality and need to be implemented on both a European and a national level. The key element is not just about blocking or slowing down traffic across communication networks: the control over the distribution of audiovisual services constitutes a vital part of the problem. In this contribution, the phenomenon of net neutrality is described first. Next, the European a...

  3. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider the quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content where both video and audio channels may be strongly degraded, and audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.

  4. Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study.

    Directory of Open Access Journals (Sweden)

    Catherine Y Wan

    Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.

  5. Language representation of the emotional state of the personage in non-verbal speech behavior (on the material of Russian and German languages)

    Directory of Open Access Journals (Sweden)

    Scherbakova Irina Vladimirovna

    2016-06-01

    The article examines how emotions are actualized in the non-verbal speech behavior of a character in a literary text. Emotions are considered the basic and most actively used mode of a literary character's reaction to an object, action, or communicative situation. Non-verbal ways of expressing emotions give the reader a fuller picture of the character's emotional state. The analysis of non-verbal means of communication in fiction focuses on the description of kinetic, proxemic and prosodic components. The material of the study consists of microdialogue fragments extracted by continuous sampling from literary texts of Russian-language and German-language classical and modern literature of the 19th-20th centuries. Dialogue fragments were analyzed in which the character's non-verbal behavior with different emotional content (surprise, joy, fear, anger, rage, excitement, etc.) was recorded. It was found that the means of verbalizing and describing the emotions of a character's non-verbal behavior are primarily indirect nominations, expressed by verbal vocabulary, adjectives and adverbs. The lexical level is the most significant in presenting the emotional state of the character.

  6. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating the information from audio and video leads to improved reliability of the audiovisual approach.

  7. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus.

  8. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively...

  9. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

    This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, difficulties imposed by the verbal foundation of knowledge, their reduced use within libraries, and ways towards a more consistent analysis of audiovisual means, supported by the semiotics of Charles Peirce.

  10. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Michel Foucault teaches that all systematic discourse - including discourse that claims to be "neutral" or "a disinterested, objective view of what happens" - is, in fact, a mechanism for articulating knowledge and, subsequently, for forming power. The emergence of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of statements by filmmakers, essays by academics, and predictions by media demiurges.

  11. Habilidades de praxia verbal e não-verbal em indivíduos gagos Verbal and non-verbal praxic abilities in stutterers

    Directory of Open Access Journals (Sweden)

    Natália Casagrande Brabo

    2009-12-01

    PURPOSE: to characterize verbal and non-verbal praxic abilities in adult stutterers. METHODS: 40 individuals aged 18 or older, male and female, took part in the study: 20 stuttering adults and 20 without communication complaints. To assess verbal and non-verbal praxis, participants were administered the Protocol for the Evaluation of Verbal and Non-verbal Apraxia (Martins and Ortiz, 2004). RESULTS: for verbal praxic abilities, there was a statistically significant difference in the number of typical and atypical disfluencies presented by the groups studied. Regarding the typology of disfluencies, among typical disfluencies a statistically significant difference between groups was found only for phrase repetition; among atypical disfluencies, statistically significant differences were found for blocks, syllable repetition and prolongation. For non-verbal praxic abilities, no statistically significant differences were observed between groups in the execution of lip, tongue and jaw movements, either isolated or in sequence. CONCLUSION: regarding verbal praxic abilities, stutterers showed a higher frequency of speech ruptures, both typical and atypical disfluencies, when compared with the control group. In the execution of isolated and sequenced praxic movements, that is, in non-verbal praxic abilities, stutterers did not differ from fluent individuals, which does not confirm the hypothesis that the early onset of stuttering could compromise non-verbal praxic abilities.

  12. Visuospatial working memory for locations, colours, and binding in typically developing children and in children with dyslexia and non-verbal learning disability.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Tripodi, Doriana; Cornoldi, Cesare

    2014-03-01

    This study examined forward and backward recall of locations and colours and the binding of locations and colours, comparing typically developing children - aged between 8 and 10 years - with two different groups of children of the same age with learning disabilities (dyslexia in one group, non-verbal learning disability [NLD] in the other). Results showed that groups with learning disabilities had different visuospatial working memory problems and that children with NLD had particular difficulties in the backward recall of locations. The differences between the groups disappeared, however, when locations and colours were bound together. It was concluded that specific processes may be involved in children in the binding and backward recall of different types of information, as they are not simply the resultant of combining the single processes needed to recall single features. © 2013 The British Psychological Society.

  13. Non-verbal communication between Registered Nurses Intellectual Disability and people with an intellectual disability: an exploratory study of the nurse's experiences. Part 1.

    Science.gov (United States)

    Martin, Anne-Marie; Connor-Fenelon, Maureen O'; Lyons, Rosemary

    2012-03-01

    This is the first of two articles presenting the findings of a qualitative study which explored the experiences of Registered Nurses Intellectual Disability (RNIDs) of communicating with people with an intellectual disability who communicate non-verbally. The article reports and critically discusses the findings in the context of the policy and service delivery discourses of person-centredness, inclusion, choice and independence. Arguably, RNIDs are the profession who most frequently encounter people with an intellectual disability and communication impairment. The results suggest that the communication studied is both complicated and multifaceted. An overarching category of 'familiarity/knowing the person' encompasses discrete but related themes and subthemes that explain the process: the RNID knowing the service-user; the RNID/service-user relationship; and the value of experience. People with an intellectual disability, their families and disability services are facing a time of great change, and RNIDs will have a crucial role in supporting this transition.

  14. WHAT’S THE “SECRET” OF THE GESTURE LANGUAGE? A FEW CRITICAL REFLECTIONS ON THE PSEUDO-SCIENCES DEALING WITH THE “NON-VERBAL DECODING”

    Directory of Open Access Journals (Sweden)

    PASCAL LARDELLIER

    2015-05-01

    In this article we deal with a situation commonly encountered in contemporary society: the representatives of pseudo-sciences invite their readers to learn "to decode the non-verbal language". They claim that our body is thereby "readable" and that knowing these "theories" would be enough to see into our interlocutors and discover their thoughts and emotions. We are evidently faced with a discourse that imitates the rhetorical codes of science but has nothing to do with science. Moreover, these pseudo-sciences have never been presented or discussed within the academic sphere.

  15. Seizure-related factors and non-verbal intelligence in children with epilepsy. A population-based study from Western Norway.

    Science.gov (United States)

    Høie, B; Mykletun, A; Sommerfelt, K; Bjørnaes, H; Skeidsvoll, H; Waaler, P E

    2005-06-01

    To study the relationship between seizure-related factors, non-verbal intelligence, and socio-economic status (SES) in a population-based sample of children with epilepsy. The latest ILAE international classifications of epileptic seizures and syndromes were used to classify seizure types and epileptic syndromes in all 6-12 year old children (N=198) with epilepsy in Hordaland County, Norway. The children had neuropediatric and EEG examinations. Of the 198 patients, demographic characteristics were collected on 183 who participated in psychological studies including Raven matrices; 126 healthy controls underwent the same testing. Severe non-verbal problems (SNVP) were defined as a Raven score at or below a low percentile cut-off; children with epilepsy were over-represented in the lowest Raven percentile group, whereas controls were highly over-represented in the higher percentile groups. SNVP were present in 43% of children with epilepsy and 3% of controls. These problems were especially common in children with remote symptomatic epilepsy aetiology, undetermined epilepsy syndromes, myoclonic seizures, early seizure debut, high seizure frequency and in children with polytherapy. Seizure-related characteristics that were not usually associated with SNVP were idiopathic epilepsies, localization-related (LR) cryptogenic epilepsies, absence and simple partial seizures, and a late debut of epilepsy. Adjusting for socio-economic status factors did not significantly change results. In childhood epilepsy various seizure-related factors, but not SES factors, were associated with the presence or absence of SNVP. Such deficits may be especially common in children with remote symptomatic epilepsy aetiology and in complex and therapy-resistant epilepsies. Low frequencies of SNVP may be found in children with idiopathic and LR cryptogenic epilepsy syndromes, simple partial or absence seizures and a late epilepsy debut. Our study contributes to an overall picture of cognitive function and its relation to central seizure characteristics in a childhood epilepsy population.

  16. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    § 2.13 Audiovisual coverage prohibited (29 CFR, Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings). The Department shall not permit audiovisual coverage of the...

  17. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability of collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt from a larger investigation whose units of analysis are Online Creation Communities whose central node of activity is the Catalan territory. Starting from 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis through face-to-face interviews conducted in 8 of the cases studied. The research, whose conclusions we summarize in this article, leads us to conclude that the sustainability of a project depends largely on relationships of trust and interdependence between different voluntary agents, on non-monetary contributions and retributions, and on freely usable resources and infrastructure. All together, this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern it.

  18. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols. Speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice, supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are obtained through synchronization when speech and video are fused together. The experiment results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend of emotion recognition in the future.
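
    The feature-level fusion described - concatenating selected speech and facial features into one vector per sample before classification - can be sketched as follows. The shapes, names and the nearest-centroid classifier are assumptions for illustration, not the system's actual recognizer:

    ```python
    import numpy as np

    def fuse(speech_feats, face_feats):
        """Concatenate per-sample speech and facial feature vectors."""
        # The paper selects 13 speech and 10 facial features (52 audiovisual
        # features after synchronization); here we simply hstack.
        return np.hstack([speech_feats, face_feats])

    def fit_centroids(X, y):
        """One mean vector per emotion class (toy classifier)."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(X, centroids):
        labels = list(centroids)
        dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
        return np.array(labels)[dists.argmin(axis=1)]

    # Toy usage with random samples for two hypothetical emotion classes.
    rng = np.random.default_rng(0)
    X = fuse(rng.normal(size=(20, 13)), rng.normal(size=(20, 10)))
    y = np.repeat([0, 1], 10)
    print(predict(X[:3], fit_centroids(X, y)))
    ```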

  19. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  20. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral 'what' and dorsal 'where' pathways.

  1. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. Also, it is argued that insights from quantitative experimental studies and qualitative audiovisual film analysis may actually be combined into a more complex understanding of how audiovisual features interact in the minds of their audiences. This is demonstrated through a review of a series of experimental studies. Yet, it is also argued that textual analysis and concepts from within film and music studies can provide insights...

  2. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users that are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech, and (iii) non-altered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  3. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to well-known approaches to increase coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.
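
    The core idea - correlating the dynamics of the audio track with local visual motion to locate audio-correlated regions - can be sketched as below. The block grid, the frame-difference motion measure and all names are illustrative assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    def foa_map(frames, audio_env, grid=(8, 8)):
        """Correlate per-block motion energy with the audio envelope.

        frames: (T, H, W) grayscale video; H and W divisible by the grid size.
        audio_env: (T,) per-frame audio energy.
        Returns a grid-shaped map; high values suggest audio-correlated regions.
        """
        T, H, W = frames.shape
        gh, gw = grid
        motion = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) frame differences
        blocks = motion.reshape(T - 1, gh, H // gh, gw, W // gw).mean(axis=(2, 4))
        env = audio_env[1:] - audio_env[1:].mean()
        corr = np.zeros((gh, gw))
        for i in range(gh):
            for j in range(gw):
                m = blocks[:, i, j] - blocks[:, i, j].mean()
                denom = np.linalg.norm(m) * np.linalg.norm(env)
                corr[i, j] = (m @ env) / denom if denom > 0 else 0.0
        return corr

    # Toy usage: random frames and envelope just to show the shapes involved.
    rng = np.random.default_rng(1)
    print(foa_map(rng.random((60, 64, 64)), rng.random(60)).shape)  # (8, 8)
    ```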

  4. Exploring the Domain Specificity of Creativity in Children: The Relationship between a Non-Verbal Creative Production Test and Creative Problem-Solving Activities

    Directory of Open Access Journals (Sweden)

    Ahmed Mohamed

    2012-12-01

    In this study, we explored whether creativity was domain specific or domain general. The relationships between students' scores on three creative problem-solving activities (math, spatial artistic, and oral linguistic) in the DISCOVER assessment (Discovering Intellectual Strengths and Capabilities While Observing Varied Ethnic Responses) and the TCT-DP (Test of Creative Thinking-Drawing Production), a non-verbal general measure of creativity, were examined. The participants were 135 first and second graders from two schools in the Southwestern United States from linguistically and culturally diverse backgrounds. Pearson correlations, canonical correlations, and multiple regression analyses were calculated to describe the relationship between the TCT-DP and the three DISCOVER creative problem-solving activities. We found that creativity has both domain-specific and domain-general aspects, but that the domain-specific component seemed more prominent. One implication of these results is that educators should consider assessing creativity in specific domains to place students in special programs for gifted students rather than relying only on domain-general measures of divergent thinking or creativity.

  5. Heart rate variability during acute psychosocial stress: A randomized cross-over trial of verbal and non-verbal laboratory stressors.

    Science.gov (United States)

    Brugnera, Agostino; Zarbo, Cristina; Tarvainen, Mika P; Marchettini, Paolo; Adorni, Roberta; Compare, Angelo

    2018-05-01

    Acute psychosocial stress is typically investigated in laboratory settings using protocols with distinctive characteristics. For example, some tasks involve the act of speaking, which seems to alter Heart Rate Variability (HRV) through acute changes in respiration patterns. However, it is still unknown which task induces the strongest subjective and autonomic stress response. The present randomized cross-over trial investigated the differences in perceived stress and in linear and non-linear analyses of HRV between three different verbal (Speech and Stroop) and non-verbal (Montreal Imaging Stress Task; MIST) stress tasks, in a sample of 60 healthy adults (51.7% females; mean age = 25.6 ± 3.83 years). Analyses were run controlling for respiration rates. Participants reported similar levels of perceived stress across the three tasks. However, the MIST induced a stronger cardiovascular response than the Speech and Stroop tasks, even after controlling for respiration rates. Finally, women reported higher levels of perceived stress and showed lower HRV both at rest and in response to acute psychosocial stressors, compared to men. Taken together, our results suggest the presence of gender-related differences in psychophysiological experiments on stress. They also suggest that verbal activity masked vagal withdrawal through the altered respiration patterns imposed by speaking. Therefore, our findings support the use of a highly standardized math task, such as the MIST, as a valid and reliable alternative to verbal protocols in laboratory studies on stress. Copyright © 2018 Elsevier B.V. All rights reserved.
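
    For readers unfamiliar with the HRV measures involved, two standard time-domain statistics can be computed from an RR-interval series in a few lines. This is a generic sketch, not the study's analysis pipeline, which also included non-linear measures and respiration control:

    ```python
    import numpy as np

    def sdnn(rr_ms):
        """Standard deviation of RR intervals (overall variability)."""
        return float(np.std(rr_ms, ddof=1))

    def rmssd(rr_ms):
        """Root mean square of successive differences (vagally mediated HRV)."""
        return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

    rr = np.array([812, 790, 805, 830, 815, 798, 820], dtype=float)  # toy series, ms
    print(f"SDNN={sdnn(rr):.1f} ms, RMSSD={rmssd(rr):.1f} ms")
    ```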

  6. A promessa do audiovisual interativo

    Directory of Open Access Journals (Sweden)

    João Baptista Winck

    The audiovisual production chain uses cultural capital, especially creativity, as its main source of resources, inaugurating what has come to be called the creative economy. This value chain manufactures inventiveness as its raw material, transforming ideas into objects of large-scale consumption. The television industry is embedded in a larger conglomerate of industries, such as fashion, the arts, music, and so on. This gigantic technological park brings together activities that take creation as their value, its production at scale as their means, and the growth of intellectual property as an end in itself. The industrialization of creativity is gradually altering the body of theory concerning labor relations, tools and, above all, the concept of goods as products of intelligence.

  7. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    ... investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is a specific trait of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or in multiple processing stages.

  8. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  9. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models, and requirements for extension to support audiovisual files.

  10. A Catalan code of best practices for the audiovisual sector

    OpenAIRE

    Teodoro, Emma; Casanovas, Pompeu

    2010-01-01

    In spite of a new general law regarding Audiovisual Communication, the regulatory framework of the audiovisual sector in Spain can still be described as huge, dispersed and obsolete. The first part of this paper provides an overview of the major challenges facing the Spanish audiovisual sector as a result of the convergence of platforms, services and operators, paying special attention to the audiovisual sector in Catalonia. In the second part, we present an example of self-regulation through...

  11. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    § 2.12 Audiovisual coverage permitted (29 CFR, Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings). The following are the types of hearings where the Department...

  12. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability.

    Science.gov (United States)

    Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P

    2008-08-01

    Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form. The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form.

  13. A influência da comunicação não verbal no cuidado de enfermagem La influencia de la comunicación no verbal en la atención de enfermería The influence of non-verbal communication in nursing care

    Directory of Open Access Journals (Sweden)

    Carla Cristina Viana Santos

    2005-08-01

    This study was developed at the Nursing School Alfredo Pinto (UNIRIO) and began during the development of a monograph. The object of the study is the meaning of non-verbal communication from the perspective of nursing undergraduates. The study has the following objectives: to determine how non-verbal communication is understood by undergraduate nursing students, and to analyze how that understanding influences nursing care. The methodological approach was qualitative, and dynamics of sensitivity were applied as the strategy for data collection. It was observed that undergraduate students recognize the relevance and influence of non-verbal communication in nursing care; however, there is a need to broaden knowledge of the non-verbal communication process before implementing nursing care.

  14. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  15. [Virtual audiovisual talking heads: articulatory data and models--applications].

    Science.gov (United States)

    Badin, P; Elisei, F; Bailly, G; Savariaux, C; Serrurier, A; Tarabalka, Y

    2007-01-01

    In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, the analysis and the modeling of orofacial articulators such as the jaw, the face and the lips, the tongue or the velum. Therefore, we present in this article experimental techniques that allow characterising the shape and movement of speech articulators (static and dynamic MRI, computed tomodensitometry, electromagnetic articulography, video recording). We then describe the linear models of the various organs that we can elaborate from speaker-specific articulatory data. We show that these models, that exhibit a good geometrical resolution, can be controlled from articulatory data with a good temporal resolution and can thus permit the reconstruction of high quality animation of the articulators. These models, that we have integrated in a virtual talking head, can produce augmented audiovisual speech. In this framework, we have assessed the natural tongue reading capabilities of human subjects by means of audiovisual perception tests. We conclude by suggesting a number of other applications of talking heads.

  16. Search in audiovisual broadcast archives : doctoral abstract

    NARCIS (Netherlands)

    Huurnink, B.

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage shot by overseas services for the evening news, or a documentary maker might require...

  17. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  18. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features, and we show that the integration of audio and visual information leads to improved performance.

  19. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  20. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
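
    The analysis hinges on estimating the point of subjective simultaneity (PSS) separately for trials preceded by vision-leading versus audio-leading events; the shift between the two is the rapid recalibration effect. A sketch of that estimation step, with hypothetical data and a Gaussian fit to the proportion of "synchronous" responses (not the authors' code):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(soa, amp, pss, width):
        """Proportion 'synchronous' as a bell curve over audiovisual offset."""
        return amp * np.exp(-0.5 * ((soa - pss) / width) ** 2)

    soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)  # ms, + = audio lags
    p_after_vision_led = np.array([0.10, 0.30, 0.70, 0.90, 0.80, 0.40, 0.15])
    p_after_audio_led = np.array([0.20, 0.50, 0.85, 0.85, 0.55, 0.25, 0.10])

    (_, pss_v, _), _ = curve_fit(gauss, soas, p_after_vision_led, p0=[1.0, 0.0, 150.0])
    (_, pss_a, _), _ = curve_fit(gauss, soas, p_after_audio_led, p0=[1.0, 0.0, 150.0])
    print(f"PSS shift: {pss_v - pss_a:.1f} ms")  # contingent on the preceding trial
    ```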

  1. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  2. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  3. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from...

  4. Psychometric evaluation of the Orofacial Pain Scale for Non-Verbal Individuals as a screening tool for orofacial pain in people with dementia.

    Science.gov (United States)

    Delwel, Suzanne; Perez, Roberto S G M; Maier, Andrea B; Hertogh, Cees M P M; de Vet, Henrica C W; Lobbezoo, Frank; Scherder, Erik J A

    2018-04-29

    The aim of this study was to describe the psychometric evaluation of the Orofacial Pain Scale for Non-Verbal Individuals (OPS-NVI) as a screening tool for orofacial pain in people with dementia. The OPS-NVI has recently been developed and needs psychometric evaluation for clinical use in people with dementia. The pain self-report is imperative as a reference standard and can be provided by people with mild-to-moderate cognitive impairment. The presence of orofacial pain during rest, drinking, chewing and oral hygiene care was observed in people with mild cognitive impairment (MCI) and dementia using the OPS-NVI. Participants who were considered to present a reliable self-report were asked about pain presence, and in all participants, the oral health was examined by a dentist for the presence of potentially painful conditions. After item reduction, inter-rater reliability and criterion validity were determined. The presence of orofacial pain in this population was low (0%-10%), resulting in an average Positive Agreement of 0%-100%, an average Negative Agreement of 77%-100%, a sensitivity of 0%-100% and a specificity of 66%-100% for the individual items of the OPS-NVI. At the same time, the presence of oral problems, such as ulcers, tooth root remnants and caries, was high (64.5%). The orofacial pain presence in this MCI and dementia population was low, resulting in low scores for average Positive Agreement and sensitivity and high scores for average Negative Agreement and specificity. Therefore, the OPS-NVI in its current form cannot be recommended as a screening tool for orofacial pain in people with MCI and dementia. However, the inter-rater reliability and criterion validity of the individual items in this study provide more insight for the further adjustment of the OPS-NVI for diagnostic use. Notably, oral health problems were frequently present, although no pain was reported or observed, indicating that oral health problems cannot be used as a new reference standard.
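
    The agreement statistics reported above (average Positive and Negative Agreement, sensitivity, specificity) all derive from a 2x2 table of observed versus self-reported pain. A minimal sketch with hypothetical counts:

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity and average positive/negative agreement
        from a 2x2 table (observer vs. self-report); counts are hypothetical."""
        sens = tp / (tp + fn) if (tp + fn) else float('nan')
        spec = tn / (tn + fp) if (tn + fp) else float('nan')
        pos_agree = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else float('nan')
        neg_agree = 2 * tn / (2 * tn + fp + fn) if (2 * tn + fp + fn) else float('nan')
        return sens, spec, pos_agree, neg_agree

    print(screening_metrics(tp=2, fp=5, fn=1, tn=42))
    ```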

  5. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    ... an analysis of the teaching of media, following the initiative of Spain and Portugal, international protagonists of some university educational models, was made. Owing to the expansion and focus of information technology and web communication through the Internet, audiovisual aids as technological instruments have gained utility as a dynamic and conciliatory resource, with special characteristics that differentiate them from other resources in the ecosystem of audiovisual aids. As a result of this research, two means of application are proposed: A. A proposal for iconic and audiovisual language as a learning objective and/or as a curriculum subject in the university syllabus, including workshops on the development of audiovisual documents, digital photography and audiovisual production. B. Use of audiovisual resources as an educational medium, which implies a prior training process for teachers in the activities recommended for teachers and students. Consequently, suggestions that allow both means of academic action to be implemented are presented. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  6. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.

  7. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults, even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs, but the neural mechanisms of these deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed the results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than for auditory-only or visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic, readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    The U.S. International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products Containing the Same [DN 2884], concerning certain audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  9. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    The U.S. International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products Containing the Same [DN 2884], concerning certain audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  10. [Accommodation effects of the audiovisual stimulation in the patients experiencing eyestrain with the concomitant disturbances of psychological adaptation].

    Science.gov (United States)

    Shakula, A V; Emel'ianov, G A

    2014-01-01

    The present study was designed to evaluate the effectiveness of audiovisual stimulation on the state of the eye accommodation system in patients experiencing eyestrain with concomitant disturbances of psychological adaptation. It was shown that a course of audiovisual stimulation (viewing a psychorelaxing film accompanied by appropriate music) results in positive dynamics of the objective accommodation parameters (5.9-21.9%) and of the subjective status (4.5-33.2%). Taken together, these findings allow this method to be regarded as a "relaxing preparation" in the integral complex of measures for the preservation of professional vision in this group of patients.

  11. Concurrent Unimodal Learning Enhances Multisensory Responses of Bi-Directional Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    ... modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied as a non-holonomic robotic agent that must orient towards a moving audio-visual target. The circuit continuously learns the best...

  12. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective to improve users' social skills.

  13. Alterations in audiovisual simultaneity perception in amblyopia

    OpenAIRE

    Richards, Michael D.; Goltz, Herbert C.; Wong, Agnes M. F.

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged...

  14. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  15. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

    In the developed world, the cultural value of the audiovisual media gained legitimacy and widening acceptance after World War II, and this is what Africa still requires. There are a lot of problems in Africa, and because of this, activities such as preservation of a historical record, especially in the audiovisual media are seen as ...

  16. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  17. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  18. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  19. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  20. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  1. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  2. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities for the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of the use of audiovisual media texts in a series of social sciences and humanities courses in the university curriculum.

  3. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process this audiovisual interaction is related: the processing of acoustic features, or the classification of stimuli? To investigate this question, event-related potentials were recorded…

  4. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  5. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  6. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  7. Audiovisual consumption and its social logics on the web

    OpenAIRE

    Rose Marie Santini; Juan C. Calvi

    2013-01-01

    This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the Internet global traffic of audiovisual files since 2008 to identify the formats, modes of distribution, and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as "Internet culture".

  8. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  9. Audiovisual perception in amblyopia: A review and synthesis.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  10. Vicarious audiovisual learning in perfusion education.

    Science.gov (United States)

    Rath, Thomas E; Holt, David W

    2010-12-01

    Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high-fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low-fidelity form of simulation instruction: vicarious audiovisual learning. Two low-fidelity modes of instruction were compared: description with text, and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%) (p < 0.05). These results suggest that vicarious audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we teach perfusion in the future, as simulation technology becomes more prevalent.
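
    The group comparison reported above is a straightforward two-sample test; as a generic illustration (the scores below are made up, not the study's data):

      # Generic two-sample comparison of the kind reported above,
      # using Welch's t-test from SciPy (toy scores, not the study's data).
      from scipy.stats import ttest_ind

      video_scores = [90, 80, 100, 90, 85, 95, 90]  # hypothetical % correct
      text_scores = [70, 75, 80, 70, 78, 72, 80]

      t, p = ttest_ind(video_scores, text_scores, equal_var=False)
      print(f"t = {t:.2f}, p = {p:.4f}")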

  11. A linguagem audiovisual da lousa digital interativa no contexto educacional/Audiovisual language of the digital interactive whiteboard in the educational environment

    Directory of Open Access Journals (Sweden)

    Rosária Helena Ruiz Nakashima

    2006-01-01

    solely in the oral and written languages, but is also audiovisual and dynamic, since it allows the student to become not merely a receptor but also a producer of knowledge. Therefore, our schools should be encouraged to use these new technological devices in order to facilitate their job and to promote more interesting and revolutionary classes.

  12. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  13. Summarizing Audiovisual Contents of a Video Program

    Science.gov (United States)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of a given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting the spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both the audio and visual contents of the original video without having to sacrifice either of them.
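
    The paper names a bipartite-graph alignment step; as a generic illustration of that idea (not the authors' exact algorithm), a maximum-weight one-to-one pairing between audio sentences and video segments can be computed with scipy.optimize.linear_sum_assignment over a hypothetical score matrix:

      # Illustrative bipartite audio-visual alignment: choose the one-to-one
      # pairing of spoken sentences and video segments with maximal total score.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Hypothetical score[i, j]: how well audio sentence i matches video
      # segment j (e.g., temporal overlap plus a speaker-face bonus).
      score = np.array([[0.9, 0.1, 0.0],
                        [0.2, 0.8, 0.3],
                        [0.0, 0.4, 0.7]])

      rows, cols = linear_sum_assignment(-score)  # negated: the solver minimizes
      for i, j in zip(rows, cols):
          print(f"audio sentence {i} -> video segment {j} (score {score[i, j]:.1f})")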

  14. Alterations in audiovisual simultaneity perception in amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.

  15. Alterations in audiovisual simultaneity perception in amblyopia.

    Directory of Open Access Journals (Sweden)

    Michael D Richards

    Full Text Available Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
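
    One standard way to estimate such a simultaneity window (a sketch under common assumptions, not necessarily the authors' fitting procedure) is to fit a Gaussian-shaped psychometric function to the proportion of "simultaneous" responses across SOAs and read off the two 50% crossings:

      # Illustrative simultaneity-window fit: model p("simultaneous") as a
      # Gaussian over SOA and take the two 50% crossings as the window edges.
      import numpy as np
      from scipy.optimize import curve_fit

      def gauss(soa, amp, mu, sigma):
          return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

      # Hypothetical data: SOA in ms (negative = auditory lead)
      soa = np.array([-450, -300, -150, 0, 150, 300, 450], float)
      p_sim = np.array([0.05, 0.30, 0.80, 0.95, 0.85, 0.40, 0.10])

      (amp, mu, sigma), _ = curve_fit(gauss, soa, p_sim, p0=[1.0, 0.0, 150.0])
      half = sigma * np.sqrt(2 * np.log(2 * amp))  # where gauss(.) == 0.5
      print(f"window: {mu - half:.0f} to {mu + half:.0f} ms "
            f"(width {2 * half:.0f} ms)")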

  16. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  17. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  18. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.05). Audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
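
    The race-model analysis referred to here is conventionally Miller's race model inequality: integration is inferred where the cumulative distribution of audiovisual response times exceeds the sum of the unisensory distributions. A minimal sketch with invented reaction times:

      # Minimal race-model-inequality check (after Miller, 1982), with toy
      # data: a violation (> 0) where the audiovisual RT CDF exceeds the sum
      # of the auditory and visual CDFs suggests genuine integration.
      import numpy as np

      def ecdf(rts, t):
          """Empirical CDF of reaction times, evaluated at times t."""
          rts = np.sort(np.asarray(rts))
          return np.searchsorted(rts, t, side="right") / len(rts)

      rt_a = [420, 450, 480, 510, 560]    # hypothetical auditory RTs (ms)
      rt_v = [430, 460, 500, 530, 570]    # hypothetical visual RTs
      rt_av = [360, 390, 410, 440, 480]   # hypothetical audiovisual RTs

      t = np.arange(300, 601, 10)
      bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
      print("max violation:", (ecdf(rt_av, t) - bound).max())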

  19. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  20. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  1. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
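
    "Partialing out nonverbal IQ" denotes a partial correlation; as a generic sketch (with simulated data, not the study's), it can be computed by regressing both variables on IQ and correlating the residuals:

      # Illustrative partial correlation: relate onset age and incongruent-
      # audio scores after removing variance explained by nonverbal IQ.
      import numpy as np

      def residualize(y, x):
          slope, intercept = np.polyfit(x, y, 1)   # simple linear regression
          return y - (slope * x + intercept)

      rng = np.random.default_rng(0)               # toy data, for shape only
      iq = rng.normal(100, 15, 50)
      onset_age = 0.05 * iq + rng.normal(0, 2, 50)
      audio_score = -0.02 * iq + rng.normal(0, 1, 50)

      r_partial = np.corrcoef(residualize(onset_age, iq),
                              residualize(audio_score, iq))[0, 1]
      print(f"partial r (controlling for IQ) = {r_partial:.2f}")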

  2. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task for both informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  3. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  4. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  5. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  6. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

    Full Text Available This paper presents the results of a study on the process by which adolescents and young people acquire the interpretative skills needed to decode audiovisual texts. Such competence is understood as the ability to grasp the meanings connoted beneath the literal discourse of audiovisual texts. The study compared two variables: on the one hand, the acquisition of such skills through personal and social experience in the consumption of audiovisual products (which varies with age); on the other, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  7. Exposure to audiovisual programs as sources of authentic language ...

    African Journals Online (AJOL)

    Exposure to audiovisual programs as sources of authentic language input and second ... Southern African Linguistics and Applied Language Studies ... The findings of the present research contribute more insights on the type and amount of ...

  8. On-line repository of audiovisual material feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  9. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is whether perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity ... on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less...

  10. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion, where seeing the talking face influences the auditory phonetic percept, and by the audiovisual detection advantage, where seeing the talking face influences the detectability of the acoustic speech signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  11. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  12. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

    Full Text Available The use of cutting-edge technologies such as wearable devices to control reactive audiovisual systems is rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data-glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture of the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in “As sete mulheres de Jeremias Epicentro”, a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.
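
    For illustration only, a minimal OSC sender in Python using the python-osc package; the address paths and payloads below are invented, not the WMTSensorGlove's actual namespace:

      # Minimal OSC sender sketch (python-osc); addresses/values are
      # hypothetical, not the project's real schema.
      from pythonosc.udp_client import SimpleUDPClient

      client = SimpleUDPClient("127.0.0.1", 9000)  # host/port of the AV engine
      # Hypothetical glove orientation (roll, pitch, yaw in degrees)
      client.send_message("/glove/orientation", [12.5, -3.0, 48.2])
      client.send_message("/glove/flex/index", 0.73)  # normalized finger bend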

  13. Audio-visual assistance in co-creating transition knowledge

    Science.gov (United States)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecological, economic, and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes of our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition is rather reliant on pioneers that define new role models, on change agents that mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by the social and natural sciences together with society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, languages, and knowledge levels of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points to users with different requirements. Two examples shall illustrate the advantages and restrictions of the approach.

  14. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    Science.gov (United States)

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
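
    The visual alpha-band (8-14 Hz) suppression reported here is typically quantified as band power relative to a baseline window; a generic sketch of such a measure (not the authors' pipeline), using Welch's method:

      # Generic alpha-band (8-14 Hz) power estimate via Welch's method; the
      # signal is synthetic, standing in for a posterior EEG channel.
      import numpy as np
      from scipy.signal import welch

      fs = 250.0                      # hypothetical sampling rate (Hz)
      t = np.arange(0, 2.0, 1 / fs)   # 2 s: 10 Hz "alpha" plus noise
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

      f, pxx = welch(eeg, fs=fs, nperseg=256)
      band = (f >= 8) & (f <= 14)
      alpha_power = pxx[band].sum() * (f[1] - f[0])  # integrate the band
      print(f"alpha-band power: {alpha_power:.3f}")
      # Suppression = a decrease of this quantity relative to baseline.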

  15. Audiovisual consumption and its social logics on the web

    Directory of Open Access Journals (Sweden)

    Rose Marie Santini

    2013-06-01

    Full Text Available This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the Internet global traffic of audiovisual files since 2008 to identify the formats, modes of distribution, and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as "Internet culture".

  16. Narrativa audiovisual. Estrategias y recursos [Reseña

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

    Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  17. [Audio-visual communication in the history of psychiatry].

    Science.gov (United States)

    Farina, B; Remoli, V; Russo, F

    1993-12-01

    The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings and the first daguerreotype prints to cinematography and modern audiovisual systems, they observe an increasing diffusion of new communication techniques in psychiatry, and describe the use of the different techniques in psychiatric practice. The article ends with a brief review of the current applications of audiovisual media in therapy, training, teaching, and research.

  18. Plan empresa productora de audiovisuales : La Central Audiovisual y Publicidad

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    This document presents the business plan for La Central Publicidad y Audiovisual, a company dedicated to the pre-production, production, and post-production of audiovisual material. The company will be located in the city of Cali, and its target market comprises the city's various types of companies, including small, medium-sized, and large enterprises.

  19. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  20. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  1. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05), whereas audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903

  2. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    What are the additional scheduling requirements for audiovisual, cartographic, and related records? 36 CFR 1237.14 (Parks, Forests, and Public Property; National Archives and Records Administration; Records Management). The disposition instructions should also provide that...

  3. Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection.

    Science.gov (United States)

    Uno, Takeshi; Kawai, Kensuke; Sakai, Katsuyuki; Wakebe, Toshihiro; Ibaraki, Takuya; Kunii, Naoto; Matsuo, Takeshi; Saito, Nobuhito

    2015-01-01

    Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous and incongruent with auditory information. However, whereas the integration of audiovisual inputs has been intensively researched, the neural basis of this auditory selection from audiovisual information is unknown. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA when patients were presented with large audiovisual incongruence than with small incongruence, especially when the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when they incorrectly perceived it due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.

  4. Specific components of face perception in the human fusiform gyrus studied by tomographic estimates of magnetoencephalographic signals: a tool for the evaluation of non-verbal communication in psychosomatic paradigms

    Directory of Open Access Journals (Sweden)

    Ioannides Andreas A

    2007-12-01

    Full Text Available Abstract Aims The aim of this study was to determine the specific spatiotemporal activation patterns of face perception in the fusiform gyrus (FG). The FG is a key area in the specialized brain system that makes possible the recognition of faces with ease and speed in our daily life. Characterization of the FG response provides a quantitative method for evaluating the fundamental functions that contribute to non-verbal communication in various psychosomatic paradigms. Methods The MEG signal was recorded during passive visual stimulus presentation with three stimulus types – Faces, Hands and Shoes. The stimuli were presented separately to the central and peripheral visual fields. We performed statistical parametric mapping (SPM) analysis of tomographic estimates of activity to compare activity between a pre- and post-stimulus period for the same object (baseline test), and activity between objects (active test). The time course of regional activation curves was analyzed for each stimulus condition. Results The SPM baseline test revealed a response to each stimulus type, which was very compact at the initial segment of the main MFG170 component. For hands and shoes the area of significant change remained compact. For faces the area expanded widely within a few milliseconds and its boundaries engulfed the other object areas. The active test demonstrated that activity for faces was significantly larger than the activity for hands. The same face-specific compact area as in the baseline test was identified, which then again expanded widely. For each stimulus type and for presentation in each of the visual field locations, the analysis of the time course of FG activity identified three components in the FG: MFG100, MFG170, and MFG200 – all showed preference for faces. Conclusion Early compact face-specific activity in the FG expands widely along the occipito-ventral brain within a few milliseconds. The significant difference between faces and the other object stimuli in MFG

  5. Memory and learning with rapid audiovisual sequences

    Science.gov (United States)

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  6. Audiovisual perceptual learning with multiple speakers.

    Science.gov (United States)

    Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J

    2016-05-01

    One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "b-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "b-face" than with an image of the "D-face." This was not the case in the control condition when the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

  8. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there has been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Causal inference of asynchronous audiovisual speech

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2013-11-01

    Full Text Available During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
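
    The causal-inference step lends itself to a compact illustration. The sketch below computes the posterior probability that voice and face share a common cause given a measured asynchrony, assuming Gaussian sensory noise under a common cause and a broad uniform distribution under separate causes; sigma_ms, prior_common and range_ms are hypothetical parameters, not the paper's fitted values.

    ```python
    import math

    def p_common(asynchrony_ms, sigma_ms=80.0, prior_common=0.5, range_ms=500.0):
        """Posterior probability that the auditory and visual cues share a
        common cause, given a measured audiovisual asynchrony."""
        # Likelihood under a common cause: asynchrony near zero plus Gaussian noise.
        like_c1 = (math.exp(-asynchrony_ms ** 2 / (2 * sigma_ms ** 2))
                   / (sigma_ms * math.sqrt(2 * math.pi)))
        # Likelihood under independent causes: asynchrony uniform over a broad range.
        like_c2 = 1.0 / (2 * range_ms)
        return (like_c1 * prior_common
                / (like_c1 * prior_common + like_c2 * (1 - prior_common)))

    for dt in (0, 100, 200, 400):
        print(dt, round(p_common(dt), 3))
    ```

    In a synchrony judgment task, this posterior can be read directly as the predicted proportion of "synchronous" responses at each asynchrony, which is what makes the model's parameters interpretable in stimulus and subject terms.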

  10. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    Science.gov (United States)

    Moirand, Sophie

    1974-01-01

    This article examines the usefullness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  11. Challenges and opportunities for audiovisual diversity in the Internet

    Directory of Open Access Journals (Sweden)

    Trinidad García Leiva

    2017-06-01

    Full Text Available http://dx.doi.org/10.5007/2175-7984.2017v16n35p132 At the gates of the first quarter of the 21st century, nobody doubts that the value chain of the audiovisual industry has undergone important transformations. The digital era presents opportunities for cultural enrichment as well as new challenges. After presenting a general portrait of the audiovisual industries in the digital era, taking the Spanish case as a point of departure and paying attention to the players and logics in tension, this paper presents some notes on the advantages and disadvantages that exist for the diversity of audiovisual production, distribution and consumption online. It is argued here that the diversity of the audiovisual sector online is not guaranteed, because the formula that has made some players successful and powerful is based on walled-garden models of content monetization (which, moreover, restrict reproduction and circulation by and among consumers). The final objective is to present some ideas about the elements that prevent the strengthening of the diversity of the audiovisual industry in the digital scenario. The barriers to overcome are classified as technological, financial, social, legal and political.

  12. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and the viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real-time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  13. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive ... from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  14. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module, which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed to address the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, i.e., when the network is already set up, the aud...
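
    The combination step can be illustrated with a generic parametric form of the kind common in audiovisual quality modeling; the sketch below is not the book's model, and the coefficients are placeholders rather than fitted values.

    ```python
    def audiovisual_quality(q_audio, q_video, a=0.25, b=0.15, c=0.15, d=0.10):
        """Combine per-modality quality estimates (e.g., on a 1-5 MOS scale) into
        an audiovisual estimate; the multiplicative term captures the common
        finding that audio and video quality interact. Coefficients illustrative."""
        q_av = a + b * q_audio + c * q_video + d * q_audio * q_video
        return max(1.0, min(5.0, q_av))   # clip to the MOS range

    print(audiovisual_quality(q_audio=4.2, q_video=3.1))
    ```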

  15. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors of red and purple (the latter color known to minimally activate the extra-genicular pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  16. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article takes a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed, perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is: how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships? We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We report a study that draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants were shown several audio-visual mapping prototypes, after which we posed quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

  17. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  18. earGram Actors: An Interactive Audiovisual System Based on Social Behavior

    Directory of Open Access Journals (Sweden)

    Peter Beyls

    2015-11-01

    Full Text Available In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems with artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns. On the other hand, earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A performer thus improvises within the behavioural scope of the system while evaluating the apparent connections between parameter values and actual complexity of the system output.
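
    The underlying mechanism, simple local rules producing complex collective patterns, can be sketched with a toy agent loop; the attraction rule and the mapping of actor positions to synthesis parameters below are illustrative guesses, not earGram's actual implementation.

    ```python
    import random

    class Actor:
        """A minimal social agent with a one-dimensional state."""
        def __init__(self):
            self.x = random.uniform(0, 1)

        def step(self, others, friendliness=0.05):
            center = sum(o.x for o in others) / len(others)
            self.x += friendliness * (center - self.x)   # attraction to the group
            self.x += random.uniform(-0.01, 0.01)        # individual noise

    actors = [Actor() for _ in range(8)]
    for _ in range(100):
        for a in actors:
            a.step([o for o in actors if o is not a])

    # Each actor's position could be mapped to a grain-selection parameter in [0, 1];
    # a performer perturbing `friendliness` would reshape the collective pattern.
    print([round(a.x, 2) for a in actors])
    ```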

  19. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    Science.gov (United States)

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  20. Saudi normative data for the Wisconsin Card Sorting test, Stroop test, Test of Non-verbal Intelligence-3, Picture Completion and Vocabulary (subtest of the Wechsler Adult Intelligence Scale-Revised).

    Science.gov (United States)

    Al-Ghatani, Ali M; Obonsawin, Marc C; Binshaig, Basmah A; Al-Moutaery, Khalaf R

    2011-01-01

    There are 2 aims for this study: first, to collect normative data for the Wisconsin Card Sorting Test (WCST), Stroop test, Test of Non-verbal Intelligence (TONI-3), Picture Completion (PC) and Vocabulary (VOC) sub-tests of the Wechsler Adult Intelligence Scale-Revised for use in a Saudi Arabian culture, and second, to use the normative data provided to generate the regression equations. To collect the normative data and generate the regression equations, 198 healthy individuals were selected to provide a representative distribution for age, gender, years of education, and socioeconomic class. The WCST, Stroop test, TONI-3, PC, and VOC were administered to the healthy individuals. This study was carried out at the Department of Clinical Neurosciences, Riyadh Military Hospital, Riyadh, Kingdom of Saudi Arabia from January 2000 to July 2002. Normative data were obtained for all tests, and tables were constructed to interpret scores for different age groups. Regression equations to predict performance on the 3 tests of frontal function from scores on tests of fluid (TONI-3) and premorbid intelligence were generated from the data from the healthy individuals. The data collected in this study provide normative tables for 3 tests of frontal lobe function and for tests of general intellectual ability for use in Saudi Arabia. The data also provide a method to estimate pre-injury ability without the use of verbally based tests.
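
    The regression step is straightforward to reproduce. The sketch below fits an ordinary-least-squares equation predicting a frontal-function score from TONI-3 score, education and age; all numbers are fabricated stand-ins for the normative sample, shown only to illustrate the fitting.

    ```python
    import numpy as np

    # Hypothetical normative sample: columns are TONI-3 score, years of education, age.
    X = np.array([[95, 12, 34], [110, 16, 28], [88, 9, 51],
                  [102, 14, 42], [97, 11, 39]], dtype=float)
    y = np.array([41, 52, 33, 46, 40], dtype=float)   # fabricated frontal-test scores

    # Ordinary least squares with an intercept term.
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

    def predict(toni, educ, age):
        """Predicted frontal-test score from the fitted regression equation."""
        return coef @ np.array([1.0, toni, educ, age])

    print(round(predict(100, 12, 45), 1))
    ```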

  1. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sport audio-visual documentation in the information systems of national, regional and local television channels is analyzed. To this end, the documentary chain through which sport audio-visual information passes is traced in order to analyze each of its parameters, yielding a series of recommendations and norms for the preparation of the sport audio-visual record. Evidently the audio-visual sport documentation difference i...

  2. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  3. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create audiovisual...

  4. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  5. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  6. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  7. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... audiovisual records? 1237.16 Section 1237.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.16 How do agencies store audiovisual records? Agencies must maintain appropriate storage conditions for permanent...

  8. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  9. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
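
    The classification logic, relating frame-by-frame mask visibility to trial-by-trial responses, can be sketched with simulated data; the mask matrix, the response model and the simple difference-of-means weighting below are illustrative stand-ins for the authors' actual analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_frames = 500, 30

    # Simulated per-trial, per-frame visibility of the mouth region (1 = unobscured).
    masks = rng.integers(0, 2, size=(n_trials, n_frames))

    # Simulated responses: /apa/ reports become likelier when hypothetical
    # "critical" frames 10-14 are obscured (roughly matching the ~5% vs ~35% rates).
    critical_visibility = masks[:, 10:15].mean(axis=1)
    p_apa = 0.05 + 0.45 * (1.0 - critical_visibility)
    apa_reported = rng.random(n_trials) < p_apa

    # Classification image: frames whose visibility differs most between trials
    # where the McGurk percept (/ata/) won and trials where /apa/ was reported.
    class_image = masks[~apa_reported].mean(axis=0) - masks[apa_reported].mean(axis=0)
    print(np.argsort(class_image)[-5:])   # frames most tied to the McGurk percept
    ```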

  11. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    Science.gov (United States)

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  12. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    Science.gov (United States)

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  13. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers ... that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  14. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    Science.gov (United States)

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant, is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then

  15. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR imaging was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p biofeedback improves prediction accuracy. This would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.
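
    The reported error metric is easy to reproduce. Below is a short sketch computing the RMSE between a recorded and a predicted respiratory trace; the sinusoidal toy signal and the naive last-value-hold "predictor" are stand-ins for the kernel density estimation algorithm used in the study.

    ```python
    import numpy as np

    def rmse(actual, predicted):
        """Root mean square error between the recorded respiratory trace
        and the trace predicted some latency ahead."""
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return np.sqrt(np.mean((actual - predicted) ** 2))

    # Toy example: a noisy ~4 s breathing cycle sampled at 30 Hz, "predicted"
    # by simply holding the last observed value over a 200 ms latency.
    t = np.arange(0, 60, 1 / 30.0)
    signal = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)
    lag = 6   # 200 ms at 30 Hz
    print(rmse(signal[lag:], signal[:-lag]))
    ```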

  16. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information, which was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance and a dedicated operational team dealt with the logistics of the project including: ethics; finance; scriptwriting; filming; editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  17. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  18. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  19. Imitation Therapy for Non-Verbal Toddlers

    Science.gov (United States)

    Gill, Cindy; Mehta, Jyutika; Fredenburg, Karen; Bartlett, Karen

    2011-01-01

    When imitation skills are not present in young children, speech and language skills typically fail to emerge. There is little information on practices that foster the emergence of imitation skills in general and verbal imitation skills in particular. The present study attempted to add to our limited evidence base regarding accelerating the…

  20. Spontaneous Non-verbal Counting in Toddlers

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2016-01-01

    A wealth of studies have investigated numerical abilities in infants and in children aged 3 or above, but research on pre-counting toddlers is sparse. Here we devised a novel version of an imitation task that was previously used to assess spontaneous focusing on numerosity (i.e. the predisposition to grasp numerical properties of the environment)…

  1. Non-verbal communication through sensor fusion

    Science.gov (United States)

    Tairych, Andreas; Xu, Daniel; O'Brien, Benjamin M.; Anderson, Iain A.

    2016-04-01

    When we communicate face to face, we subconsciously engage our whole body to convey our message. In telecommunication, e.g. during phone calls, this powerful information channel cannot be used. Capturing nonverbal information from body motion and transmitting it to the receiver parallel to speech would make these conversations feel much more natural. This requires a sensing device that is capable of capturing different types of movements, such as the flexion and extension of joints, and the rotation of limbs. In a first embodiment, we developed a sensing glove that is used to control a computer game. Capacitive dielectric elastomer (DE) sensors measure finger positions, and an inertial measurement unit (IMU) detects hand roll. These two sensor technologies complement each other, with the IMU allowing the player to move an avatar through a three-dimensional maze, and the DE sensors detecting finger flexion to fire weapons or open doors. After demonstrating the potential of sensor fusion in human-computer interaction, we take this concept to the next level and apply it in nonverbal communication between humans. The current fingerspelling glove prototype uses capacitive DE sensors to detect finger gestures performed by the sending person. These gestures are mapped to corresponding messages and transmitted wirelessly to another person. A concept for integrating an IMU into this system is presented. The fusion of the DE sensor and the IMU combines the strengths of both sensor types, and therefore enables very comprehensive body motion sensing, which makes a large repertoire of gestures available to nonverbal communication over distances.
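
    The fusion idea, combining per-finger flexion from the DE sensors with hand orientation from the IMU, can be sketched as a joint lookup; the threshold and the gesture-to-message table below are hypothetical, not the glove's actual mapping.

    ```python
    # Minimal sketch: five dielectric-elastomer readings give finger flexion,
    # the IMU gives hand roll; a gesture is a joint pattern over both sources.
    FLEX_THRESHOLD = 0.5   # normalized capacitance change treated as "bent" (assumed)

    GESTURES = {            # hypothetical gesture-to-message table
        ((1, 1, 0, 0, 0), "palm_up"): "yes",
        ((0, 0, 0, 0, 0), "palm_down"): "no",
        ((1, 0, 0, 0, 1), "palm_up"): "call me",
    }

    def classify(finger_flexion, roll_deg):
        """Map five DE-sensor readings (0..1) plus IMU roll to a message."""
        fingers = tuple(int(f > FLEX_THRESHOLD) for f in finger_flexion)
        orientation = "palm_up" if -45 <= roll_deg <= 45 else "palm_down"
        return GESTURES.get((fingers, orientation), "unknown")

    print(classify([0.8, 0.9, 0.1, 0.2, 0.1], roll_deg=10))   # -> "yes"
    ```

    Keeping the two sensor streams as independent keys of one lookup mirrors how the technologies complement each other: the DE sensors resolve joint flexion, while the IMU resolves limb orientation that capacitive sensing alone cannot.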

  2. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine...

  3. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

    In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre ... and museum studies, existing case studies, and real-life observations, the suggested framework instead stresses particular characteristics of contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication...

  4. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must be...

  5. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    Science.gov (United States)

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors.

  6. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  7. Audio-Visual Aids for Cooperative Education and Training.

    Science.gov (United States)

    Botham, C. N.

    Within the context of cooperative education, audiovisual aids may be used for spreading the idea of cooperatives and helping to consolidate study groups; for the continuous process of education, both formal and informal, within the cooperative movement; for constant follow up purposes; and for promoting loyalty to the movement. Detailed…

  8. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  9. Today's and tomorrow's retrieval practice in the audiovisual archive

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2010-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video

  10. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, two important issues in the selection and preparation of TV news for language learning are the content of the news and its linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  11. Narrativa audiovisual i cinema d'animació per ordinador (Audiovisual Narrative and Computer Animation Cinema)

    OpenAIRE

    Duran Castells, Jaume

    2009-01-01

    FROM THE THESIS: This doctoral thesis studies the relations between audiovisual narrative and computer-animated cinema, and analyzes the Pixar Animation Studios feature films released between 1995 and 2006. (Text is in Catalan.)

  12. School Building Design and Audio-Visual Resources.

    Science.gov (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  13. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  14. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    Science.gov (United States)

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
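
    The correlation measure can be sketched in a few lines: quadratic mutual information is the integrated squared difference between the joint density and the product of the marginals. The toy features below are synthetic, and SciPy's fixed-bandwidth gaussian_kde stands in for the adaptive-bandwidth estimator used in the paper.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    # Toy one-dimensional audio and visual features, correlated by construction.
    audio = rng.standard_normal(300)
    visual = 0.7 * audio + 0.3 * rng.standard_normal(300)

    # Kernel density estimates of the joint and marginal densities.
    joint = gaussian_kde(np.vstack([audio, visual]))
    pa = gaussian_kde(audio)
    pv = gaussian_kde(visual)

    # Quadratic mutual information: integral of (p(a,v) - p(a)p(v))^2,
    # approximated on a regular grid.
    g = np.linspace(-3, 3, 60)
    A, V = np.meshgrid(g, g)
    pts = np.vstack([A.ravel(), V.ravel()])
    diff = joint(pts) - pa(A.ravel()) * pv(V.ravel())
    cell = (g[1] - g[0]) ** 2
    print(np.sum(diff ** 2) * cell)   # larger for correlated feature pairs
    ```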

  16. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    Science.gov (United States)

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  17. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  18. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  19. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect, in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light, by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions were also tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
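
    For readers unfamiliar with the just noticeable difference computation mentioned above, the sketch below shows the standard psychometric-function approach: fit a cumulative Gaussian to the proportion of "second light first" responses across stimulus onset asynchronies and derive the JND from its slope. The data values are illustrative placeholders, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P("second light first") as a cumulative Gaussian of SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Illustrative data: negative SOA = the first light led by that much.
soas = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
p_second_first = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.88, 0.97])

(pss, sigma), _ = curve_fit(psychometric, soas, p_second_first, p0=[0.0, 80.0])
jnd = sigma * norm.ppf(0.75)  # SOA shift from the 50% to the 75% point
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```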

  20. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    Science.gov (United States)

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience-driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  1. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  2. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  3. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  4. Users Requirements in Audiovisual Search: A Quantitative Approach

    NARCIS (Netherlands)

    Nadeem, Danish; Ordelman, Roeland J.F.; Aly, Robin; Verbruggen, Erwin; Aalberg, Trond; Papatheodorou, Christos; Dobreva, Milena; Tsakonas, Giannis; Farrugia, Charles J.

    2013-01-01

    This paper reports the results of a quantitative analysis of user requirements for audiovisual search that allows requirements to be categorised and compared across user groups. The categorisation provides clear directions with respect to the prioritisation of system features

  5. Selected Audio-Visual Materials for Consumer Education. [New Version.

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  6. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to congruent audiovisual stimuli were 57 ms faster than those to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.
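
    SAM itself requires beamforming over the full MEG sensor array, but the event-related (de)synchronization measure it maps can be illustrated compactly: band-limit the signal, take its power envelope, and express post-stimulus power as a percent change from baseline. The sketch below, with hypothetical sampling parameters of our own choosing, follows that recipe.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_ers(signal, fs, band, baseline, window):
    """Percent band-power change in `window` relative to `baseline`.

    Negative values = event-related desynchronization (ERD),
    positive values = event-related synchronization (ERS).
    `baseline` and `window` are (start, stop) pairs in samples.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, signal))) ** 2
    base = power[baseline[0]:baseline[1]].mean()
    return 100.0 * (power[window[0]:window[1]].mean() - base) / base

# Synthetic example: 10-Hz activity whose amplitude halves mid-recording.
fs = 250
t = np.arange(0, 4, 1.0 / fs)
envelope = np.r_[np.ones(500), 0.5 * np.ones(500)]
eeg = envelope * np.sin(2 * np.pi * 10 * t) + 0.05 * np.random.randn(t.size)
print(erd_ers(eeg, fs, (8, 16), baseline=(0, 400), window=(600, 1000)))  # ~ -75
```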

  7. Use of Audiovisual Media and Equipment by Medical Educationists ...

    African Journals Online (AJOL)

    The most frequently used audiovisual medium and equipment is the transparency on an overhead projector (OHP), while the medium and equipment barely used for teaching is computer graphics on a multimedia projector. This study also suggests ways of improving teaching-learning processes in medical education, ...

  8. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  9. Sistema audiovisual para reconocimiento de comandos Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos

    2011-08-01

    Full Text Available We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. The audio signal was represented using Mel cepstral coefficients and their first two temporal derivatives. To characterize the video, high-level visual features were tracked automatically throughout the whole sequence. Automatic initialization of the algorithm used color transformations and active contours with gradient vector flow information ("GVF snakes") on the lip region, whereas tracking used similarity measures between neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of the automatic speech recognition system using audio information alone (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then present the design of systems using only video features (VSR) and using combined audio and video features (AVSR). Finally, the results of the three systems are compared on an in-house database in Spanish and French, and the influence of acoustic noise is examined, showing that the AVSR system is more robust than ASR and VSR.
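
    As a rough sketch of the audio-only (ASR) branch described above, the following code computes Mel cepstral coefficients with their first and second temporal derivatives and classifies isolated commands with one HMM per word. It assumes librosa and hmmlearn as stand-ins for the authors' tooling, and the model sizes are guesses; the visual (VSR) and combined (AVSR) branches are omitted.

```python
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

def audio_features(wav_path):
    """13 Mel cepstral coefficients plus first and second time derivatives."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.T  # (frames, 39)

def train_command_models(training_files):
    """training_files: {command: [wav paths]}; one HMM per command word."""
    models = {}
    for command, paths in training_files.items():
        seqs = [audio_features(p) for p in paths]
        X, lengths = np.vstack(seqs), [s.shape[0] for s in seqs]
        models[command] = GaussianHMM(n_components=5, covariance_type="diag",
                                      n_iter=50).fit(X, lengths)
    return models

def recognize(models, wav_path):
    """Isolated-word decision: the command whose HMM scores highest."""
    X = audio_features(wav_path)
    return max(models, key=lambda cmd: models[cmd].score(X))
```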

  10. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced "online" effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual

  11. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses and has begun to uncover the brain regions and large-scale neural activity that contribute to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given

  12. DANCING AROUND THE SUBJECT WITH ROBOTS: ETHICAL COMMUNICATION AS A “TRIPLE AUDIOVISUAL REALITY”

    Directory of Open Access Journals (Sweden)

    Eleanor Sandry

    2012-06-01

    Full Text Available Communication is often thought of as a bridge between self and other, supported by what they have in common, and pursued with the aim of further developing this commonality. However, theorists such as John Durham Peters and Amit Pinchevski argue that this conception, connected as it is with the need to resolve and remove difference, is inherently ‘violent’ to the other and therefore unethical. To encourage ethical communication, they suggest that theory should instead support acts of communication for which the differences between self and other are not only retained, but also valued for the possibilities they offer. As a means of moving towards a more ethical stance, this paper stresses the importance of understanding communication as more than the transmission of information in spoken and written language. In particular, it draws on Fernando Poyatos’ research into simultaneous translation, which suggests that communication is a “triple audiovisual reality” consisting of language, paralanguage and kinesics. This perspective is then extended by considering the way in which Alan Fogel’s dynamic systems model also stresses the place of nonverbal signs. The paper explores and illustrates these theories by considering human-robot interactions because analysis of such interactions, with both humanoid and non-humanoid robots, helps to draw out the importance of paralanguage and kinesics as elements of communication. The human-robot encounters discussed here also highlight the way in which these theories position both reason and emotion as valuable in communication. The resulting argument – that communication occurs as a dynamic process, relying on a triple audiovisual reality drawn from both reason and emotion – supports a theoretical position that values difference, rather than promoting commonality as a requirement for successful communicative events. In conclusion, this paper extends this theory and suggests that it can form a basis

  13. Robust audio-visual speech recognition under noisy audio-video conditions.

    Science.gov (United States)

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either or both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, in both clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
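
    The stream-weighting principle behind MWSP can be illustrated in a few lines: for each frame, audio and video log-likelihoods are combined under a grid of exponent weights, and the weight yielding the largest maximum posterior is kept. This is a simplified sketch of the idea under our own toy interface, not the authors' decoder.

```python
import numpy as np

def mwsp_combine(log_p_audio, log_p_video, weights=np.linspace(0.0, 1.0, 11)):
    """Frame-wise fusion of (frames, classes) log-likelihood arrays.

    For every frame, tries each audio exponent weight w (video gets 1 - w)
    and keeps the combination whose maximum class posterior is largest.
    """
    fused = []
    for la, lv in zip(log_p_audio, log_p_video):
        best = None
        for w in weights:
            score = w * la + (1.0 - w) * lv
            log_post = score - np.logaddexp.reduce(score)  # normalise
            if best is None or log_post.max() > best.max():
                best = log_post
        fused.append(best)
    return np.vstack(fused)

# Toy example: 2 frames, 3 classes; an uninformative audio frame is
# effectively down-weighted in favour of the confident video stream.
la = np.log(np.array([[0.7, 0.2, 0.1], [0.34, 0.33, 0.33]]))
lv = np.log(np.array([[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]]))
print(np.exp(mwsp_combine(la, lv)))
```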

  14. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

    Full Text Available This study aims to further examine the cross-cultural differences in multisensory emotion perception between Western and East Asian people. In this study, we recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions. We then conducted a validation experiment of the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged what kind of emotion each actor was expressing by choosing among six options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio of the same videos without the video images, while the task was the same. We analyzed their categorization responses based on accuracy and confusion matrices and created a controlled audiovisual stimulus set.

  15. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  16. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  17. Academic e-learning experience in the enhancement of open access audiovisual and media education

    OpenAIRE

    Pacholak, Anna; Sidor, Dorota

    2015-01-01

    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM). It is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal for a comprehensive and open programme for the audiovisual (me...

  18. Neural correlates of quality during perception of audiovisual stimuli

    CERN Document Server

    Arndt, Sebastian

    2016-01-01

    This book presents a new approach to examining the perceived quality of audiovisual sequences. It uses electroencephalography to understand how exactly user quality judgments are formed within a test participant, and what the physiologically based implications of exposure to lower-quality media might be. The book redefines experimental paradigms of using EEG in the area of quality assessment so that they better suit the requirements of standard subjective quality testing. Experimental protocols and stimuli are adjusted accordingly.

  19. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    Science.gov (United States)

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

    Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercises performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise lasted 10 min (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rate of perceived exertion and heart rate variability were monitored to further our understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree when compared to PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation was real. This mirroring mechanism has the potential to up- or down-regulate cardiac work as if the exercise intensities were in fact different in each condition.

  20. Audiovisual speech perception development at varying levels of perceptual processing

    OpenAIRE

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the le...

  1. Health Education Audiovisual Media on Mental Illness for Family

    OpenAIRE

    Wahyuningsih, Dyah; Wiyati, Ruti; Subagyo, Widyo

    2012-01-01

    This study aimed to produce health education media in the form of Video Compact Discs (VCDs). The first disc covers how to care for a patient with social isolation, and the second disc covers how to care for a patient with violent behaviour. The audiovisual media were delivered to families in the psychiatric ward of Banyumas Hospital. The families were divided into two groups: the first group was given health education about social isolation and the second group was given healt...

  2. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs. monetary conditioning) and the multisensory paradigm (2AFC visual detection vs. redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (= conditioned stimulus, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, -50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 - AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.
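
    The two integration measures named above are easy to make concrete. Below is a hedged sketch, using placeholder reaction-time data: multisensory response facilitation is computed from mean RTs as (A + V)/2 - AV, and the race model violation is the amount by which the empirical audiovisual RT distribution exceeds Miller's bound F_A(t) + F_V(t).

```python
import numpy as np

def facilitation(rt_a, rt_v, rt_av):
    """Mean-RT facilitation: positive if AV is faster than the unimodal mean."""
    return (np.mean(rt_a) + np.mean(rt_v)) / 2.0 - np.mean(rt_av)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Largest amount by which F_AV(t) exceeds min(F_A(t) + F_V(t), 1)."""
    ecdf = lambda rt, t: np.mean(np.asarray(rt)[:, None] <= t[None, :], axis=0)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return np.max(ecdf(rt_av, t_grid) - bound)

rng = np.random.default_rng(1)
rt_a = rng.normal(420, 50, 200)   # auditory-only RTs in ms (placeholder data)
rt_v = rng.normal(440, 50, 200)   # visual-only RTs
rt_av = rng.normal(390, 45, 200)  # audiovisual (redundant target) RTs
print(facilitation(rt_a, rt_v, rt_av))                        # ~40 ms
print(race_model_violation(rt_a, rt_v, rt_av, np.linspace(200, 700, 101)))
```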

  3. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  4. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  5. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
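
    A minimal sketch of the modeling approach described above, with synthetic cue values of our own invention: a Gaussian mixture model learns two phonological categories from the joint distribution of one auditory and one visual cue, without category labels, and then yields graded posteriors for mismatched audiovisual tokens.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Two synthetic categories, e.g. /b/ vs /d/: columns are an auditory cue
# (such as F2 onset) and a visual cue (such as lip aperture), z-scored.
cat1 = rng.multivariate_normal([-1.0, -1.0], np.eye(2) * 0.15, 300)
cat2 = rng.multivariate_normal([+1.0, +1.0], np.eye(2) * 0.15, 300)
tokens = np.vstack([cat1, cat2])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(tokens)

# A congruent token falls cleanly into one category; a mismatched token
# (auditory cue from one category, visual cue from the other) yields
# graded posteriors, analogous to ambiguous audiovisual percepts.
print(gmm.predict_proba([[-1.0, -1.0]]))   # congruent token
print(gmm.predict_proba([[-1.0, +1.0]]))   # mismatched token
```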

  6. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the 120 ms delay corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from all the other competing noises.

  7. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  9. Audiovisual integration of speech in a patient with Broca's Aphasia

    Science.gov (United States)

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  10. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  11. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  12. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    NARCIS (Netherlands)

    Talsma, D.; Doty, Tracy J.; Woldorff, Marty G.

    2007-01-01

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and

  13. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  14. Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability

    NARCIS (Netherlands)

    Francisco, A.A.; Groen, M.A.; Jesse, A.; McQueen, J.M.

    2017-01-01

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a

  15. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates changes in neuronal response across four repetitions of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that habituation of the neuronal response shows up in repetitive audio-visual learning and that brain hemisphericity can be changed by…

  16. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced and a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  17. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Pollock, Sean; Tse, Regina; Martin, Darren

    2015-01-01

    This case report details the first liver cancer patient recruited to a clinical trial who underwent a course of stereotactic body radiation therapy utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed.

  18. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a... section 337 as to certain audiovisual components and products containing the same with respect to claims 1...

  19. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  20. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  1. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements on the Public Interest AGENCY: U.S... infringing audiovisual components and products containing the same, imported by Funai Corporation, Inc. of...

  2. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, when a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about multisensory integration processing in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
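
    The superadditive/additive/suppressive taxonomy used above is conventionally defined against an additive prediction: the multisensory response is compared with the sum of the unimodal responses. The sketch below illustrates this with per-trial spike counts and a simple tolerance band standing in for the paper's statistical tests.

```python
import numpy as np

def interaction_type(resp_a, resp_v, resp_av, tol=0.1):
    """Classify an audiovisual interaction from per-trial spike counts.

    Compares the mean audiovisual response with the sum of the mean
    unimodal responses; `tol` is an illustrative tolerance band, not a
    substitute for a proper significance test.
    """
    additive_prediction = np.mean(resp_a) + np.mean(resp_v)
    observed = np.mean(resp_av)
    if observed > additive_prediction * (1 + tol):
        return "superadditive"
    if observed < additive_prediction * (1 - tol):
        return "suppressive"
    return "additive"

print(interaction_type(np.array([4, 5, 6]), np.array([3, 4, 5]),
                       np.array([14, 15, 16])))   # -> superadditive
```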

  3. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with

  4. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  5. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  6. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide

  7. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  8. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    Science.gov (United States)

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  9. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

    Our media-rich, open-access Scalar e-book on the audiovisual essay practice (co-written with Thomas van den Berg) is available online: http://scalar.usc.edu/works/film-studies-in-motion. Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere

  10. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Science.gov (United States)

    2010-07-01

    ... standards for transfer apply to audiovisual records, cartographic, and related records? 1235.42 Section 1235... Standards § 1235.42 What specifications and standards for transfer apply to audiovisual records... elements that are needed for future preservation, duplication, and reference for audiovisual records...

  11. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have copyright...

  12. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual-only and auditory-only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e., different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli would be superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed the effects detected by the two techniques to be compared at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  13. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  14. Long-term music training modulates the recalibration of audiovisual simultaneity.

    Science.gov (United States)

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e., ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  15. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

    Full Text Available http://dx.doi.org/10.5007/2175-7968.2012v1n29p145   Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  16. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality with special application to audiovisual English Arabic translation.  Ample and thoroughly analysed examples have been provided to help in audiovisual English-Arabic translation decision-making. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible.  The same is equally applicable to audiovisual translation (AVT. The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when that text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text will be achieved when all aspects of cohesive devices are well accounted for pragmatically.  This combined with a good amount of psycholinguistic element will provide a text with optimal communicative value. Non-text is certainly devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in AV environment, as in any dialogue, often carries accidental knowledge.  This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound, and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce a final appropriate product.

  17. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    Full Text Available After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
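
    The direction of the Bayesian calibration effect can be sketched with a generic precision-weighted estimator (a minimal illustration, not the authors' exact model). If t_m is the sensed lag on a test trial, and the prior over lags has mean \mu_p and variance \sigma_p^2 while the measurement has variance \sigma_m^2, the perceived lag is

```latex
\[
  \hat{t} \;=\; w\,t_m \;+\; (1 - w)\,\mu_p ,
  \qquad
  w \;=\; \frac{\sigma_p^{2}}{\sigma_p^{2} + \sigma_m^{2}} .
\]
```

    During adaptation the prior mean \mu_p drifts toward the exposed lag, so a physically simultaneous test pair is perceived as lagged toward that exposure, and the point of subjective simultaneity shifts in the opposite direction to classical lag adaptation, matching the opposing shifts described above.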

  18. La regulación audiovisual: argumentos a favor y en contra The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    Full Text Available The article analyzes the effectiveness of audiovisual regulation and assesses the various arguments for and against the existence of regulatory broadcasting councils at the state level. The debate over the need for such a body in Spain still persists. Most European Union countries have equipped themselves with councils competent in this area, such as OFCOM in the United Kingdom or the CSA in France. In Spain, audiovisual regulation is limited to bodies of regional scope, such as the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also addressed in this article.

  19. Panorama del derecho audiovisual francés

    OpenAIRE

    Derieux, E. (Emmanuel)

    1999-01-01

    The article offers an overview of French audiovisual law up to 1998. Its basic characteristics are its complexity and instability, due in large part to an inability to absorb rapid technological change and to the continual modifications introduced by successive governments of different political orientations. It also reviews some of the most relevant current issues, from the regulation of corporate structures to audiovisual programmes and their content...

  20. Sistemas de Registro Audiovisual del Patrimonio Urbano (SRAPU)

    OpenAIRE

    Conles, Liliana Eva

    2006-01-01

    The SRAPU system is a film-based survey method designed to build an interactive database of the urban landscape. On this basis, it pursues the formulation of criteria organized in terms of flexibility and economic efficiency, efficient data handling, and the democratization of information. SRAPU is conceived as an audiovisual record of tangible and intangible heritage, both in its singularity and as a historical and natural whole. Its conception involves the pro...

  1. A Joint Audio-Visual Approach to Audio Localization

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2015-01-01

    Localization of audio sources is an important research problem, e.g., to facilitate noise reduction. In the recent years, the problem has been tackled using distributed microphone arrays (DMA). A common approach is to apply direction-of-arrival (DOA) estimation on each array (denoted as nodes), a...... time-of-flight cameras. Moreover, we propose an optimal method for weighting such DOA and range information for audio localization. Our experiments on both synthetic and real data show that there is a clear, potential advantage of using the joint audiovisual localization framework....
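
    The fusion of per-node DOA estimates with range information (e.g., from time-of-flight cameras) can be illustrated with a small weighted least-squares sketch; the node positions, measurements, weights, and residual definitions below are illustrative assumptions, not the authors' method.

```python
"""Hedged sketch of joint DOA + range fusion for 2-D audio localization."""
import numpy as np
from scipy.optimize import least_squares

# Each node: position, estimated bearing to the source (rad), estimated
# range, and weights encoding the confidence of each measurement.
nodes = [
    {"pos": np.array([0.0, 0.0]), "doa": 0.79, "rng": 2.9, "w_doa": 1.0, "w_rng": 0.5},
    {"pos": np.array([4.0, 0.0]), "doa": 2.36, "rng": 2.8, "w_doa": 1.0, "w_rng": 0.5},
    {"pos": np.array([2.0, 3.0]), "doa": -1.57, "rng": 1.1, "w_doa": 0.8, "w_rng": 0.7},
]

def residuals(xy):
    """Weighted angular and range mismatches between a candidate source
    position and every node's measurements."""
    res = []
    for n in nodes:
        d = xy - n["pos"]
        bearing = np.arctan2(d[1], d[0])
        # Wrap the angular error into (-pi, pi] before weighting.
        ang_err = np.arctan2(np.sin(bearing - n["doa"]), np.cos(bearing - n["doa"]))
        res.append(n["w_doa"] * ang_err)
        res.append(n["w_rng"] * (np.linalg.norm(d) - n["rng"]))
    return res

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print("estimated source position:", fit.x)   # close to (2, 2) for these data
```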

  2. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Estudo longitudinal da atenção compartilhada em crianças autistas não-verbais Longitudinal study of joint attention in non-verbal autistic children

    Directory of Open Access Journals (Sweden)

    Leila Sandra Damião Farah

    2009-12-01

    PURPOSE: to identify and characterize Joint Attention abilities of non-verbal autistic children through the observation of communicative behaviors. METHODS: the research involved 5 boys, between 5.9 and 8.6 years old, diagnosed with Autistic Disorder (DSM-IV, 2002), recorded on two occasions with a four-month interval. In the meantime, the children underwent a language therapy mediation based on Joint Attention stimulation. Each recording was 15 minutes long and involved one child or a group of 2-3 children with the therapist in non-directed and semi-directed interaction situations, at the school where they studied. We observed and registered behaviors regarding Joint Attention abilities. The material used involved percussion instruments. Data were analyzed in relation to time, interaction and interlocutor. RESULTS: the gaze behavior showed the greatest growth in each subject. Data analysis revealed that the subjects showed qualitative trends of evolution in Joint Attention abilities with important clinical meaning, although without statistical significance. Each subject showed characteristics and evolution of communicative behaviors regarding Joint Attention in an individualized manner. After the period of language therapy intervention, we observed quantitative behavioral growth in the 5 subjects, specifically in child-therapist interaction. CONCLUSIONS: the gaze behavior is an important step for the development of other behaviors toward Joint Attention. The adult-child interaction situation facilitates the appearance of communication and sharing behaviors. Language therapy focused on Joint Attention abilities seems to contribute positively to the communication development of autistic children.

  4. Encoding of Physics Concepts: Concreteness and Presentation Modality Reflected by Human Brain Dynamics

    OpenAIRE

    Lai, Kevin; She, Hsiao-Ching; Chen, Sheng-Chang; Chou, Wen-Chi; Huang, Li-Yu; Jung, Tzyy-Ping; Gramann, Klaus

    2012-01-01

    Previous research into working memory has focused on activations in different brain areas accompanying either different presentation modalities (verbal vs. non-verbal) or concreteness (abstract vs. concrete) of non-science concepts. Less research has been conducted investigating how scientific concepts are learned and further processed in working memory. To bridge this gap, the present study investigated human brain dynamics associated with encoding of physics concepts, taking both presentati...

  5. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    Full Text Available In this article we show how Channel Theory (Barwise and Seligman, 1997) can be used to model the process of information extraction performed by audiences of audio-visual content. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process works for each channel, we propose a method for representing all the informative values an agent can obtain from a content item using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation can reflect the evolution of the informative items through the evolution of audio-visual content.

  6. Automatic summarization of soccer highlights using audio-visual descriptors.

    Science.gov (United States)

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

    Automatic summary generation of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and later adequately combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting the shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
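
    A minimal sketch of the selection stage described above: per-shot audio-visual descriptor scores are combined with empirical weights into a relevance measure, and the highest-scoring shots fill the summary. The descriptor names, weights, and greedy selection rule are illustrative assumptions rather than the paper's tuned rules.

```python
"""Hedged sketch of descriptor-based highlight selection."""
from dataclasses import dataclass

@dataclass
class Shot:
    start: float                # seconds
    end: float
    audio_energy: float         # e.g., crowd/commentator excitement, 0..1
    cut_rate: float             # visual editing activity, 0..1
    goal_area_ratio: float      # fraction of frames near the goal mouth, 0..1

def relevance(shot: Shot, w_audio=0.5, w_cut=0.2, w_goal=0.3) -> float:
    """Combine low/mid-level audio-visual descriptors into one score."""
    return (w_audio * shot.audio_energy
            + w_cut * shot.cut_rate
            + w_goal * shot.goal_area_ratio)

def summarize(shots, target_duration=60.0):
    """Greedily pick the most relevant shots until the summary is full."""
    picked, used = [], 0.0
    for s in sorted(shots, key=relevance, reverse=True):
        length = s.end - s.start
        if used + length <= target_duration:
            picked.append(s)
            used += length
    return sorted(picked, key=lambda s: s.start)   # restore timeline order

shots = [Shot(0, 12, 0.9, 0.7, 0.8), Shot(30, 41, 0.2, 0.3, 0.1),
         Shot(60, 75, 0.8, 0.5, 0.9)]
print([(s.start, s.end) for s in summarize(shots)])
```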

  7. Audiovisual integration in children listening to spectrally degraded speech.

    Science.gov (United States)

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
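
    The adaptive procedure can be sketched as a 3-down/1-up staircase on the number of vocoder bands, a rule that converges near 79% correct; the step rule, band limits, and stopping criterion below are illustrative assumptions, not necessarily the study's exact implementation.

```python
"""Hedged sketch of a 3-down/1-up staircase on vocoder bands."""
import random

def run_staircase(prob_correct, start_bands=16, min_bands=1, max_bands=32,
                  n_reversals=8):
    """Adapt the number of vocoder bands; return the threshold estimate
    as the mean of the reversal points.

    prob_correct(bands) -> probability of a correct identification.
    """
    bands, correct_streak, last_dir = start_bands, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if random.random() < prob_correct(bands):      # simulated trial
            correct_streak += 1
            if correct_streak == 3:                    # 3 correct -> harder
                correct_streak, step = 0, -1
            else:
                continue                               # no change yet
        else:                                          # 1 error -> easier
            correct_streak, step = 0, +1
        if last_dir and step != last_dir:              # direction change
            reversals.append(bands)
        last_dir = step
        bands = min(max(bands + step, min_bands), max_bands)
    return sum(reversals) / len(reversals)

# Toy listener whose accuracy grows with spectral resolution.
print(run_staircase(lambda b: min(0.95, 0.3 + 0.04 * b)))
```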

  8. O potencial da imagem televisiva na sociedade da cultura audiovisual

    Directory of Open Access Journals (Sweden)

    Juliana L. M. F. Sabino

    Full Text Available Audiovisual culture has been steadily gaining ground, and technological advances contribute dramatically to its development and reach. This study takes audiovisual culture as its theme, and its research objective is to discuss the importance of images on television. To that end, we selected an example of television advertising observed in 2006, which inspired a critical reflection on the importance of hybrid languages on television, illustrating their interference in the production of meaning in the televised message. As a theoretical and methodological framework, we use Lúcia Santaella's conceptions of image and hybrid languages. From the analysis of the selected advertisement, we conclude that its constitution is more iconic than verbal, but that it falls within a dialogical conception, being constituted through a creative process of meaning production.

  9. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

    Full Text Available Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study focuses on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach evidences the polysemiotic nature of the translation process. In audiovisual translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles which did not successfully convey the compliment expressed in the source text; we also analyze the reasons for this, namely differences in register, Culture Specific Items and repetitions. These differences lead to a different portrayal/identity/perception of the main character between the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  10. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

    Full Text Available Second language acquisition cannot take place without having exposure to language input. With regard to this, the present research aimed at providing empirical evidence about the low and the upper-intermediate language learners' preferred type of audiovisual programs and language proficiency development outside the classroom. To this end, 60 language learners (30 low level and 30 upper-intermediate level) were asked to have exposure to their preferred types of audiovisual program(s) outside the classroom and keep a diary of the amount and the type of exposure. The obtained data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news more. To find out which language proficiency level could improve its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  11. Valores occidentales en el discurso publicitario audiovisual argentino

    Directory of Open Access Journals (Sweden)

    Isidoro Arroyo Almaraz

    2012-04-01

    Full Text Available This article develops an analysis of Argentine audiovisual advertising discourse. It seeks to identify the social values it communicates most prominently and their possible link to the values characteristic of postmodern Western society. For this purpose, the frequency of appearance of social values was analyzed in 28 advertisements from different advertisers. The "Seven/Seven" model (seven deadly sins and seven cardinal virtues) was used as the analytical framework, since traditional values are considered heirs of the virtues and sins that advertising uses to address consumption-related needs. Argentine audiovisual advertising promotes and encourages ideas related to the virtues and sins through the behaviour of the characters in audiovisual narratives. The results show a higher frequency of social values characterized as sins than of social values characterized as virtues, since advertising transforms sins into virtues that energize desire and favour consumption, reinforcing brand learning. Finally, on the basis of the results, the article reflects on the social uses and reach of advertising discourse.

  12. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

    Full Text Available Television can be a meeting place for sound and image in a device that makes the rhythm-image possible, extending Gilles Deleuze's theory of the image, proposed for cinema. It would simultaneously combine characteristics of the movement-image and the time-image, embodied in the construction of postmodern images in audiovisual products that are not necessarily narrative, yet popular. Films, video games, music videos and vignettes in which the music drives the images allow a more sensory reading. The audiovisual as music-image thus opens onto a new form of perception beyond the traditional textual one, the fruit of the interaction between rhythm, text and device. The time of moving images in the audiovisual is inevitably and primarily tied to sound. They aggregate non-narrative possibilities that are realized, most of the time, through the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Sem Destino (1969), Assassinos por Natureza (1994) and Corra Lola Corra (1998).

  13. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  14. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  15. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audiovisual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  16. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
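
    The critical stimulus manipulation, the same modulation frequency rendered with gradual versus abrupt transitions, can be sketched in a few lines; the sample rate, modulation frequency, and carrier below are illustrative assumptions.

```python
"""Hedged sketch of gradual (sine) vs. abrupt (square) audiovisual modulation."""
import numpy as np

fs, dur, f_mod = 44100, 2.0, 1.1          # sample rate, seconds, modulation Hz
t = np.arange(int(fs * dur)) / fs

sine_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))             # gradual changes
square_env = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)   # abrupt transients

# Apply the same envelope to a visual luminance profile and an auditory
# carrier so the two streams stay synchronized; only the square-wave
# version contains the abrupt events reported to drive efficient search.
carrier = np.sin(2 * np.pi * 500 * t)     # 500-Hz tone
audio_transient = square_env * carrier
audio_smooth = sine_env * carrier

n_transients = int(np.abs(np.diff(square_env)).sum())   # abrupt events per stream
print(n_transients)
```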

  17. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  18. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  19. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation was significantly different between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
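
    A standard way to quantify such audiovisual facilitation beyond mere statistical summation (a common analysis for redundant-signals data, not necessarily the authors' exact method) is Miller's race-model inequality, sketched here on simulated reaction times.

```python
"""Hedged sketch of a race-model test for redundant-signals facilitation.
RTs are in milliseconds; the distributions below are simulated."""
import numpy as np

def ecdf(samples, grid):
    """Empirical CDF of the RT samples evaluated on a time grid."""
    samples = np.sort(samples)
    return np.searchsorted(samples, grid, side="right") / len(samples)

def race_model_violation(rt_a, rt_v, rt_av, grid=np.arange(150, 800, 10)):
    """Positive values: the audiovisual RT CDF exceeds the Miller bound
    F_A(t) + F_V(t), indicating integration beyond statistical facilitation."""
    bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
    return ecdf(rt_av, grid) - bound

rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)            # auditory-only trials
rt_v = rng.normal(440, 60, 200)            # visual-only trials
rt_av = rng.normal(360, 50, 200)           # faster bimodal responses
print(race_model_violation(rt_a, rt_v, rt_av).max())
```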

  20. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  1. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  2. Addition of Audiovisual Feedback During Standard Compressions Is Associated with Improved Ability

    Directory of Open Access Journals (Sweden)

    Nicholas Asakawa

    2018-02-01

    Full Text Available Introduction: A benefit of in-hospital cardiac arrest is the opportunity for rapid initiation of "high-quality" chest compressions, defined by current American Heart Association (AHA) adult guidelines as a depth of 2-2.4 inches, full chest recoil, a rate of 100-120 per minute, and minimal interruptions with a chest compression fraction (CCF) ≥ 60%. The goal of this study was to assess the effect of audiovisual feedback on the ability to maintain high-quality chest compressions per the 2015 updated guidelines. Methods: Ninety-eight participants were randomized into four groups. Participants were randomly assigned to perform chest compressions with or without use of audiovisual feedback (+/− AVF). Participants were further assigned to perform either standard compressions with a ventilation ratio of 30:2, to simulate cardiopulmonary resuscitation (CPR) without an advanced airway, or continuous chest compressions, to simulate CPR with an advanced airway. The primary outcome measured was the ability to maintain high-quality chest compressions as defined by current 2015 AHA guidelines. Results: Overall comparisons between continuous and standard chest compressions (n = 98) showed no significant differences in chest compression dynamics (p's > 0.05). Overall comparisons between +/− AVF (n = 98) were significant for differences in average rate of compressions per minute (p = 0.0241) and the proportion of chest compressions within guideline rate recommendations (p = 0.0084). There was a significant difference in the proportion of high-quality chest compressions favoring AVF (p = 0.0399). Comparisons between chest compression strategy groups +/− AVF were significant for differences in compression dynamics favoring AVF (p's < 0.05). Conclusion: Overall, AVF is associated with greater ability to maintain high-quality chest compressions per the most recent AHA guidelines. Specifically, AVF was associated with a greater proportion of compressions within the ideal rate.
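
    Scoring compressions against the AHA targets quoted above (depth 2-2.4 inches, rate 100-120 per minute, CCF ≥ 60%) can be sketched from timestamped compression data; the data layout and the 2-second hands-off rule below are illustrative assumptions.

```python
"""Hedged sketch of compression-quality metrics from timestamped data."""
import numpy as np

def compression_metrics(times_s, depths_in, episode_s):
    """times_s: compression timestamps (s); depths_in: per-compression depth
    (inches); episode_s: total episode duration (s)."""
    rates = 60.0 / np.diff(times_s)                    # instantaneous rate/min
    in_rate = np.mean((rates >= 100) & (rates <= 120))
    in_depth = np.mean((depths_in >= 2.0) & (depths_in <= 2.4))
    # CCF: fraction of the episode spent compressing; gaps longer than
    # 2 s between compressions are treated as hands-off time here.
    gaps = np.diff(times_s)
    hands_off = gaps[gaps > 2.0].sum()
    ccf = 1.0 - hands_off / episode_s
    return {"pct_in_rate": in_rate, "pct_in_depth": in_depth, "ccf": ccf}

t = np.cumsum(np.full(200, 60 / 110.0))                # ~110 compressions/min
d = np.random.default_rng(1).normal(2.2, 0.15, 200)    # depths in inches
print(compression_metrics(t, d, episode_s=t[-1] + 1))
```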

  3. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

    The authors hypothesized that an audiovisual slide presentation providing treatment information about the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ(2) tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group comprised 20 men and 5 women; the written informed group comprised 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and potential trismus, and had lower self-reported anxiety scores than the control group 1 week after surgery. An audiovisual slide presentation can thus improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar. Copyright © 2015

  4. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    Full Text Available In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools necessary for the translator to perform his/her task satisfactorily, as well as pointing to future perspectives, we list sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  5. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    It has been suggested that Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical..., which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing...

  7. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure...... of audiovisual integration) for fusions was comparable to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. This article is protected...

  8. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual mounting narrative for the interactive documentary film," which defends the thesis that there are interactive features in the audio and video editing of film, even as agents that produce interactivity. The search for interactive audiovisual formats is present in international investigations, but under a technological gaze. The paper proposes possible formats for interactive audiovisual production in film, video, television, computers and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  9. La comunicación corporativa audiovisual: propuesta metodológica de estudio

    OpenAIRE

    Lorán Herrero, María Dolores

    2016-01-01

    This research revolves around two concepts, Audiovisual Communication and Corporate Communication, disciplines that affect organizations and that become articulated in such a way that they give rise to Audiovisual Corporate Communication, the concept proposed in this thesis. A classification and definition of the formats that organizations use for their communication is carried out. The aim is to be able to analyze any corporate audiovisual document in order to verify whether the l...

  10. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    Full Text Available The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA: displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  11. The Digital Turn in the French Audiovisual Model

    Directory of Open Access Journals (Sweden)

    Olivier Alexandre

    2016-07-01

    Full Text Available This article deals with the digital turn in the French audiovisual model. An organizational and legal system has evolved with changing technology and economic forces over the past thirty years. The high-income television industry served as the key element during the 1980s to compensate for a shifting value economy from movie theaters to domestic screens and personal devices. However, the growing competition in the TV sector and the rise of tech companies have initiated a disruption process. A challenged French conception copyright, the weakened position of TV channels and the scaling of content market all now call into question the sustainability of the French model in a digital era.

  12. Audiovisual semantic congruency during encoding enhances memory performance.

    Science.gov (United States)

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  13. A simple and efficient method to enhance audiovisual binding tendencies

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2017-04-01

    Full Text Available Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain's tendency to bind in spatial perception is plastic, (2) that it can change following brief exposure to simple audiovisual stimuli, and (3) that exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.

  14. Handicrafts production: documentation and audiovisual dissemination as sociocultural appreciation technology

    Directory of Open Access Journals (Sweden)

    Luciana Alvarenga

    2016-01-01

    Full Text Available The paper presents the results of a scientific research, technology and innovation project in the creative economy sector, conducted from January 2014 to January 2015, that aimed to document and publicize the artisans and handicraft production of Vila de Itaúnas, ES, Brazil. The process began with initial conversations, followed by the planning and running of participatory workshops for audiovisual documentation and dissemination of handicraft production and its relation to biodiversity and local culture. The initial objective was to promote spaces for the expression and diffusion of knowledge among and for the local population, while also reaching a regional, state and national public. Throughout the process, it was found that the participatory workshops and the collective production of a website for publicizing practices and products contributed to the development and sociocultural recognition of artisans and craftwork in the region.

  15. Moedor de Pixels : interfaces, interações e audiovisual

    OpenAIRE

    Vieira, Jackson Marinho

    2016-01-01

    Moedor de Pixels: interfaces, interações e audiovisual is a theoretical and practical research project on works of art that employ audiovisual and computational media in contexts where audience participation and interaction become the center of the aesthetic experience. The study suggests that video art brought new procedures to video technology that gave impetus to more extensive explorations in the field of interactive media art. The research also highlights how the inclusion of digital media provides...

  16. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah eJessen

    2013-07-01

    Full Text Available Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others’ emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes auditory information, and this visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  17. On the role of crossmodal prediction in audiovisual emotion perception.

    Science.gov (United States)

    Jessen, Sarah; Kotz, Sonja A

    2013-01-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes auditory information, and this visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, cross-modal prediction is a crucial factor in our understanding of multisensory emotion perception.

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions, with larger N1 suppression for spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira

    2008-06-01

    Full Text Available This work shows the development of audiovisual webjournalism on the Brazilian Internet. This paper, based on the analysis of UOL News on UOL TV – pioneer format on commercial web television - and of UERJ Online TV – first on-line university television in Brazil - investigates the changes in the gathering, production and dissemination processes of audiovisual news when it starts to be transmitted through the web. Reflections of authors such as Herreros (2003), Manovich (2001) and Gosciola (2003) are used to discuss the construction of audiovisual narrative on the web. To comprehend the current changes in today’s webjournalism, we draw on the concepts developed by Fidler (1997), Bolter and Grusin (1998), Machado (2000), Mattos (2002) and Palacios (2003). We may conclude that the organization of narrative elements in cyberspace makes for the efficiency of journalistic messages, while establishing the basis of a particular language for audiovisual news on the Internet.

  20. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez

    2012-07-01

    Full Text Available Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today’s digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass media omnipresence. Neither the education system, nor civic associations, nor the media themselves have promoted the audiovisual skills needed to make people critically competent viewers of media. This study aims to provide an updated conceptualization of the “audiovisual skill” in this digital environment and transpose it onto a specific interventional environment, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments and devise training programmes for the various sectors involved.

  1. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  2. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    Science.gov (United States)

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  3. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception.

    Science.gov (United States)

    Buchan, Julie N; Paré, Martin; Munhall, Kevin G

    2008-11-25

    During face-to-face conversation the face provides auditory and visual linguistic information, and also conveys information about the identity of the speaker. This study investigated behavioral strategies involved in gathering visual information while watching talking faces. The effects of varying talker identity and varying the intelligibility of speech (by adding acoustic noise) on gaze behavior were measured with an eyetracker. Varying the intelligibility of the speech by adding noise had a noticeable effect on the location and duration of fixations. When noise was present subjects adopted a vantage point that was more centralized on the face by reducing the frequency of the fixations on the eyes and mouth and lengthening the duration of their gaze fixations on the nose and mouth. Varying talker identity resulted in a more modest change in gaze behavior that was modulated by the intelligibility of the speech. Although subjects generally used similar strategies to extract visual information in both talker variability conditions, when noise was absent there were more fixations on the mouth when viewing a different talker every trial as opposed to the same talker every trial. These findings provide a useful baseline for studies examining gaze behavior during audiovisual speech perception and perception of dynamic faces.

  4. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    Science.gov (United States)

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  5. Contribution of Prosody in Audio-Visual Integration to Emotional Perception of Virtual Characters

    Directory of Open Access Journals (Sweden)

    Ekaterina Volkova

    2011-10-01

    Full Text Available Recent technology provides us with realistic looking virtual characters. Motion capture and elaborate mathematical models supply data for natural looking, controllable facial and bodily animations. With the help of computational linguistics and artificial intelligence, we can automatically assign emotional categories to appropriate stretches of text for a simulation of those social scenarios where verbal communication is important. All this makes virtual characters a valuable tool for creation of versatile stimuli for research on the integration of emotion information from different modalities. We conducted an audio-visual experiment to investigate the differential contributions of emotional speech and facial expressions on emotion identification. We used recorded and synthesized speech as well as dynamic virtual faces, all enhanced for seven emotional categories. The participants were asked to recognize the prevalent emotion of paired faces and audio. Results showed that when the voice was recorded, the vocalized emotion influenced participants' emotion identification more than the facial expression. However, when the voice was synthesized, facial expression influenced participants' emotion identification more than vocalized emotion. Additionally, individuals did worse on identifying either the facial expression or vocalized emotion when the voice was synthesized. Our experimental method can help to determine how to improve synthesized emotional speech.

  6. Ensenyar amb casos audiovisuals en l'entorn virtual: metodologia i resultats

    OpenAIRE

    Triadó i Ivern, Xavier Ma.; Aparicio Chueca, Ma. del Pilar (María del Pilar); Jaría Chacón, Natalia; Gallardo-Gallardo, Eva; Elasri Ejjaberi, Amal

    2010-01-01

    This booklet aims to lay out and disseminate the foundations of a methodology for launching learning experiences with audiovisual cases in the virtual campus environment. To this end, a methodological protocol has been defined for using audiovisual cases within the virtual campus environment in different courses.

  7. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

    Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...

  8. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors.

  9. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on the compression artifacts. However, compression is only one of the numerous factors influencing the perception...... addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications...

  10. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment

    Directory of Open Access Journals (Sweden)

    Simone de Sousa Paiva

    2010-02-01

    Full Text Available Non-verbal communication is predominant in the mother-child relationship. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication, such as: paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted inferring that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother’s infection can be a determining factor for the formation of mothers’ strong attachment to their children after birth.

  11. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  12. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
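
    As a concrete illustration of the distinction drawn in this abstract, the sketch below computes one amplitude-based metric (beta-band envelope correlation) and one phase-based metric (phase-locking value) for a pair of sources. The surrogate signals, function names and band limits are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_band(x, fs, lo=15.0, hi=30.0, order=4):
    """Band-pass filter a 1-D signal into the beta band."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def envelope_correlation(x, y, fs):
    """Amplitude-based metric: Pearson correlation of beta-band envelopes."""
    ex = np.abs(hilbert(beta_band(x, fs)))
    ey = np.abs(hilbert(beta_band(y, fs)))
    return np.corrcoef(ex, ey)[0, 1]

def phase_locking_value(x, y, fs):
    """Phase-based metric: consistency of the beta-band phase difference."""
    px = np.angle(hilbert(beta_band(x, fs)))
    py = np.angle(hilbert(beta_band(y, fs)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

# Two noisy surrogate sources sharing a slow amplitude envelope:
fs = 600.0
t = np.arange(0, 10, 1 / fs)
env = 1 + 0.5 * np.sin(2 * np.pi * 0.3 * t)          # shared slow envelope
x = env * np.sin(2 * np.pi * 22 * t) + 0.5 * np.random.randn(t.size)
y = env * np.sin(2 * np.pi * 22 * t + np.pi / 3) + 0.5 * np.random.randn(t.size)
print(envelope_correlation(x, y, fs), phase_locking_value(x, y, fs))
```

    In such a pair, the envelope correlation picks up the shared amplitude modulation even when the added noise degrades the phase measure, which mirrors the dissociation between metric families reported in the record.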

  13. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  14. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
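
    For reference, the MLE model of optimal integration referred to here makes a standard pair of predictions (following Ernst & Banks, 2002): the bimodal estimate is a reliability-weighted average of the unimodal estimates, and the bimodal variance never exceeds the best unimodal variance. With unimodal variances taken from the psychometric fits:

```latex
% Reliability-weighted average of the unimodal estimates:
\hat{S}_{AV} = w_V \hat{S}_V + w_A \hat{S}_A, \qquad
w_V = \frac{1/\sigma_V^2}{\,1/\sigma_V^2 + 1/\sigma_A^2\,}, \quad w_A = 1 - w_V
% Predicted bimodal variance, never worse than the best unimodal variance:
\sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2}
\;\le\; \min(\sigma_A^2, \sigma_V^2)
```

    Under this model, larger unimodal variances in amblyopia predict proportionally poorer (but still optimal) bimodal precision, which is exactly the pattern the abstract reports.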

  15. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on the diversity of audiovisual integration between younger and older adults. However, consecutive trends in audiovisual integration throughout life are still unclear. In the present study, to clarify audiovisual integration characteristics in middle-aged adults, we instructed younger and middle-aged adults to conduct an auditory/visual stimuli discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented on the left or right hemispace of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults. Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration was attenuated in middle-aged adults and further confirmed age-related decline in information processing.
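
    Onset-time and peak values of this kind are typically derived from Miller's race-model inequality: the percentage by which the cumulative reaction-time distribution for audiovisual stimuli exceeds the bound predicted by the unimodal distributions. The abstract does not spell out the procedure, so the following is a minimal sketch under that assumption, with surrogate data throughout.

```python
import numpy as np

def cdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated at each probe time (ms)."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_gain(rt_a, rt_v, rt_av, t_grid):
    """Percentage points by which P(RT <= t | AV) exceeds the race-model bound."""
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return 100.0 * (cdf(rt_av, t_grid) - bound)

rng = np.random.default_rng(0)
t_grid = np.arange(100, 801, 10)                 # probe times, 100-800 ms
rt_a = rng.normal(430, 60, 200)                  # surrogate unimodal RTs
rt_v = rng.normal(450, 60, 200)
rt_av = rng.normal(380, 55, 200)                 # faster bimodal RTs
gain = race_gain(rt_a, rt_v, rt_av, t_grid)
onset = t_grid[np.argmax(gain > 0)]              # first probe time with gain > 0
print(f"onset ~ {onset} ms, peak ~ {gain.max():.2f}%")
```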

  16. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    Directory of Open Access Journals (Sweden)

    Sevtap GÜNAY KÖPRÜLÜ

    2016-04-01

    Full Text Available Audiovisual translation, dating back to the silent film era, is a special translation method developed for the translation of movies and programmes shown on TV and in cinemas. Therefore, in the beginning, the term “film translation” was used for this type of translation. Due to the growing number of audiovisual texts, it has attracted the interest of scientists and has been assessed within translation studies. In Turkey, too, the concept of film translation was initially used for this area, but recently the concept of audiovisual translation has been adopted, especially in the scientific field, since it encompasses not only films but all audiovisual communication tools. In this study, the aspects that the translator should take into consideration during the audiovisual translation process are analyzed within the framework of the source text, the translated text, the film, and technical knowledge and expertise. The study shows that, apart from linguistic and paralinguistic factors, there are further factors that must be considered carefully, as they can influence the quality of the translation, and that these factors require technical knowledge in translation. In this sense, audiovisual translation is approached from a different angle than in previous research.

  17. Gestión documental de la información audiovisual deportiva en las televisiones generalistas

    Documentary management of sports audiovisual information in generalist television

    OpenAIRE

    Jorge Caldera Serrano; Felipe Zapico Alonso

    2005-01-01

    Sports audiovisual information management in the Documentary Information Systems of national, regional and local television networks is analyzed. To this end, the documentary chain that sports audiovisual information follows is traced, so as to analyze each of its parameters, thereby offering a series of recommendations and standards for the creation of the sports audiovisual record. Evidently, sports audiovisual documentation...

  18. Crossmodal and incremental perception of audiovisual cues to emotional speech.

    Science.gov (United States)

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues to emotion from a speaker's face relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests with video clips of emotional utterances collected via a variant of the well-known Velten method. More specifically, we recorded speakers who displayed positive or negative emotions, which were congruent or incongruent with the (emotional) lexical content of the uttered sentence. To test this, we conducted two experiments. The first experiment is a perception experiment in which Czech participants, who do not speak Dutch, rate the perceived emotional state of Dutch speakers in a bimodal (audiovisual) or a unimodal (audio- or vision-only) condition. It was found that incongruent emotional speech leads to significantly more extreme perceived emotion scores than congruent emotional speech, where the difference between congruent and incongruent emotional speech is larger for the negative than for the positive conditions. Interestingly, the largest overall differences between congruent and incongruent emotions were found for the audio-only condition, which suggests that posing an incongruent emotion has a particularly strong effect on the spoken realization of emotions. The second experiment uses a gating paradigm to test the recognition speed for various emotional expressions from a speaker's face. In this experiment participants were presented with the same clips as in experiment I, but this time vision-only. The clips were shown in successive segments (gates) of increasing duration. Results show that participants are surprisingly accurate in their recognition of the various emotions, as they already reach high recognition scores in the first gate (after only 160 ms). Interestingly, the recognition scores

  19. Desarrollo de una prueba de comprensión audiovisual

    Directory of Open Access Journals (Sweden)

    Casañ Núñez, Juan Carlos

    2016-06-01

    Full Text Available This article is part of a doctoral research project studying the use of audiovisual comprehension questions embedded in the video image as subtitles and synchronized with the relevant video fragments. A theoretical framework describing this technique (Casañ Núñez, 2015b) and an example within a teaching sequence (Casañ Núñez, 2015a) have been published previously. The present paper details the process of planning, designing and piloting an audiovisual comprehension test with two variants, which will be administered together with other instruments in quasi-experimental studies with control and treatment groups. The main aims are to find out whether subtitled questions facilitate comprehension, whether they increase the time students spend looking toward the screen, and to learn the treatment group's opinion of this technique. Six studies were carried out during the piloting phase. Forty-one students of Spanish as a foreign language (ELE) took part in the final pilot study (twenty-two in the control group and nineteen in the treatment group). Observation of the informants during test administration and the subsequent scoring suggested that the instructions on the structure of the test, the presentations of the input texts, the explanation of how the subtitled questions worked for the experimental group, and the wording of the items were comprehensible. The data from the two variants of the instrument were submitted to facility, discrimination, reliability and descriptive analyses. Correlations between the tests and two tasks from a listening comprehension exam were also computed. The results showed that the two versions of the test were ready to be administered.

  20. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    Science.gov (United States)

    I Karipidis, Iliana; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. In correspondence, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short training session initiates audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and the phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

  1. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift, 2012, Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al., 2012, Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  2. Prácticas de producción audiovisual universitaria reflejadas en los trabajos presentados en la muestra audiovisual universitaria Ventanas 2005-2009

    Directory of Open Access Journals (Sweden)

    Maria Urbanczyk

    2011-01-01

    Full Text Available This article presents the results of research on university audiovisual production in Colombia, based on the works submitted to the Ventanas university audiovisual showcase from 2005 to 2009. The study of these works sought to cover as completely as possible the audiovisual production process carried out by university students, from the birth of the idea to the final product, its circulation and socialization. The most recurrent themes were found to be violence and feelings, reflected through different genres, aesthetic treatments and conceptual approaches. Given the absence of research legitimizing the knowledge produced in the classroom in the audiovisual field in Colombia, this study aims to open a path for demonstrating the contribution young people make to the consolidation of a national narrative and to the preservation of the country's memory.

  3. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
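
    The residual-motion measure defined in this abstract, the standard deviation of the respiratory signal within the gating window, can be sketched as follows. Displacement-based, end-exhale gating and the surrogate breathing trace are assumptions for illustration only.

```python
import numpy as np

def residual_motion(displacement, duty_cycle):
    """Standard deviation of the respiratory signal inside the gating window.

    Displacement-based, end-exhale gating is assumed: the window admits the
    lowest-displacement fraction of samples given by the duty cycle.
    """
    threshold = np.quantile(displacement, duty_cycle)
    gated = displacement[displacement <= threshold]
    return np.std(gated)

# Surrogate breathing trace: ~0.2 Hz quasi-sinusoidal motion plus noise.
fs = 30.0                                        # samples per second
t = np.arange(0, 120, 1 / fs)
trace = 0.5 * (1 - np.cos(2 * np.pi * 0.2 * t)) + 0.05 * np.random.randn(t.size)
for dc in (0.3, 0.5, 0.7):
    print(f"duty cycle {dc:.0%}: residual motion = {residual_motion(trace, dc):.3f}")
```

    Widening the duty cycle admits samples farther from end-exhale, so residual motion grows slowly up to moderate duty cycles and then sharply, consistent with the 30-50% recommendation in the abstract.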

  4. The Audio-Visual Services in Fifteen African Countries. Comparative Study on the Administration of Audio-Visual Services in Advanced and Developing Countries. Part Four. First Edition.

    Science.gov (United States)

    Jongbloed, Harry J. L.

    As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…

  5. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of the visual image. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important to perceive the emotional valence in music and that treating musical art as a purely auditory event might lose the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to the CD at home.

  6. Teleconferences and Audiovisual Materials in Earth Science Education

    Science.gov (United States)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacán 04510, México. As stated in the special session description, 21st century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases these resources may go largely unused, and a number of factors may be cited, such as logistic problems, restricted internet and telecommunication service access, misinformation, etc. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located in the central UNAM campus and campuses in other States. The use of teleconference in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses delivered by teleconference require learning and effort from students and teachers without physical contact, but both have access to multimedia to support their presentations. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of the natural phenomena integral to the Earth Sciences. Cooperation with international partnerships providing access to new materials, experiences and field practices will greatly add to our efforts. We will present specific examples of the experiences that we have had at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  7. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task

    Directory of Open Access Journals (Sweden)

    Zhi Zou

    2017-11-01

    Full Text Available Multisensory integration is an essential process that people employ daily, from conversing in social gatherings to navigating the nearby environment. The aim of this study was to investigate the impact of aging on the modulation of multisensory integrative processes using event-related potentials (ERPs), with the validity of the study improved by including “noise” in the contrast conditions. Older and younger participants were involved in perceiving visual and/or auditory stimuli that contained spatial information. The participants responded by indicating the spatial direction (far vs. near and left vs. right) conveyed in the stimuli using different wrist movements. Electroencephalograms (EEGs) were captured in each task trial, along with the accuracy and reaction time of the participants’ motor responses. Older participants showed a greater extent of behavioral improvement in the multisensory (as opposed to unisensory) condition compared to their younger counterparts. Older participants were found to have a fronto-centrally distributed super-additive P2, which was not the case for the younger participants. The P2 amplitude difference between the multisensory condition and the sum of the unisensory conditions was found to correlate significantly with performance on spatial discrimination. The results indicated that the age-related effect modulated the integrative process at the perceptual and feedback stages, particularly the evaluation of auditory stimuli. Audiovisual (AV) integration may also serve a functional role during spatial-discrimination processes to compensate for the compromised attention function caused by aging.
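
    The super-additivity contrast reported here compares the multisensory ERP against the sum of the unisensory ERPs, AV - (A + V), within the P2 window. A minimal sketch follows; the 150-250 ms window, array names and surrogate waveforms are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

fs = 500.0
times = np.arange(-0.1, 0.5, 1 / fs)             # epoch time axis (s)

def p2_superadditivity(erp_av, erp_a, erp_v, times, win=(0.15, 0.25)):
    """Mean of AV - (A + V) in the P2 window; > 0 indicates super-additivity."""
    diff = erp_av - (erp_a + erp_v)
    mask = (times >= win[0]) & (times <= win[1])
    return diff[mask].mean()

# Surrogate ERPs: a P2-like positivity that is larger for AV than A + V.
p2 = np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
erp_a, erp_v = 1.0 * p2, 0.8 * p2
erp_av = 2.2 * p2                                # super-additive response
print(p2_superadditivity(erp_av, erp_a, erp_v, times))
```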

  8. Nuevas pantallas y política audiovisual

    Directory of Open Access Journals (Sweden)

    Francisco Sierra Caballero

    2016-11-01

    Full Text Available The war of the screens today marks the breakdown of a television order in transition toward a complex post-Marconi-Galaxy ecology, based on new habits of consumption and of life. A political problem, without any doubt, if we understand that Communication is a Science of the Common. A simplistic interpretation of the future of the audiovisual tends to emphasize only technological transformations. Certainly, changes in equipment and the digital revolution are disruptive factors for the cultural system that must be taken into account because of their relevance. Yet, we insist, the act of viewing, the discretionary nature of the indiscreet window, confronts us with the ethical and political universe of mediation as social reproduction. For technology is not neutral, nor is communication a simple instrument of transmission.

  9. Audiovisual speech perception development at varying levels of perceptual processing.

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  10. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most tests have been conducted within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.

  11. Modelling audiovisual integration of affect from videos and music.

    Science.gov (United States)

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
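
    Both models described here are instances of the weighted-averaging (information-integration) form, in which the rated affect is a weighted mean of an initial state and the two modality values. A sketch of that form follows; the exact parameterization of the five- and three-parameter versions is not given in the abstract, so the annotations are assumptions.

```latex
% Weighted-averaging form underlying both fitted models:
%   s_0: initial affective state, s_m, s_v: music and video values.
R \;=\; \frac{w_0 s_0 + w_m s_m + w_v s_v}{w_0 + w_m + w_v}
% Constant-weight model: w_m and w_v are fixed parameters.
% Differential-weight model: the weights vary with the component's value
% (here, increasing as valence becomes more negative, with w_v > w_m).
```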

  12. TDT y servicio público. Retos del audiovisual iberoamericano

    Directory of Open Access Journals (Sweden)

    Francisco Sierra Caballero

    2011-03-01

    Full Text Available How viable are public media in Latin America? Is public broadcasting taking on the challenges of the Information Society with any guarantee of success? What sense does it make today to defend the audiovisual public service in the face of the technological convergence led by the cultural industries and private operators? These are the questions this article attempts to answer through an analysis of the situation of public media in the region. Three major challenges are identified if public broadcasting is to remain a plausible path: cultural policies, the opening of the public sphere and national democracy, and minority access and cultural pluralism.

  13. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
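    For readers unfamiliar with this analysis, here is a minimal sketch (not from the paper) of the (AV − V) versus A comparison on synthetic single-trial waveforms; the window limits, amplitudes, and noise levels are illustrative assumptions.

        import numpy as np

        # Sketch of the (AV - V) vs A comparison behind auditory N1 suppression;
        # data are synthetic placeholders (trials x time, 1 kHz sampling).
        rng = np.random.default_rng(0)
        t = np.arange(-0.1, 0.4, 0.001)                   # seconds around sound onset
        def erp(amplitude):                               # toy N1-like negative deflection + noise
            return amplitude * -np.exp(-0.5*((t-0.1)/0.02)**2) + rng.normal(0, 0.2, (60, t.size))

        A  = erp(2.0)    # auditory-only trials
        V  = erp(0.3)    # visual-only trials
        AV = erp(1.6)    # audiovisual trials (suppressed auditory response)

        av_minus_v = AV.mean(0) - V.mean(0)               # estimated auditory part of AV
        n1 = (t > 0.08) & (t < 0.12)                      # assumed N1 window
        # Negative value: the N1 in AV - V is smaller than in A, i.e. suppression.
        print("N1 suppression:", A.mean(0)[n1].mean() - av_minus_v[n1].mean())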

  14. Situación actual de la traducción audiovisual en Colombia

    Directory of Open Access Journals (Sweden)

    Jeffersson David Orrego Carmona

    2010-05-01

    Full Text Available Objectives: this article has two objectives: to outline the current state of the audiovisual translation market in Colombia and to highlight the importance of developing studies in this area. Method: the methodology included reviewing the literature on the subject, surveying different groups involved in audiovisual translation, and analyzing the results. Results: these showed a general lack of awareness of this work and revealed the preferences of the surveyed groups regarding audiovisual translation modalities; a marked preference for subtitling was observed, for reasons particular to each group. Conclusions: Colombian translators need training in audiovisual translation to meet market demands, and more in-depth studies focused on the development of audiovisual translation in Colombia are needed.

  15. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    Science.gov (United States)

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners, and between participants who are easily visually distracted and those who are not. Two previously conducted laboratory experiments were then re-analyzed: the first focused on self-reported noise annoyance in a living room context, the second on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second, the overall appraisal of walking across a bridge was influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier was used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.

  16. Audiovisual materials are effective for enhancing the correction of articulation disorders in children with cleft palate.

    Science.gov (United States)

    Pamplona, María Del Carmen; Ysunza, Pablo Antonio; Morales, Santiago

    2017-02-01

    Children with cleft palate frequently show speech disorders known as compensatory articulation. Compensatory articulation requires a prolonged period of speech intervention that should include reinforcement at home. However, relatives frequently do not know how to work with their children at home. This study examined whether audiovisual materials specially designed to complement speech pathology treatment in children with compensatory articulation can be effective for stimulating articulation practice at home and, consequently, enhancing speech normalization in children with cleft palate. Eighty-two patients with compensatory articulation were studied. Patients were randomly divided into two groups. Both groups received speech pathology treatment aimed at correcting articulation placement. In addition, patients in the active group received a set of audiovisual materials to be used at home, and parents were instructed in strategies and ideas for using the materials with their children. Severity of compensatory articulation was compared at the onset and at the end of the speech intervention. After the speech therapy period, the group using audiovisual materials at home demonstrated significantly greater improvement in articulation than the patients receiving on-site speech pathology treatment without audiovisual supporting materials. These results suggest that audiovisual materials specially designed for practicing adequate articulation placement at home can be effective in reinforcing and enhancing speech pathology treatment of patients with cleft palate and compensatory articulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    Science.gov (United States)

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD showed significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups, brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that, in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  18. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detecting awareness in patients with DOC. However, these patients have a much lower capability of using BCIs than healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the cued target stimuli. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses to target and nontarget stimuli and thus improved the performance of the audiovisual BCI. The system was then applied to detect awareness in seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  19. A pilot study of audiovisual family meetings in the intensive care unit.

    Science.gov (United States)

    de Havenon, Adam; Petersen, Casey; Tanana, Michael; Wold, Jana; Hoesch, Robert

    2015-10-01

    We hypothesized that virtual family meetings in the intensive care unit with conference calling or Skype videoconferencing would result in increased family member satisfaction and more efficient decision making. This is a prospective, nonblinded, nonrandomized pilot study. A 6-question survey was completed by family members after family meetings, some of which used conference calling or Skype by choice. Overall, 29 (33%) of the completed surveys came from audiovisual family meetings vs 59 (67%) from control meetings. The survey data were analyzed using hierarchical linear modeling, which did not find any significant group differences in satisfaction between the audiovisual meetings and controls. There was no association between the audiovisual intervention and withdrawal of care (P = .682) or overall hospital length of stay (z = 0.885, P = .376). Although we do not report benefit from an audiovisual intervention, these results are preliminary and heavily influenced by notable limitations of the study. Because the intervention proved feasible in this pilot study, audiovisual and social media intervention strategies warrant additional investigation, given their unique ability to facilitate communication among family members in the intensive care unit. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Non-verbal communication: aspects observed during nursing consultations with blind patients

    Directory of Open Access Journals (Sweden)

    Cristiana Brasil de Almeida Rebouças

    2007-03-01

    Full Text Available Exploratory-descriptive study of non-verbal communication between nurses and blind patients during nursing consultations with diabetic patients, based on Hall's theoretical framework. Data were collected by filming the consultations; the recordings were analyzed every fifteen seconds, totaling 1,131 non-verbal communication moments. The analysis shows intimate distance (91.0%) and a seated posture (98.3%); no contact occurred in 83.3% of the interactions. Emblematic gestures were present, including hand movements (67.4%), gaze averted from the interlocutor (52.8%), and gaze centered on the interlocutor (44.4%). In all recordings, considerable interference occurred at the moment of nurse-patient interaction. Nurses need to know and deepen the study of non-verbal communication and to adapt its use to the type of patient attended during consultations.

  1. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging has focused on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged (50-60 years) adults, with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. By contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood the recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy towards more visually dominated responses.
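    The two visual-contribution measures contrasted here are often operationalized as follows; a small hypothetical sketch (the paper's exact definitions may differ).

        # Assumed, commonly used operationalizations of the two measures.
        def av_benefit(p_av, p_a):
            """Normalized gain of adding vision over audition alone
            (Sumby-and-Pollack-style); p values are proportions correct,
            p_a must be < 1."""
            return (p_av - p_a) / (1.0 - p_a)

        def visual_influence(n_visually_consistent, n_total):
            """Share of responses consistent with the visual signal."""
            return n_visually_consistent / n_total

        print(av_benefit(p_av=0.80, p_a=0.50))        # 0.6 of the possible improvement
        print(visual_influence(42, 120))              # 0.35 of responses follow vision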

  2. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    Science.gov (United States)

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  4. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  5. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Science.gov (United States)

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
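    A minimal sketch of how an isolation point can be computed from gated responses, assuming the common definition used in gating studies (the shortest gate at which the answer is correct and remains correct at all longer gates); gate durations and responses below are made up.

        # Sketch of computing an isolation point (IP) from a gating run.
        def isolation_point(gates_ms, correct):
            """gates_ms: ascending gate durations; correct: bool per gate."""
            ip = None
            for gate, ok in zip(gates_ms, correct):
                if ok and ip is None:
                    ip = gate            # tentative IP at first correct response
                elif not ok:
                    ip = None            # reset: identification did not stay correct
            return ip                    # None if identification never stabilized

        print(isolation_point([100, 150, 200, 250, 300],
                              [False, True, False, True, True]))  # -> 250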

  6. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Science.gov (United States)

    2010-07-01

    36 CFR 1256.96, within the subpart United States Information Agency Audiovisual Materials in the National Archives of the United States, addresses what provisions apply to the transfer of USIA audiovisual records to the National Archives of the United States.

  7. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Science.gov (United States)

    2010-07-01

    36 CFR 1256.98, within the subpart United States Information Agency Audiovisual Materials in the National Archives of the United States, addresses whether and how one can get access to and obtain copies of USIA audiovisual records transferred to the National Archives of the United States.

  8. [From oral history to the research film: the audiovisual as a tool of the historian].

    Science.gov (United States)

    Mattos, Hebe; Abreu, Martha; Castro, Isabel

    2017-01-01

    An analytical essay on the process of image production, audiovisual archive formation, analysis of sources, and creation of the filmic narrative for the four historiographic films that make up the DVD set Passados presentes (Present pasts), from the Oral History and Image Laboratory of Universidade Federal Fluminense (Labhoi/UFF). Drawing on excerpts from the Labhoi audiovisual archive and the finished films, the article analyzes: how the research problem (the memory of slavery and the legacy of the slave song in the agrofluminense region) led us to produce images in a research situation; the analytical shift in relation to the cinematographic documentary and the ethnographic film; and the specificities of revisiting an audiovisual collection in light of newly formulated research problems.

  9. A economia do audiovisual no contexto contemporâneo das Cidades Criativas

    Directory of Open Access Journals (Sweden)

    Paulo Celso da Silva

    2012-12-01

    Full Text Available This paper addresses the audiovisual economy in cities with "creative" status. More than an adjective, it is within the activities linked to communication (the audiovisual among them), culture, fashion, architecture, and handicrafts or local artisanship that such cities have renewed their mode of accumulation, reorganizing public and private spaces. The cities of Barcelona, Berlin, New York, Milan and São Paulo are representative cases for analyzing cities in relation to the development of the audiovisual sector, drawing on official data that support a more realistic understanding of each of them.

  10. Sincronía entre formas sonoras y formas visuales en la narrativa audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. José Alfredo Sánchez Ríos

    1999-01-01

    Full Text Available Where should researchers position themselves to produce work that brings deeper understanding to a phenomenon as close and as complex as audiovisual communication, which uses sound and image at once? What is the role of the audiovisual communication researcher in contributing new approaches to this object of study? From this perspective, we think the new task of the audiovisual communication researcher is to build a theory that is less interpretive-subjective, and to direct observations toward segmented findings that are demonstrable, repeatable and self-questioning; that is, to study, elaborate and construct a theory with a new and greater methodological rigor.

  11. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    This study examined whether the right-left prevalence effect occurs when the two-dimensional stimuli are audiovisual, as well as whether there is cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether adding a pitch dimension to the auditory stimulus facilitates vertical coding through the spatial-musical association of response codes (SMARC) effect, in which pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than for visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension…

  12. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure the similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOMs) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding visual features is used. Depending on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject, using a set of simple audio-visual features. The data material for the training process consists of 44 labeled sentences in German with a balanced phoneme repertoire. As a result, it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) the maps show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst audiovisual speech percepts.
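    As an illustration of the approach, the following is a minimal, numpy-only SOM sketch trained on toy audiovisual feature vectors; the map size, feature layout, and decay schedules are assumptions, not the poster's settings.

        import numpy as np

        # Minimal self-organizing map on toy audiovisual feature vectors
        # (assumed layout: audio features concatenated with visual lip features).
        rng = np.random.default_rng(1)
        data = rng.normal(size=(500, 8))                 # 500 frames x 8 AV features
        grid_h, grid_w, dim = 6, 6, data.shape[1]
        weights = rng.normal(size=(grid_h, grid_w, dim)) # rectangular map
        rows, cols = np.indices((grid_h, grid_w))

        for step, x in enumerate(data):
            lr    = 0.5 * np.exp(-step / 250)                        # decaying learning rate
            sigma = 2.0 * np.exp(-step / 250)                        # shrinking neighborhood
            d = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)         # best-matching unit
            h = np.exp(-((rows - bi)**2 + (cols - bj)**2) / (2*sigma**2))
            weights += lr * h[..., None] * (x - weights)             # pull neighborhood toward x

        # After training, the similarity of two AV percepts can be read off as
        # the map distance between their best-matching units.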

  13. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available We propose a novel approach to video classification based on the analysis of temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze audiovisual documents in order to find events that characterize their content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents, and the results show the efficiency of our proposals.
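    A toy sketch of the idea (not the authors' implementation): for each pair of event types, count how often their segments stand in a small set of coarse temporal relations; the relations used below are a simplification, and a document's TRMs could then be compared with any vector similarity measure such as cosine similarity.

        # Toy Temporal Relation Matrix over three coarse relations.
        def trm(segs_a, segs_b):
            """segs_*: lists of (start, end) for two event types; returns counts."""
            counts = {"before": 0, "overlaps": 0, "during": 0}
            for a0, a1 in segs_a:
                for b0, b1 in segs_b:
                    if a1 <= b0:
                        counts["before"] += 1
                    elif a0 >= b0 and a1 <= b1:
                        counts["during"] += 1
                    elif a0 < b1 and b0 < a1:
                        counts["overlaps"] += 1
            return counts

        speech = [(0, 2), (6, 9)]    # hypothetical speech segments (seconds)
        music  = [(3, 7)]            # hypothetical music segment
        print(trm(speech, music))    # {'before': 1, 'overlaps': 1, 'during': 0}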

  14. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

    Full Text Available The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that responses to all stimuli were significantly delayed for PD compared to NC (all p < 0.05), whereas the race-model analysis revealed no audiovisual integration gain for the PD patients (p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
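    For context, the race-model analysis referred to here is commonly implemented as a test of Miller's race-model inequality; a minimal sketch on synthetic reaction times follows (all distributions, sample sizes, and the time grid are illustrative).

        import numpy as np

        # Sketch of Miller's race-model inequality test: the audiovisual RT CDF
        # is compared against the sum of the unimodal CDFs; values above the
        # bound indicate integration beyond statistical facilitation.
        rng = np.random.default_rng(2)
        rt_a  = rng.normal(420, 50, 200)   # auditory-only reaction times (ms)
        rt_v  = rng.normal(440, 50, 200)   # visual-only
        rt_av = rng.normal(400, 50, 200)   # audiovisual

        def cdf(samples, t):
            return (samples[:, None] <= t).mean(axis=0)

        t = np.linspace(250, 600, 71)
        bound = np.clip(cdf(rt_a, t) + cdf(rt_v, t), 0, 1)    # race-model bound
        violation = cdf(rt_av, t) - bound
        print("max violation: %.3f" % violation.max())        # > 0 suggests integration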

  15. Audiovisual correspondence between musical timbre and visual shapes.

    Directory of Open Access Journals (Sweden)

    Mohammad eAdeli

    2014-05-01

    Full Text Available This article investigates the cross-modal correspondences between musical timbre and shapes. Previous studies of audio-visual correspondences have mostly used features such as pitch, loudness, light intensity, visual size, and color characteristics, and most have utilized simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes, as well as revisiting some previous findings with more complex stimuli. 119 subjects (31 females and 88 males) participated in the online experiment, including 36 self-reported professional musicians, 47 self-reported amateur musicians, and 36 self-reported non-musicians; 31 subjects also reported synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes; harsh timbres with red, yellow or dark gray sharp angular shapes; and timbres having elements of both softness and harshness with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows the design of substitution systems that might help the blind perceive shapes through timbre.

  16. On the spatial specificity of audiovisual crossmodal exogenous cuing effects.

    Science.gov (United States)

    Lee, Jae; Spence, Charles

    2017-06-01

    It is generally accepted that the presentation of an auditory cue will direct an observer's spatial attention to the region of space from which it originates, and will therefore facilitate responses to visual targets presented there rather than at a different position within the cued hemifield. To date, however, surprisingly little published evidence supports such within-hemifield crossmodal exogenous spatial cuing effects. Here, we report two experiments designed to investigate within- and between-hemifield spatial cuing effects in the case of audiovisual exogenous covert orienting. Auditory cues were presented from one of four frontal loudspeakers (two on either side of central fixation). There were eight possible visual target locations (one above and one below each of the loudspeakers). The auditory cues were evenly separated laterally by 30° in Experiment 1 and by 10° in Experiment 2. The potential cue and target locations were separated vertically by approximately 19° in Experiment 1 and by 4° in Experiment 2. On each trial, participants made a speeded elevation (i.e., up vs. down) discrimination response to the visual target following the presentation of a spatially nonpredictive auditory cue. Within-hemifield spatial cuing effects were observed only when the auditory cues were presented from the inner locations. Between-hemifield spatial cuing effects were observed in both experiments. Taken together, these results demonstrate that crossmodal exogenous shifts of spatial attention depend on the eccentricity of both the cue and the target in a way that has not been made explicit by previous research. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Audiovisual distraction reduces pain perception during shockwave lithotripsy.

    Science.gov (United States)

    Marsdin, Emma; Noble, Jeremy G; Reynard, John M; Turney, Benjamin W

    2012-05-01

    Lithotripsy is an established method to fragment kidney stones that can be performed without general anesthesia in the outpatient setting. Discomfort and/or noise, however, may deter some patients. Audiovisual (AV) distraction has been shown to reduce sedoanalgesic requirements and improve patient satisfaction in nonurologic settings, but to our knowledge this has not been investigated for lithotripsy. This randomized controlled trial was designed to test the hypothesis that AV distraction can reduce perceived pain during lithotripsy. All patients in the study received identical analgesia before a complete session of lithotripsy on a fixed-site Storz Modulith SLX F2 lithotripter. Patients were randomized to two groups: one group (n=61) received AV distraction via a wall-mounted 32″ (82 cm) television with wireless headphones; the other group (n=57) received no AV distraction. The mean intensity of treatment was comparable in both groups. Patients used a visual analogue scale (0-10) to record independent pain and distress scores, and a nonverbal pain score (0-4) was documented by the radiographer during the procedure. In the group that received AV distraction, all measures of pain perception were significantly lower: the patient-reported pain score was reduced from a mean of 6.1 to 2.4 (P<0.0001), the distress score from a mean of 4.4 to 1.0 (P=0.0001), and the mean nonverbal score recorded by the radiographer from 1.5 to 0.5 (P<0.0001). AV distraction significantly lowered patients' reported pain and distress scores, which correlated with the nonverbal scores reported by the radiographer. We conclude that AV distraction is a simple method of improving acceptance of lithotripsy and optimizing treatment.

  18. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category, and we decoded the semantic categories from these brain patterns, with decoding accuracy reflecting the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, whereas the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, facilitating the neural representation of semantic categories or concepts. Furthermore, we analyzed brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG); both the strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
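    A minimal sketch of the two pattern measures on synthetic data: a within-class reproducibility index as the mean pairwise correlation of same-category patterns, and a simple leave-one-out nearest-class-mean decoder; the study's actual pattern search, feature selection, and classifier may differ.

        import numpy as np

        # Synthetic placeholders: 20 patterns x 100 voxels per category, with a
        # small opposite-signed mean shift so the classes are separable.
        rng = np.random.default_rng(3)
        old   = rng.normal(0.0, 1.0, (20, 100)) + np.linspace(0, 1, 100)
        young = rng.normal(0.0, 1.0, (20, 100)) - np.linspace(0, 1, 100)

        def reproducibility(patterns):
            """Mean pairwise correlation of patterns from one category."""
            c = np.corrcoef(patterns)
            return c[np.triu_indices_from(c, k=1)].mean()

        def nearest_mean_accuracy(a, b):
            """Leave-one-out nearest-class-mean decoding between two classes."""
            hits = 0
            for data, other in ((a, b), (b, a)):
                for i in range(len(data)):
                    m_own = np.delete(data, i, axis=0).mean(0)
                    m_oth = other.mean(0)
                    hits += np.linalg.norm(data[i] - m_own) < np.linalg.norm(data[i] - m_oth)
            return hits / (len(a) + len(b))

        print(reproducibility(old), nearest_mean_accuracy(old, young))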

  19. Presentación: Narrativas de no ficción audiovisual, interactiva y transmedia

    Directory of Open Access Journals (Sweden)

    Arnau Gifreu Castells

    2015-03-01

    Full Text Available Number 8 of Obra Digital Revista de Comunicación explores audiovisual, interactive and transmedia non-fiction narrative forms. Throughout the history of communication, the field of non-fiction has always been regarded as lesser than its fictional counterpart. The same holds in research, where studies of audiovisual, interactive and transmedia fiction narratives have always been one step ahead of studies of non-fiction. This monograph proposes a theoretical and practical approach to non-fiction narrative forms such as documentary, reportage, the essay, educational formats and institutional films, in order to offer a picture of their current position in the media ecosystem. Keywords: Non-fiction, Audiovisual Narrative, Interactive Narrative, Transmedia Narrative.

  20. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. The participants' task was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area from 190-210 ms for 0.5 kHz auditory stimuli, from 170-200 ms for 1 kHz stimuli, from 140-200 ms for 2.5 kHz stimuli, and from 100-200 ms for 5 kHz stimuli. These findings suggest that a higher-frequency sound paired with visual stimuli may be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with a fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
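    A minimal sketch (synthetic waveforms, illustrative threshold) of the additive-model scan such ERP analyses rely on: compare the AV response against A + V in sliding windows and report windows where the difference is large.

        import numpy as np

        # Sliding-window additive-model scan on synthetic single-channel averages.
        t = np.arange(0, 0.4, 0.001)
        A  = np.sin(2*np.pi*5*t)                               # toy auditory average
        V  = 0.3*np.cos(2*np.pi*3*t)                           # toy visual average
        AV = A + V - 0.4*np.exp(-0.5*((t-0.17)/0.02)**2)       # extra dip near 170 ms
        diff = AV - (A + V)                                    # integration signature

        for w0 in np.arange(0.0, 0.38, 0.02):                  # 20-ms sliding windows
            win = (t >= w0) & (t < w0 + 0.02)
            if abs(diff[win].mean()) > 0.1:                    # illustrative threshold
                print("integration window around %d-%d ms" % (w0*1000, (w0+0.02)*1000))

    In a real analysis the threshold step would be replaced by a statistical test across trials or participants within each window; the scan across frequencies in this study would repeat the comparison per tone frequency.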