WorldWideScience

Sample records for audiovisual non-verbal dynamic

  1. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  2. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press.

  3. Non-verbal Full Body Emotional and Social Interaction: A Case Study on Multimedia Systems for Active Music Listening

    Science.gov (United States)

    Camurri, Antonio

    Research on HCI and multimedia systems for art and entertainment based on non-verbal, full-body, emotional and social interaction is the main topic of this paper. A short review of previous research projects in this area at our centre is presented, to introduce the main issues discussed in the paper. In particular, a case study based on novel paradigms of social active music listening is presented. The active music listening experience enables users to dynamically mould the expressive performance of music and of audiovisual content. This research is partially supported by the EU FP7 ICT Project SAME (Sound and Music for Everyone, Everyday, Everywhere, Every Way, www.sameproject.eu).

  4. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or presented with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues to be assessed. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Directory of Open Access Journals (Sweden)

    Raphaël Fargier

    Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).

  6. Neurophysiological Modulations of Non-Verbal and Verbal Dual-Tasks Interference during Word Planning.

    Science.gov (United States)

    Fargier, Raphaël; Laganaro, Marina

    2016-01-01

    Running a concurrent task while speaking clearly interferes with speech planning, but whether verbal vs. non-verbal tasks interfere with the same processes is virtually unknown. We investigated the neural dynamics of dual-task interference on word production using event-related potentials (ERPs) with either tones or syllables as concurrent stimuli. Participants produced words from pictures in three conditions: without distractors, while passively listening to distractors and during a distractor detection task. Production latencies increased for tasks with higher attentional demand and were longer for syllables relative to tones. ERP analyses revealed common modulations by dual-task for verbal and non-verbal stimuli around 240 ms, likely corresponding to lexical selection. Modulations starting around 350 ms prior to vocal onset were only observed when verbal stimuli were involved. These later modulations, likely reflecting interference with phonological-phonetic encoding, were observed only when overlap between tasks was maximal and the same underlying neural circuits were engaged (cross-talk).
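    The ERP analyses described in these records reduce to a common recipe: average stimulus-locked EEG epochs per condition, then compare component amplitudes (e.g., the auditory N1) between conditions. Below is a minimal sketch with NumPy on synthetic data; the sampling rate, trial counts and the 80-130 ms N1 window are illustrative assumptions, not any study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                 # sampling rate in Hz (illustrative)
t = np.arange(-0.1, 0.5, 1 / fs)         # epoch window: -100 ms to +500 ms

def synth_epochs(n1_amp, n_trials=60):
    """Synthetic stimulus-locked epochs with an N1-like negativity near 100 ms."""
    n1 = n1_amp * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))
    return -n1 + rng.normal(0.0, 2.0, size=(n_trials, t.size))

auditory_only = synth_epochs(n1_amp=8.0)   # larger N1 in the auditory-only condition
audiovisual = synth_epochs(n1_amp=5.0)     # attenuated N1 in the audiovisual condition

def n1_amplitude(epochs):
    """Mean amplitude of the across-trial average ERP in an 80-130 ms window."""
    erp = epochs.mean(axis=0)
    window = (t >= 0.08) & (t <= 0.13)
    return erp[window].mean()

# Positive difference = the audiovisual N1 is less negative, i.e. suppressed.
suppression = n1_amplitude(audiovisual) - n1_amplitude(auditory_only)
print(f"N1 suppression (AV minus A-only): {suppression:.2f} uV")
```

    Real pipelines add filtering, artifact rejection and group-level statistics; the point here is only the epoch-average-and-compare logic behind "N1 suppression".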

  7. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  8. Verbal and non-verbal behaviour and patient perception of communication in primary care: an observational study.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Gashi, Shkelzen; Bikker, Annemieke; Mercer, Stewart

    2015-06-01

    Few studies have assessed the importance of a broad range of verbal and non-verbal consultation behaviours. To explore the relationship of observer ratings of behaviours of videotaped consultations with patients' perceptions. Observational study in general practices close to Southampton, Southern England. Verbal and non-verbal behaviour was rated by independent observers blind to outcome. Patients completed the Medical Interview Satisfaction Scale (MISS; primary outcome) and questionnaires addressing other communication domains. In total, 275/360 consultations from 25 GPs had useable videotapes. Higher MISS scores were associated with slight forward lean (a 0.02 increase for each degree of lean, 95% confidence interval [CI] = 0.002 to 0.03), the number of gestures (0.08, 95% CI = 0.01 to 0.15), 'back-channelling' (for example, saying 'mmm') (0.11, 95% CI = 0.02 to 0.2), and social talk (0.29, 95% CI = 0.04 to 0.54). Starting the consultation with professional coolness ('aloof') was helpful and optimism unhelpful. Finishing with non-verbal 'cut-offs' (for example, looking away), being professionally cool ('aloof'), or patronising ('infantilising') resulted in poorer ratings. Physical contact was also important, but not traditional verbal communication. These exploratory results require confirmation, but suggest that patients may be responding to several non-verbal behaviours and non-specific verbal behaviours, such as social talk and back-channelling, more than traditional verbal behaviours. A changing consultation dynamic may also help: from professional 'coolness' at the beginning of the consultation to becoming warmer and avoiding non-verbal cut-offs at the end. © British Journal of General Practice 2015.
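    The interval estimates quoted above are standard Wald-type 95% confidence intervals around regression coefficients. As a sketch of how such an interval is formed from a coefficient and its standard error (the standard error below is an illustrative assumption, not the study's data):

```python
def wald_ci_95(coef: float, se: float) -> tuple[float, float]:
    """95% Wald confidence interval: coefficient +/- 1.96 x standard error."""
    half_width = 1.96 * se
    return (coef - half_width, coef + half_width)

# Illustrative only: a 0.02-per-degree forward-lean effect with an assumed
# standard error, yielding an interval of similar width to those reported.
lo, hi = wald_ci_95(coef=0.02, se=0.0071)
print(f"95% CI = {lo:.3f} to {hi:.3f}")
```

    An interval excluding zero, as for forward lean above, corresponds to statistical significance at the 5% level.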

  9. Evaluating verbal and non-verbal communication skills, in an ethnogeriatric OSCE.

    Science.gov (United States)

    Collins, Lauren G; Schrimmer, Anne; Diamond, James; Burke, Janice

    2011-05-01

    Communication during medical interviews plays a large role in patient adherence, satisfaction with care, and health outcomes. Both verbal and non-verbal communication (NVC) skills are central to the development of rapport between patients and healthcare professionals. The purpose of this study was to assess the role of non-verbal and verbal communication skills on evaluations by standardized patients during an ethnogeriatric Objective Structured Clinical Examination (OSCE). Interviews from 19 medical students, residents, and fellows in an ethnogeriatric OSCE were analyzed. Each interview was videotaped and evaluated on a 14 item verbal and an 8 item non-verbal communication checklist. The relationship between verbal and non-verbal communication skills and interview evaluations by standardized patients was examined using correlational analyses. Maintaining adequate facial expression (FE), using affirmative gestures (AG), and limiting both unpurposive movements (UM) and hand gestures (HG) had a significant positive effect on perception of interview quality during this OSCE. Non-verbal communication skills played a role in perception of overall interview quality as well as perception of culturally competent communication. Incorporating formative and summative evaluation of both verbal and non-verbal communication skills may be a critical component of curricular innovations in ethnogeriatrics, such as the OSCE. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  10. [Non-verbal communication in Alzheimer's disease].

    Science.gov (United States)

    Schiaratura, Loris Tamara

    2008-09-01

    This review underlines the importance of non-verbal communication in Alzheimer's disease. A social psychological perspective of communication is privileged. Non-verbal behaviors such as looks, head nods, hand gestures, body posture or facial expression provide a lot of information about interpersonal attitudes, behavioral intentions, and emotional experiences. Therefore they play an important role in the regulation of interaction between individuals. Non-verbal communication remains effective in Alzheimer's disease, even in the late stages. Patients still produce non-verbal signals and are responsive to others. Nevertheless, few studies have been devoted to the social factors influencing the non-verbal exchange. Misidentification and misinterpretation of behaviors may have negative consequences for the patients. Thus, improving the comprehension of, and the response to, non-verbal behavior would improve the quality of the interaction and, in turn, the physical and psychological well-being of both patients and caregivers. The role of non-verbal behavior in social interactions should be approached from an integrative and functional point of view.

  11. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

    This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, difficulties imposed by the verbal foundation of knowledge, their reduced use within libraries, and ways toward a more consistent analysis of audiovisual media, supported by the semiotics of Charles Peirce.

  12. Interactive use of communication by verbal and non-verbal autistic children.

    Science.gov (United States)

    Amato, Cibelle Albuquerque de la Higuera; Fernandes, Fernanda Dreux Miranda

    2010-01-01

    Communication of autistic children. To assess the communication functionality of verbal and non-verbal children of the autistic spectrum and to identify possible associations amongst the groups. Subjects were 20 children of the autistic spectrum divided into two groups: V, with 10 verbal children, and NV, with 10 non-verbal children, with ages varying between 2y10m and 10y6m. All subjects were video recorded during 30 minutes of spontaneous interaction with their mothers. The samples were analyzed according to the functional communicative profile, and comparisons within and between groups were conducted. Data referring to the occupation of communicative space suggest that there is an even balance between each child and his mother. The number of communicative acts per minute shows a clear difference between verbal and non-verbal children. Both verbal and non-verbal children mostly use gestural communicative means in their interactions. Data about the use of interpersonal communicative functions point to the autistic children's great interactive impairment. The characterization of the functional communicative profile proposed in this study confirmed the autistic children's difficulties with interpersonal communication and that these difficulties do not depend on the preferred communicative means.

  13. Condom use: exploring verbal and non-verbal communication strategies among Latino and African American men and women.

    Science.gov (United States)

    Zukoski, Ann P; Harvey, S Marie; Branch, Meredith

    2009-08-01

    A growing body of literature provides evidence of a link between communication with sexual partners and safer sexual practices, including condom use. More research is needed that explores the dynamics of condom communication including gender differences in initiation, and types of communication strategies. The overall objective of this study was to explore condom use and the dynamics surrounding condom communication in two distinct community-based samples of African American and Latino heterosexual couples at increased risk for HIV. Based on 122 in-depth interviews, 80% of women and 74% of men reported ever using a condom with their primary partner. Of those who reported ever using a condom with their current partner, the majority indicated that condom use was initiated jointly by men and women. In addition, about one-third of the participants reported that the female partner took the lead and let her male partner know she wanted to use a condom. A sixth of the sample reported that men initiated use. Although over half of the respondents used bilateral verbal strategies (reminding, asking and persuading) to initiate condom use, one-fourth used unilateral verbal strategies (commanding and threatening to withhold sex). A smaller number reported using non-verbal strategies involving condoms themselves (e.g. putting a condom on or getting condoms). The results suggest that interventions designed to improve condom use may need to include both members of a sexual dyad and focus on improving verbal and non-verbal communication skills of individuals and couples.

  14. Dissociation of neural correlates of verbal and non-verbal visual working memory with different delays

    Directory of Open Access Journals (Sweden)

    Endestad Tor

    2007-10-01

    Background: Dorsolateral prefrontal cortex (DLPFC), posterior parietal cortex, and regions in the occipital cortex have been identified as neural sites for visual working memory (WM). The exact involvement of the DLPFC in verbal and non-verbal working memory processes, and how these processes depend on the time-span for retention, remains disputed. Methods: We used functional MRI to explore the neural correlates of the delayed discrimination of Gabor stimuli differing in orientation. Twelve subjects were instructed to code the relative orientation either verbally or non-verbally with memory delays of short (2 s) or long (8 s) duration. Results: Blood-oxygen level dependent (BOLD) 3-Tesla fMRI revealed significantly more activity for the short verbal condition compared to the short non-verbal condition in bilateral superior temporal gyrus, insula and supramarginal gyrus. Activity in the long verbal condition was greater than in the long non-verbal condition in left language-associated areas (STG) and bilateral posterior parietal areas, including precuneus. Interestingly, right DLPFC and bilateral superior frontal gyrus were more active in the non-verbal long delay condition than in the long verbal condition. Conclusion: The results point to a dissociation between the cortical sites involved in verbal and non-verbal WM for long and short delays. Right DLPFC seems to be engaged in non-verbal WM tasks, especially for long delays. Furthermore, the results indicate that even slightly different memory maintenance intervals engage largely differing networks; this novel finding may explain differing results in previous verbal/non-verbal WM studies.

  15. The role of interaction of verbal and non-verbal means of communication in different types of discourse

    OpenAIRE

    Orlova M. А.

    2010-01-01

    Communication relies on verbal and non-verbal interaction. To be most effective, group members need to improve verbal and non-verbal communication. Non-verbal communication fulfills functions within groups that are sometimes difficult to communicate verbally. But interpreting non-verbal messages requires a great deal of skill because multiple meanings abound in these messages.

  16. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli, made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before bistable decision? We argue that this is not the case, as the interference had slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability, especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  17. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies, and lipreading can then give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic, bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated in amplitude an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  18. Motor system contributions to verbal and non-verbal working memory

    Directory of Open Access Journals (Sweden)

    Diana A Liao

    2014-09-01

    Working memory (WM) involves the ability to maintain and manipulate information held in mind. Neuroimaging studies have shown that secondary motor areas activate during WM for verbal content (e.g., words or letters), in the absence of primary motor area activation. This activation pattern may reflect an inner speech mechanism supporting online phonological rehearsal. Here, we examined the causal relationship between motor system activity and WM processing by using transcranial magnetic stimulation (TMS) to manipulate motor system activity during WM rehearsal. We tested WM performance for verbalizable (words and pseudowords) and non-verbalizable (Chinese characters) visual information. We predicted that disruption of motor circuits would specifically affect WM processing of verbalizable information. We found that TMS targeting motor cortex slowed response times on verbal WM trials with high (pseudoword) vs. low (real word) phonological load. However, non-verbal WM trials were also significantly slowed with motor TMS. WM performance was unaffected by sham stimulation or TMS over visual cortex. Self-reported use of motor strategy predicted the degree of motor stimulation disruption on WM performance. These results provide evidence of the motor system’s contributions to verbal and non-verbal WM processing. We speculate that the motor system supports WM by creating motor traces consistent with the type of information being rehearsed during maintenance.

  19. The Effects of Verbal and Non-Verbal Features on the Reception of DRTV Commercials

    Directory of Open Access Journals (Sweden)

    Smiljana Komar

    2016-12-01

    Analyses of consumer response are important for successful advertising, as they help advertisers to find new, original and successful ways of persuasion. Successful advertisements have to boost the product’s benefits, but they also have to appeal to consumers’ emotions. In TV advertisements, this is done by means of verbal and non-verbal strategies. The paper presents the results of an empirical investigation whose purpose was to examine the viewers’ emotional responses to a DRTV commercial induced by different verbal and non-verbal features, the amount of credibility and persuasiveness of the commercial, and its general acceptability. Our findings indicate that (1) an overload of the same verbal and non-verbal information decreases persuasion; and (2) highly marked prosodic delivery is perceived as either exaggerated or funny, while the speaker is perceived as annoying.

  20. Interpersonal Interactions in Instrumental Lessons: Teacher/Student Verbal and Non-Verbal Behaviours

    Science.gov (United States)

    Zhukov, Katie

    2013-01-01

    This study examined verbal and non-verbal teacher/student interpersonal interactions in higher education instrumental music lessons. Twenty-four lessons were videotaped and teacher/student behaviours were analysed using a researcher-designed instrument. The findings indicate predominance of student and teacher joke among the verbal behaviours with…

  1. Verbal and non-verbal praxic abilities in stutterers

    Directory of Open Access Journals (Sweden)

    Natália Casagrande Brabo

    2009-12-01

    PURPOSE: to characterize verbal and non-verbal praxic abilities in adult stutterers. METHODS: 40 individuals aged 18 years or over, male and female, took part in the study: 20 stuttering adults and 20 adults without communication complaints. To assess verbal and non-verbal praxis, participants were given the Verbal and Non-verbal Apraxia Assessment Protocol (Martins and Ortiz, 2004). RESULTS: for verbal praxic abilities, there was a statistically significant difference between the groups in the number of typical and atypical disfluencies. Regarding disfluency types, the groups differed significantly on typical disfluencies only in phrase repetition, whereas for atypical disfluencies there were statistically significant differences in blocks, syllable repetitions and prolongations. For non-verbal praxic abilities, no statistically significant differences were observed between the groups in lip, tongue and jaw movements, performed in isolation or in sequence. CONCLUSION: for verbal praxic abilities, stutterers showed a higher frequency of speech disruptions, both typical and atypical disfluencies, than the control group. In isolated and sequenced praxic movements, that is, non-verbal praxic abilities, stutterers did not differ from fluent speakers, which does not confirm the hypothesis that early stuttering onset could compromise non-verbal praxic abilities.

  2. Parts of Speech in Non-typical Function: (A)symmetrical Encoding of Non-verbal Predicates in Erzya

    Directory of Open Access Journals (Sweden)

    Rigina Turunen

    2011-01-01

    Erzya non-verbal conjugation refers to symmetric paradigms in which non-verbal predicates behave morphosyntactically in a similar way to verbal predicates. Notably, though, non-verbal conjugational paradigms are asymmetric, which is seen as an outcome of paradigmatic neutralisation in less frequent/less typical contexts. For non-verbal predicates it is not obligatory to display the same amount of behavioural potential as it is for verbal predicates, and the lexical class of non-verbal predicate operates in such a way that adjectival predicates are more likely to be conjugated than nominals. Further, besides symmetric paradigms and constructions, in Erzya there are non-verbal predicate constructions which display a more overt structural encoding than do verbal ones, namely, copula constructions. Complexity in the domain of non-verbal predication in Erzya decreases the symmetry of the paradigms. Complexity increases in asymmetric constructions, as well as in paradigmatic neutralisation when non-verbal predicates cannot be inflected in all the tenses and moods occurring in verbal predication. The results would be the reverse if we were to measure complexity in terms of the morphological structure. The asymmetric features in non-verbal predication are motivated language-externally, because non-verbal predicates refer to states and occur less frequently as predicates than verbal categories. The symmetry of the paradigms and constructions is motivated language-internally: a grammatical system with fewer rules is economical.

  3. Verbal and Non-Verbal Communication and Coordination in Mission Control

    Science.gov (United States)

    Vinkhuyzen, Erik; Norvig, Peter (Technical Monitor)

    1998-01-01

    In this talk I will present some video-materials gathered in Mission Control during simulations. The focus of the presentation will be on verbal and non-verbal communication between the officers in the front and backroom, especially the practices that have evolved around a peculiar communications technology called voice loops.

  4. A comprehensive model of audiovisual perception: both percept and temporal dynamics.

    Directory of Open Access Journals (Sweden)

    Patricia Besson

    Full Text Available The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be solved by jointly processing the information, as well as by introducing constraints on the way this multisensory information is handled. This process and its result--the percept--depend on the contextual conditions in which perception takes place. To date, perception has been investigated and modeled on the basis of only one of its two dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both dimensions and thus capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network that infers the different percepts and the dynamics of the process. Context-specific independence analyses enable us to use the model's structure to explore directly how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus-driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.
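The joint processing described above can be illustrated with the standard reliability-weighted (Bayesian) cue-combination model for spatial localization. This is a generic textbook sketch, not the authors' network; the function name and example values are ours:

```python
import numpy as np

def fuse_av_location(x_a, sigma_a, x_v, sigma_v):
    """Maximum-a-posteriori fusion of an auditory and a visual location
    estimate under independent Gaussian noise and a flat prior.
    Each cue is weighted by its reliability (inverse variance)."""
    w_a = 1.0 / sigma_a**2
    w_v = 1.0 / sigma_v**2
    x_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    sigma_hat = np.sqrt(1.0 / (w_a + w_v))  # fused estimate is more precise than either cue
    return x_hat, sigma_hat

# Vision is usually the more reliable spatial cue, so the fused estimate
# is pulled toward the visual location (the ventriloquist effect).
x_hat, sigma_hat = fuse_av_location(x_a=10.0, sigma_a=4.0, x_v=0.0, sigma_v=1.0)
print(round(x_hat, 2), round(sigma_hat, 2))  # prints: 0.59 0.97
```

The same machinery generalizes to the fusion-versus-segregation decision studied in the paper by placing a prior on whether the two signals share a common cause.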

  5. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and to build an accurate audio-visual speech recognition model without a frame-independence assumption. Experimental results on Tibetan speech data recorded in real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  6. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches to investigating audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of processing efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: a clear auditory signal, an S/N ratio of -12 dB, and an S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration at low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed greater audiovisual amplitude relative to the unisensory signals at the lower auditory S/N ratios (higher capacity/efficient integration) than at the high S/N ratio (lower capacity/inefficient integration). The data are consistent with an interactive framework of integration, in which auditory recognition is influenced by speech-reading as a function of signal clarity.
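The capacity coefficient cited above (Townsend and Nozawa, 1995) has a standard definition, C(t) = H_AV(t) / (H_A(t) + H_V(t)), where H(t) = -ln S(t) is the cumulative hazard of the RT distribution. A minimal empirical estimator might look like this; the function names and the tail correction are our own simplifications, and published analyses typically use smoothed (e.g., Nelson-Aalen) hazard estimates:

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Estimate the cumulative hazard H(t) = -ln S(t) from a sample of
    reaction times, using the empirical survivor function S(t)."""
    rts = np.asarray(rts, dtype=float)
    surv = np.mean(rts > t)
    surv = max(surv, 1.0 / (len(rts) + 1))  # avoid log(0) in the far tail
    return -np.log(surv)

def capacity(av_rts, a_rts, v_rts, t):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)).
    C(t) > 1: integration more efficient than independent parallel channels;
    C(t) < 1: limited-capacity (inefficient) integration."""
    return cumulative_hazard(av_rts, t) / (
        cumulative_hazard(a_rts, t) + cumulative_hazard(v_rts, t)
    )
```

In practice C(t) is evaluated over a grid of t values spanning the pooled RT range; for example, audiovisual RTs that are uniformly faster than both unisensory conditions yield C(t) > 1 at mid-range t.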

  7. Reducing Information's Speed Improves Verbal Cognition and Behavior in Autism: A 2-Cases Report.

    Science.gov (United States)

    Tardif, Carole; Latzko, Laura; Arciszewski, Thomas; Gepner, Bruno

    2017-06-01

    According to the temporal theory of autism spectrum disorders (ASDs), audiovisual changes in the environment, particularly those linked to facial and verbal language, are often too fast to be faced, perceived, and/or interpreted online by many children with ASD, which could help explain their facial, verbal, and/or socioemotional interaction impairments. Our goal here was to test for the first time the impact of slowed-down audiovisual information on verbal cognition and behavior in 2 boys with ASD and verbal delay. Across 15 experimental sessions over 4 months, both boys were presented with various stimuli (e.g., pictures, words, sentences, cartoons) and were then asked questions or given instructions regarding the stimuli. The audiovisual stimuli and instructions/questions were presented on a computer screen and were always displayed twice: at real-time speed (RTS) and at slowed-down speed (SDS) using the software Logiral. We scored the boys' verbal cognition performance (i.e., ability to understand questions/instructions and answer them verbally/nonverbally) and their behavioral reactions (i.e., attention, verbal/nonverbal communication, social reciprocity), and analyzed the effects of speed and order of stimulus presentation on these factors. Both participants exhibited significant improvements in verbal cognition performance with SDS presentation compared with RTS presentation, and they scored better with RTS presentation when SDS presentation came before rather than after it. Behavioral reactions were also improved in SDS conditions compared with RTS conditions. This initial evidence of a positive impact of slowed-down audiovisual information on verbal cognition should be tested in a large cohort of children with ASD and associated speech/language impairments. Copyright © 2017 by the American Academy of Pediatrics.

  8. The similar effects of verbal and non-verbal intervening tasks on word recall in an elderly population.

    Science.gov (United States)

    Williams, B R; Sullivan, S K; Morra, L F; Williams, J R; Donovick, P J

    2014-01-01

    Vulnerability to retroactive interference has been shown to increase with cognitive aging. Consistent with the findings of the memory and aging literature, the authors of the California Verbal Learning Test-II (CVLT-II) suggest that a non-verbal task be administered during the test's delay interval to minimize the effects of retroactive interference on delayed recall. The goal of the present study was to determine the extent to which retroactive interference caused by non-verbal and verbal intervening tasks affects recall of verbal information in non-demented older adults. The effects of retroactive interference on word recall during Long-Delay recall on the CVLT-II were evaluated. Participants included 85 adults aged 60 and older. During a 20-minute delay interval on the CVLT-II, participants received either a verbal (WAIS-III Vocabulary or Peabody Picture Vocabulary Test-IIIB) or non-verbal (Raven's Standard Progressive Matrices or WAIS-III Block Design) intervening task. As in previous research with young adults (Williams & Donovick, 2008), older adults recalled the same number of words across all groups, regardless of the type of intervening task. These findings suggest that the administration of verbal intervening tasks during the CVLT-II does not elicit more retroactive interference than non-verbal intervening tasks, and thus verbal tasks need not be avoided during the delay interval of the CVLT-II.

  9. Network structure underlying resolution of conflicting non-verbal and verbal social information.

    Science.gov (United States)

    Watanabe, Takamitsu; Yahata, Noriaki; Kawakubo, Yuki; Inoue, Hideyuki; Takano, Yosuke; Iwashiro, Norichika; Natsubori, Tatsunobu; Takao, Hidemasa; Sasaki, Hiroki; Gonoi, Wataru; Murakami, Mizuho; Katsura, Masaki; Kunimatsu, Akira; Abe, Osamu; Kasai, Kiyoto; Yamasue, Hidenori

    2014-06-01

    Social judgments often require resolution of incongruity in communication contents. Although previous studies revealed that such conflict resolution recruits brain regions including the medial prefrontal cortex (mPFC) and posterior inferior frontal gyrus (pIFG), functional relationships and networks among these regions remain unclear. In this functional magnetic resonance imaging study, we investigated the functional dissociation and networks by measuring human brain activity during resolving incongruity between verbal and non-verbal emotional contents. First, we found that the conflict resolutions biased by the non-verbal contents activated the posterior dorsal mPFC (post-dmPFC), bilateral anterior insula (AI) and right dorsal pIFG, whereas the resolutions biased by the verbal contents activated the bilateral ventral pIFG. In contrast, the anterior dmPFC (ant-dmPFC), bilateral superior temporal sulcus and fusiform gyrus were commonly involved in both of the resolutions. Second, we found that the post-dmPFC and right ventral pIFG were hub regions in networks underlying the non-verbal- and verbal-content-biased resolutions, respectively. Finally, we revealed that these resolution-type-specific networks were bridged by the ant-dmPFC, which was recruited for the conflict resolutions earlier than the two hub regions. These findings suggest that, in social conflict resolutions, the ant-dmPFC selectively recruits one of the resolution-type-specific networks through its interaction with resolution-type-specific hub regions. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. A qualitative study on non-verbal sensitivity in nursing students.

    Science.gov (United States)

    Chan, Zenobia C Y

    2013-07-01

    To explore nursing students' perceptions of the meanings and roles of non-verbal communication and sensitivity, and to understand how different factors influence their non-verbal communication style. The importance of non-verbal communication in the health arena lies in the need for good communication for efficient healthcare delivery. Understanding nursing students' non-verbal communication with patients, and the factors that influence it, is essential to prepare them for future field work. Qualitative approach based on 16 in-depth interviews. Sixteen nursing students from the Master of Nursing and the Year 3 Bachelor of Nursing program were interviewed. Major points in the recorded interviews were marked down for content analysis. Three main themes were developed: (1) understanding students' non-verbal communication, which shows how nursing students value and experience non-verbal communication in the nursing context; (2) factors that influence the expression of non-verbal cues, which reveals the effect of patients' demographic background (gender, age, social status and educational level) and participants' characteristics (character, age, voice and appearance); and (3) metaphors of non-verbal communication, which is further divided into four subthemes: providing assistance, individualisation, dropping hints and promoting interaction. Learning about students' non-verbal communication experiences in the clinical setting allowed us to understand their use of non-verbal communication and sensitivity, as well as areas that may need further improvement. The experiences and perceptions revealed by the nursing students could prompt nurses to reconsider the effects of the different factors suggested in this study. The results might also help students and nurses to identify and reflect on gaps in their skills, leading them to rethink, train and pay more attention to their non-verbal communication style and sensitivity. © 2013 John Wiley & Sons Ltd.

  11. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  12. An executable model of the interaction between verbal and non-verbal communication.

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations. The generic

  13. An Executable Model of the Interaction between Verbal and Non-Verbal Communication

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.; Dignum, F.; Greaves, M.

    2000-01-01

    In this paper an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The model has been formalised by three-levelled partial temporal models, covering both the material and mental processes and their relations. The generic

  14. MODELO DE COMUNICACIÓN NO VERBAL EN DEPORTE Y BALLET NON-VERBAL COMMUNICATION MODELS IN SPORTS AND BALLET

    Directory of Open Access Journals (Sweden)

    Gloria Vallejo

    2010-12-01

    Full Text Available This study analyzes the communication model generated among professional soccer coaches, artistic gymnastics coaches, and folkloric ballet instructors, taking as a reference the dynamic body language typical of the specialized communication of athletes and dancers, in which non-verbal language is evident. Non-verbal language was studied in both psychomotor and sociomotor practices in order to identify and characterize relations between different concepts and their corresponding gestural representation. This made it possible to generate a communication model that takes into account the non-verbal aspects of specialized communicative contexts. The results indicate that the non-verbal language of coaches and instructors occasionally takes the place of verbal language when the latter proves insufficient or inappropriate for describing a highly precise motor action, owing to distance or acoustic interference. Among ballet instructors, a widespread way of directing rehearsals was found that uses rhythmic counts with the hands or feet. Likewise, the paralinguistic components of the various speech acts stand out, especially with regard to intonation, duration, and intensity.

  15. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    Science.gov (United States)

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

    Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, ), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.
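Burstiness, as used in this line of work, is commonly quantified with the coefficient of Goh and Barabási (2008) computed over inter-event intervals. A minimal sketch follows; the function name is ours, and this is the generic measure, not necessarily the exact estimator used in the study:

```python
import numpy as np

def burstiness(event_times):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu) of the
    inter-event intervals (Goh & Barabasi, 2008).
    B -> 1 for highly bursty event trains, B = 0 for a Poisson process,
    and B -> -1 for perfectly regular trains."""
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

# A perfectly regular train has identical intervals, so sigma = 0 and B = -1.
print(burstiness([0, 1, 2, 3, 4]))  # prints: -1.0
```

Applied per channel (speech, gesture, gaze), higher B for verbal than non-verbal channels is the kind of "temporal heterogeneity" the abstract reports.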

  16. Non-verbal communication barriers when dealing with Saudi sellers

    Directory of Open Access Journals (Sweden)

    Yosra Missaoui

    2015-12-01

    Full Text Available Communication has a major impact on how customers perceive sellers and their organizations. In particular, non-verbal communication such as body language, appearance, facial expressions, gestures, proximity, posture, and eye contact can positively or negatively influence customers' first impressions and their experiences in stores. Salespeople in many countries, especially developing ones, merely talk about their companies' products because they are unaware of the real role of sellers and the importance of non-verbal communication. In Saudi Arabia, the seller profession was held exclusively by foreign labor until 2006; only recently has the Saudi workforce entered the retail sector as sellers. The non-verbal communication of these sellers has never been evaluated from the consumer's point of view. Therefore, the aim of this paper is to explore the non-verbal communication barriers that customers face when dealing with Saudi sellers. After discussing the non-verbal communication skills that sellers must have, in light of previous academic research and in-depth interviews with seven focus groups of Saudi customers, this study found that Saudi customers were not entirely satisfied with the current non-verbal communication skills of Saudi sellers. It is therefore strongly recommended to develop the non-verbal communication skills of Saudi sellers through intensive training, to make sellers' appearance more distinctive, especially for female sellers, and to focus on the timing of intervention as well as proximity to customers.

  17. Patients' perceptions of GP non-verbal communication: a qualitative study.

    Science.gov (United States)

    Marcinowicz, Ludmila; Konstantynowicz, Jerzy; Godlewski, Cezary

    2010-02-01

    During doctor-patient interactions, many messages are transmitted without words, through non-verbal communication. To elucidate the types of non-verbal behaviours perceived by patients interacting with family GPs and to determine which cues are perceived most frequently. In-depth interviews with patients of family GPs. Nine family practices in different regions of Poland. At each practice site, interviews were performed with four patients who were scheduled consecutively to see their family doctor. Twenty-four of 36 studied patients spontaneously perceived non-verbal behaviours of the family GP during patient-doctor encounters. They reported a total of 48 non-verbal cues. The most frequent features were tone of voice, eye contact, and facial expressions. Less frequent were examination room characteristics, touch, interpersonal distance, GP clothing, gestures, and posture. Non-verbal communication is an important factor by which patients spontaneously describe and evaluate their interactions with a GP. Family GPs should be trained to better understand and monitor their own non-verbal behaviours towards patients.

  18. Getting the Message Across; Non-Verbal Communication in the Classroom.

    Science.gov (United States)

    Levy, Jack

    This handbook presents selected theories, activities, and resources which can be utilized by educators in the area of non-verbal communication. Particular attention is given to the use of non-verbal communication in a cross-cultural context. Categories of non-verbal communication such as proxemics, haptics, kinesics, smiling, sound, clothing, and…

  19. A Meta-study of musicians' non-verbal interaction

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer; Marchetti, Emanuela

    2010-01-01

    interruptions. Hence, despite the fact that the skill to engage in a non-verbal interaction is described as tacit knowledge, it is fundamental for both musicians and teachers (Davidson and Good 2002). Typical observed non-verbal cues are for example: physical gestures, modulations of sound, steady eye contact...

  20. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    Science.gov (United States)

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  1. Non-Verbal Communication in Children with Visual Impairment

    Science.gov (United States)

    Mallineni, Sharmila; Nutheti, Rishita; Thangadurai, Shanimole; Thangadurai, Puspha

    2006-01-01

    The aim of this study was to determine: (a) whether children with visual and additional impairments show any non-verbal behaviors, and if so what were the common behaviors; (b) whether two rehabilitation professionals interpreted the non-verbal behaviors similarly; and (c) whether a speech pathologist and a rehabilitation professional interpreted…

  2. Virtual Chironomia: A Multimodal Study of Verbal and Non-Verbal Communication in a Virtual World

    Science.gov (United States)

    Verhulsdonck, Gustav

    2010-01-01

    This mixed methods study examined the various aspects of multimodal use of non-verbal communication in virtual worlds during dyadic negotiations. Quantitative analysis uncovered a treatment effect whereby people with more rhetorical certainty used more neutral non-verbal communication; whereas people that were rhetorically less certain used more…

  3. A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    OpenAIRE

    Mavridis, Nikolaos

    2014-01-01

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking...

  4. Prosody Predicts Contest Outcome in Non-Verbal Dialogs.

    Science.gov (United States)

    Dreiss, Amélie N; Chatelain, Philippe G; Roulin, Alexandre; Richner, Heinz

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.

  5. Guidelines for Teaching Non-Verbal Communications Through Visual Media

    Science.gov (United States)

    Kundu, Mahima Ranjan

    1976-01-01

    There is a natural unique relationship between non-verbal communication and visual media such as television and film. Visual media will have to be used extensively--almost exclusively--in teaching non-verbal communications, as well as other methods requiring special teaching skills. (Author/ER)

  6. Non-verbal communication in meetings of psychiatrists and patients with schizophrenia.

    Science.gov (United States)

    Lavelle, M; Dimic, S; Wildgrube, C; McCabe, R; Priebe, S

    2015-03-01

    Recent evidence found that patients with schizophrenia display non-verbal behaviour designed to avoid social engagement during the opening moments of their meetings with psychiatrists. This study aimed to replicate, and build on, this finding, assessing the non-verbal behaviour of patients and psychiatrists during meetings, exploring changes over time and associations with patients' symptoms and the quality of the therapeutic relationship. Forty videotaped routine out-patient consultations involving patients with schizophrenia were analysed. Non-verbal behaviour of patients and psychiatrists was assessed during three fixed, 2-min intervals using a modified Ethological Coding System for Interviews. Symptoms, satisfaction with communication and the quality of the therapeutic relationship were also measured. Over time, patients' non-verbal behaviour remained stable, whilst psychiatrists' flight behaviour decreased. Patients formed two groups based on their non-verbal profiles: one group (n = 25) displaying pro-social behaviour, inviting interaction, and a second (n = 15) displaying flight behaviour, avoiding interaction. Psychiatrists interacting with pro-social patients themselves displayed significantly more pro-social behaviours and reported significantly greater satisfaction with communication. Patients' non-verbal behaviour during routine psychiatric consultations thus remains unchanged over time, and is linked to both their psychiatrist's non-verbal behaviour and the quality of the therapeutic relationship. © 2014 The Authors. Acta Psychiatrica Scandinavica Published by John Wiley & Sons Ltd.

  7. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    OpenAIRE

    Anna S. Kobysheva; Viktoria A. Nakaeva

    2017-01-01

    The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  8. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    Full Text Available It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims to reinforce this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech, and (iii) non-altered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  9. Cross-cultural features of gestures in non-verbal communication

    Directory of Open Access Journals (Sweden)

    Chebotariova N. A.

    2017-09-01

    Full Text Available This article is devoted to the analysis of the concept of non-verbal communication and the ways of expressing it. Gesticulation is studied in detail, as it is the main element of non-verbal communication and has different characteristics in various countries of the world.

  10. From SOLER to SURETY for effective non-verbal communication.

    Science.gov (United States)

    Stickley, Theodore

    2011-11-01

    This paper critiques the model for non-verbal communication referred to as SOLER (which stands for: "Sit squarely"; "Open posture"; "Lean towards the other"; "Eye contact; "Relax"). It has been approximately thirty years since Egan (1975) introduced his acronym SOLER as an aid for teaching and learning about non-verbal communication. There is evidence that the SOLER framework has been widely used in nurse education with little published critical appraisal. A new acronym that might be appropriate for non-verbal communication skills training and education is proposed and this is SURETY (which stands for "Sit at an angle"; "Uncross legs and arms"; "Relax"; "Eye contact"; "Touch"; "Your intuition"). The proposed model advances the SOLER model by including the use of touch and the importance of individual intuition is emphasised. The model encourages student nurse educators to also think about therapeutic space when they teach skills of non-verbal communication. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Toward a digitally mediated, transgenerational negotiation of verbal and non-verbal concepts in daycare

    DEFF Research Database (Denmark)

    Chimirri, Niklas Alexander

    an adult researcher’s research problem and her/his conceptual knowledge of the child-adult-digital media interaction are able to do justice to what the children actually intend to communicate about their experiences and actions, both verbally and non-verbally, by and large remains little explored...

  12. The impact of the teachers' non-verbal communication on success in teaching.

    Science.gov (United States)

    Bambaeeroo, Fatemeh; Shokrpour, Nasrin

    2017-04-01

    Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of teachers' non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and teachers' use of non-verbal communication, and also its impact on success in teaching. In keeping with the review method, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. The results of this review revealed a strong relationship among the quality, amount and method of teachers' use of non-verbal communication while teaching. Based on the findings of the studies reviewed, the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students' academic progress were. Within non-verbal communication, other patterns were also observed; for example, emotive, team-work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students' learning and academic success. The studies reviewed also emphasized teachers' attention to students' non-verbal reactions and arranging the syllabus according to the students' mood and readiness. It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students' mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that they be guided toward the non-verbal message and pay more attention to non-verbal than verbal messages because non-verbal

  13. Symbiotic Relations of Verbal and Non-Verbal Components of Creolized Text on the Example of Stephen King’s Books Covers Analysis

    Directory of Open Access Journals (Sweden)

    Anna S. Kobysheva

    2017-12-01

    Full Text Available The article examines the symbiotic relationships between non-verbal and verbal components of the creolized text. The research focuses on the analysis of the correlation between verbal and visual elements of horror book covers based on three types of correlations between verbal and non-verbal text constituents, i.e. recurrent, additive and emphatic.

  14. The impact of the teachers’ non-verbal communication on success in teaching

    Directory of Open Access Journals (Sweden)

    FATEMEH BAMBAEEROO

    2017-04-01

    Full Text Available Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of teachers’ non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and teachers’ use of non-verbal communication, and also its impact on success in teaching. Methods: In keeping with the review method, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed a strong relationship among the quality, amount and method of teachers’ use of non-verbal communication while teaching. Based on the findings of the studies reviewed, the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Within non-verbal communication, other patterns were also observed; for example, emotive, team-work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students’ learning and academic success. The studies reviewed also emphasized teachers’ attention to students’ non-verbal reactions and arranging the syllabus according to the students’ mood and readiness. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that they be guided toward the non-verbal message

  15. The impact of the teachers’ non-verbal communication on success in teaching

    Science.gov (United States)

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of teachers’ non-verbal communication on success in teaching, using the findings of studies conducted on the relationship between quality of teaching and teachers’ use of non-verbal communication, and also its impact on success in teaching. Methods: In keeping with the review method, we searched for all articles in this field using key words such as success in teaching, verbal communication and non-verbal communication. In this study, we did not encode the articles. Results: The results of this review revealed a strong relationship among the quality, amount and method of teachers’ use of non-verbal communication while teaching. Based on the findings of the studies reviewed, the more the teachers used verbal and non-verbal communication, the more efficacious their education and the students’ academic progress were. Within non-verbal communication, other patterns were also observed; for example, emotive, team-work, supportive, imaginative, purposive, and balanced communication using speech, body, and pictures have all been effective in students’ learning and academic success. The studies reviewed also emphasized teachers’ attention to students’ non-verbal reactions and arranging the syllabus according to the students’ mood and readiness. Conclusion: It was concluded that if this skill is practiced by teachers, it will have a positive and profound effect on the students’ mood. Non-verbal communication is highly reliable in the communication process, so if the recipient of a message is faced with two contradictory verbal and non-verbal messages, logic dictates that they be guided toward the non-verbal message and asked to pay

  16. Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis.

    Science.gov (United States)

    Thompson, Ashley E; Voyer, Daniel

    2014-01-01

    The present study aimed to quantify the magnitude of sex differences in humans' ability to accurately recognise non-verbal emotional displays. Studies of relevance were those that required explicit labelling of discrete emotions presented in the visual and/or auditory modality. A final set of 551 effect sizes from 215 samples was included in a multilevel meta-analysis. The results showed a small overall advantage in favour of females on emotion recognition tasks (d=0.19). However, the magnitude of that sex difference was moderated by several factors, namely specific emotion, emotion type (negative, positive), sex of the actor, sensory modality (visual, audio, audio-visual) and age of the participants. Method of presentation (computer, slides, print, etc.), type of measurement (response time, accuracy) and year of publication did not significantly contribute to variance in effect sizes. These findings are discussed in the context of social and biological explanations of sex differences in emotion recognition.
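The overall advantage reported above (d = 0.19) is a standardised mean difference (Cohen's d). As an illustration only (the numbers below are hypothetical, not data from the meta-analysis), d for two groups can be computed from their means, standard deviations and sample sizes:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardised mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical accuracy on an emotion-recognition task (females vs. males)
d = cohens_d(mean_a=0.78, mean_b=0.74, sd_a=0.21, sd_b=0.21, n_a=50, n_b=50)
print(round(d, 2))  # prints 0.19
```

A meta-analysis then combines many such effect sizes, weighting each by its precision; here the single-study computation is shown only to make the reported magnitude concrete.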

  17. Incongruence between Verbal and Non-Verbal Information Enhances the Late Positive Potential.

    Science.gov (United States)

    Morioka, Shu; Osumi, Michihiro; Shiotani, Mayu; Nobusako, Satoshi; Maeoka, Hiroshi; Okada, Yohei; Hiyamizu, Makoto; Matsuo, Atsushi

    2016-01-01

    Smooth social communication consists of both verbal and non-verbal information. However, it is not clear how individuals judge the trustworthiness of people who present incongruent verbal and non-verbal information, or which brain activities accompany such judgments. In the present study, we attempted to identify the impact of incongruence between verbal information and facial expression on judged trustworthiness and on brain activity, using event-related potentials (ERP). Combinations of verbal information [positive/negative] and facial expressions [smile/angry] were presented randomly on a computer screen to 17 healthy volunteers. The trustworthiness of the presented face was evaluated by the amount of donation offered by the observer to the person depicted on the computer screen. In addition, the time required to judge trustworthiness was recorded for each trial. Using electroencephalography, ERPs were obtained by averaging the wave patterns recorded while the participants judged trustworthiness. The amount of donation offered was significantly lower when the verbal information and facial expression were incongruent, particularly for [negative × smile]. The amplitude of the early posterior negativity (EPN) at the temporal lobe showed no significant difference between conditions. However, the amplitude of the late positive potential (LPP) at the parietal electrodes was higher for the incongruent condition [negative × smile] than for the congruent condition [positive × smile]. These results suggest that the LPP amplitude observed over the parietal cortex is involved in the processing of incongruence between verbal information and facial expression.
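An ERP such as the LPP is obtained by averaging EEG epochs time-locked to stimulus onset, and its amplitude is typically quantified as the mean voltage in a late latency window at the relevant electrodes. A minimal sketch of that averaging-and-windowing step (the sampling rate, window, and amplitudes below are illustrative assumptions, not values from the study):

```python
def erp_average(epochs):
    """Average time-locked EEG epochs (list of trials, each a list of samples)."""
    n = len(epochs)
    return [sum(trial[t] for trial in epochs) / n
            for t in range(len(epochs[0]))]

def mean_amplitude(erp, srate, t_start, t_end):
    """Mean ERP amplitude in a latency window given in seconds (e.g. an LPP window)."""
    i0, i1 = int(t_start * srate), int(t_end * srate)
    window = erp[i0:i1]
    return sum(window) / len(window)

# Hypothetical data: 3 trials at 1000 Hz, a deflection between 400 and 700 ms
epochs = [[0.0] * 400 + [3.0] * 300 + [0.0] * 300,
          [0.0] * 400 + [5.0] * 300 + [0.0] * 300,
          [0.0] * 400 + [4.0] * 300 + [0.0] * 300]
erp = erp_average(epochs)
print(mean_amplitude(erp, srate=1000, t_start=0.4, t_end=0.7))  # prints 4.0
```

Condition effects such as the reported congruency difference are then comparisons of this windowed mean across trial types.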

  18. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    Science.gov (United States)

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  19. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
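The two quantities such studies derive, the window of temporal integration and the point of subjective simultaneity (PSS), are commonly estimated by fitting a bell-shaped curve to the proportion of "synchronous" responses across audiovisual offsets (SOAs). A rough sketch with made-up response proportions (the grid-search fit and the data below are illustrative assumptions, not the authors' procedure):

```python
import math

def fit_synchrony_curve(soas_ms, p_sync):
    """Grid-search a Gaussian p(soa) = exp(-(soa - mu)^2 / (2 * sigma^2)).
    mu estimates the PSS; sigma indexes the width of temporal integration."""
    best = None
    for mu in range(-100, 101, 5):
        for sigma in range(20, 301, 5):
            err = sum((p - math.exp(-(s - mu) ** 2 / (2 * sigma ** 2))) ** 2
                      for s, p in zip(soas_ms, p_sync))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, pss, sigma = best
    return pss, 2 * sigma  # PSS (ms) and a sigma-based integration window (ms)

# Hypothetical judgments: negative SOA = audio leads, positive = visual leads
soas = [-300, -200, -100, 0, 100, 200, 300]
props = [0.05, 0.30, 0.80, 0.95, 0.90, 0.55, 0.15]
pss, window = fit_synchrony_curve(soas, props)
```

With asymmetric data like these, the fitted PSS shifts toward visual-lead offsets, mirroring the common finding that observers tolerate visual-leading asynchrony better than audio-leading asynchrony.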

  20. Emotional expression in oral history narratives: comparing results of automated verbal and nonverbal analyses

    NARCIS (Netherlands)

    Truong, Khiet Phuong; Westerhof, Gerben Johan; Lamers, S.M.A.; de Jong, Franciska M.G.; Sools, A.

    Audiovisual collections of narratives about war-traumas are rich in descriptions of personal and emotional experiences which can be expressed through verbal and nonverbal means. We complement a commonly used verbal analysis with a nonverbal one to study emotional developments in narratives. Using

  1. Emotional expression in oral history narratives: comparing results of automated verbal and nonverbal analyses

    NARCIS (Netherlands)

    F.M.G. de Jong (Franciska); K.P. Truong (Khiet); G.J. Westerhof (Gerben); S.M.A. Lamers (Sanne); A. Sools (Anneke)

    2013-01-01

    textabstractAudiovisual collections of narratives about war-traumas are rich in descriptions of personal and emotional experiences which can be expressed through verbal and nonverbal means. We complement a commonly used verbal analysis with a nonverbal one to study emotional developments in

  2. Oncologists’ non-verbal behavior and analog patients’ recall of information

    NARCIS (Netherlands)

    Hillen, M.A.; de Haes, H.C.J.M.; van Tienhoven, G.; van Laarhoven, H.W.M.; van Weert, J.C.M.; Vermeulen, D.M.; Smets, E.M.A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist’s non-verbal communication. We tested the influence of three non-verbal behaviors,

  3. Oncologists' non-verbal behavior and analog patients' recall of information

    NARCIS (Netherlands)

    Hillen, Marij A.; de Haes, Hanneke C. J. M.; van Tienhoven, Geertjan; van Laarhoven, Hanneke W. M.; van Weert, Julia C. M.; Vermeulen, Daniëlle M.; Smets, Ellen M. A.

    2016-01-01

    Background Information in oncological consultations is often excessive. Those patients who better recall information are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors,

  4. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    Science.gov (United States)

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  5. Drama to promote non-verbal communication skills.

    Science.gov (United States)

    Kelly, Martina; Nixon, Lara; Broadfoot, Kirsten; Hofmeister, Marianna; Dornan, Tim

    2018-05-23

    Non-verbal communication skills (NVCS) help physicians to deliver relationship-centred care, and the effective use of NVCS is associated with improved patient satisfaction, better use of health services and high-quality clinical care. In contrast to verbal communication skills, NVCS training is underdeveloped in communication curricula for the health care professions. One of the challenges in teaching NVCS is their tacit nature. In this study, we evaluated drama exercises that raise awareness of NVCS by making familiar activities 'strange'. Workshops based on drama exercises were designed to heighten awareness of sight, hearing, touch and proxemics in non-verbal communication. These were conducted at eight medical education conferences, held between 2014 and 2016, and were open to all conference participants. Workshops were evaluated by recording narrative data generated during the workshops and through an open-ended questionnaire following each workshop. Data were analysed qualitatively, using thematic analysis. One hundred and twelve participants attended the workshops, 73 (65%) of whom completed an evaluation form: 56 physicians, nine medical students and eight non-physician faculty staff. Two themes were described: an increased awareness of NVCS, and the importance of NVCS in relationship building. Drama exercises enabled participants to experience NVCS, such as sight, sound, proxemics and touch, in novel ways. Participants reflected on how NVCS contribute to developing trust and building relationships in clinical practice. Drama-based exercises elucidate the tacit nature of NVCS and require further evaluation in formal educational settings. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  6. Linguistic analysis of verbal and non-verbal communication in the operating room.

    Science.gov (United States)

    Moore, Alison; Butt, David; Ellis-Clarke, Jodie; Cartmill, John

    2010-12-01

    Surgery can be a triumph of co-operation, the procedure evolving as a result of joint action between multiple participants. The communication that mediates the joint action of surgery is conveyed by verbal but particularly by non-verbal signals. Competing priorities superimposed by surgical learning must also be negotiated within this context and this paper draws on techniques of systemic functional linguistics to observe and analyse the flow of information during such a phase of surgery. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.

  7. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  8. Comparative Analysis of Verbal and Non-Verbal Mental Activity Components Regarding the Young People with Different Intellectual Levels

    Directory of Open Access Journals (Sweden)

    Y. M. Revenko

    2013-01-01

    Full Text Available The paper maintains that, in order to develop educational programs and technologies adequate to the different stages of students’ growth and maturity, there is a need to explore the natural determinants of intellectual development as well as the students’ individual qualities affecting the cognition process. The authors investigate differences in the manifestation of intellect with reference to gender, and analyze the correlations between verbal and non-verbal components in male and female students’ mental activity depending on their general intellectual potential. The research, carried out at the Siberian State Automobile Road Academy and focused on first-year students, demonstrates the absence of gender differences in students’ general intellect levels; there are, however, other regularities: male students of different intellectual levels show the same correlation coefficient between verbal and non-verbal intellect, while female students show this correlation only at the high intellect level. In conclusion, the authors emphasize the need for an integral approach to raising students’ mental abilities, considering the close interrelation between verbal and non-verbal component development. Teaching materials should stimulate different mental qualities by differentiating the educational process to develop students’ individual abilities.

  9. Non-verbal numerical cognition: from reals to integers.

    Science.gov (United States)

    Gallistel; Gelman

    2000-02-01

    Data on numerical processing by verbal (human) and non-verbal (animal and human) subjects are integrated by the hypothesis that a non-verbal counting process represents discrete (countable) quantities by means of magnitudes with scalar variability. These appear to be identical to the magnitudes that represent continuous (uncountable) quantities such as duration. The magnitudes representing countable quantity are generated by a discrete incrementing process, which defines next magnitudes and yields a discrete ordering. In the case of continuous quantities, the continuous accumulation process does not define next magnitudes, so the ordering is also continuous ('dense'). The magnitudes representing both countable and uncountable quantity are arithmetically combined in, for example, the computation of the income to be expected from a foraging patch. Thus, on the hypothesis presented here, the primitive machinery for arithmetic processing works with real numbers (magnitudes).
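The accumulator account summarised above, in which each count is represented by a noisy magnitude whose variability grows in proportion to its mean, can be illustrated with a small simulation; the Weber fraction and the counts below are arbitrary choices, not values from the paper:

```python
import random

def mental_magnitude(n, weber=0.15, rng=random):
    """Noisy magnitude for count n: the SD grows in proportion to n
    (scalar variability), as in the accumulator model."""
    return rng.gauss(n, weber * n)

def discriminate(a, b, trials=10000, seed=1):
    """Proportion of trials on which the larger count yields the larger
    magnitude; accuracy depends on the ratio b/a, i.e. Weber's law."""
    rng = random.Random(seed)
    hits = sum(mental_magnitude(b, rng=rng) > mental_magnitude(a, rng=rng)
               for _ in range(trials))
    return hits / trials

# Equal ratios give equal discriminability: 8 vs 16 is as easy as 16 vs 32,
# while 8 vs 9 (ratio close to 1) is much harder.
print(discriminate(8, 16), discriminate(16, 32), discriminate(8, 9))
```

The ratio signature produced by this noise scheme is what distinguishes scalar variability from fixed-noise counting, and it holds for both the discrete (countable) and continuous (uncountable) magnitudes the hypothesis posits.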

  10. Non-verbal Communication in a Neonatal Intensive Care Unit: A Video Audit Using Non-verbal Immediacy Scale (NIS-O).

    Science.gov (United States)

    Nimbalkar, Somashekhar Marutirao; Raval, Himalaya; Bansal, Satvik Chaitanya; Pandya, Utkarsh; Pathak, Ajay

    2018-05-03

    Effective communication with parents is a very important skill for pediatricians, especially in a neonatal setup. The authors analyzed the non-verbal communication of medical caregivers during counseling sessions. Recorded videos of counseling sessions from March-April 2016 were audited, and counseling episodes were scored using the Non-verbal Immediacy Scale Observer Report (NIS-O). A total of 150 videos of counseling sessions were audited. The mean (SD) total score on the NIS-O was 78.96 (7.07). Female counseled sessions had a significantly higher proportion of low scores (p communication skills in a neonatal unit. This study lays down a template on which other neonatal intensive care units (NICUs) can carry out gap-defining audits.

  11. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  12. The impact of the teachers’ non-verbal communication on success in teaching

    OpenAIRE

    BAMBAEEROO, FATEMEH; SHOKRPOUR, NASRIN

    2017-01-01

    Introduction: Non-verbal communication skills, also called sign language or silent language, include all behaviors performed in the presence of others or perceived either consciously or unconsciously. The main aim of this review article was to determine the effect of the teachers’ non-verbal communication on success in teaching using the findings of the studies conducted on the relationship between quality of teaching and the teachers’ use of non-verbal communication and ...

  13. The impact of culture and education on non-verbal neuropsychological measurements: a critical review.

    Science.gov (United States)

    Rosselli, Mónica; Ardila, Alfredo

    2003-08-01

    Clinical neuropsychology has frequently considered visuospatial and non-verbal tests to be culturally and educationally fair or at least fairer than verbal tests. This paper reviews the cross-cultural differences in performance on visuoperceptual and visuoconstructional ability tasks and analyzes the impact of education and culture on non-verbal neuropsychological measurements. This paper compares: (1) non-verbal test performance among groups with different educational levels, and the same cultural background (inter-education intra-culture comparison); (2) the test performance among groups with the same educational level and different cultural backgrounds (intra-education inter-culture comparisons). Several studies have demonstrated a strong association between educational level and performance on common non-verbal neuropsychological tests. When neuropsychological test performance in different cultural groups is compared, significant differences are evident. Performance on non-verbal tests such as copying figures, drawing maps or listening to tones can be significantly influenced by the individual's culture. Arguments against the use of some current neuropsychological non-verbal instruments, procedures, and norms in the assessment of diverse educational and cultural groups are discussed and possible solutions to this problem are presented.

  14. Context, culture and (non-verbal) communication affect handover quality.

    Science.gov (United States)

    Frankel, Richard M; Flanagan, Mindy; Ebright, Patricia; Bergman, Alicia; O'Brien, Colleen M; Franks, Zamal; Allen, Andrew; Harris, Angela; Saleem, Jason J

    2012-12-01

    Transfers of care, also known as handovers, remain a substantial patient safety risk. Although research on handovers has been done since the 1980s, the science is incomplete. Surprisingly few interventions have been rigorously evaluated and, of those that have, few have resulted in long-term positive change. Researchers, both in medicine and other high reliability industries, agree that face-to-face handovers are the most reliable. It is not clear, however, what the term face-to-face means in actual practice. We studied the use of non-verbal behaviours, including gesture, posture, bodily orientation, facial expression, eye contact and physical distance, in the delivery of information during face-to-face handovers. To address this question and study the role of non-verbal behaviour on the quality and accuracy of handovers, we videotaped 52 nursing, medicine and surgery handovers covering 238 patients. Videotapes were analysed using immersion/crystallisation methods of qualitative data analysis. A team of six researchers met weekly for 18 months to view videos together using a consensus-building approach. Consensus was achieved on verbal, non-verbal, and physical themes and patterns observed in the data. We observed four patterns of non-verbal behaviour (NVB) during handovers: (1) joint focus of attention; (2) 'the poker hand'; (3) parallel play and (4) kerbside consultation. In terms of safety, joint focus of attention was deemed to have the best potential for high quality and reliability; however, it occurred infrequently, creating opportunities for education and improvement. Attention to patterns of NVB in face-to-face handovers coupled with education and practice can improve quality and reliability.

  15. Trauma team leaders' non-verbal communication: video registration during trauma team training.

    Science.gov (United States)

    Härgestam, Maria; Hultin, Magnus; Brulin, Christine; Jacobsson, Maritha

    2016-03-25

    There is widespread consensus on the importance of safe and secure communication in healthcare, especially in trauma care where time is a limiting factor. Although non-verbal communication has an impact on communication between individuals, there is only limited knowledge of how trauma team leaders communicate. The purpose of this study was to investigate how trauma team members are positioned in the emergency room, and how leaders communicate in terms of gaze direction, vocal nuances, and gestures during trauma team training. Eighteen trauma teams were audio and video recorded during trauma team training in the emergency department of a hospital in northern Sweden. Quantitative content analysis was used to categorize the team members' positions and the leaders' non-verbal communication: gaze direction, vocal nuances, and gestures. The quantitative data were interpreted in relation to the specific context. Time sequences of the leaders' gaze direction, speech time, and gestures were identified separately and registered as time (seconds) and proportions (%) of the total training time. The team leaders who gained control over the most important area in the emergency room, the "inner circle", positioned themselves as heads over the team, using gaze direction, gestures, vocal nuances, and verbal commands that solidified their verbal message. Changes in position required both attention and collaboration. Leaders who spoke in a hesitant voice, or were silent, expressed ambiguity in their non-verbal communication: and other team members took over the leader's tasks. In teams where the leader had control over the inner circle, the members seemed to have an awareness of each other's roles and tasks, knowing when in time and where in space these tasks needed to be executed. Deviations in the leaders' communication increased the ambiguity in the communication, which had consequences for the teamwork. Communication cannot be taken for granted; it needs to be practiced

  16. Audiovisual semantic congruency during encoding enhances memory performance.

    Science.gov (United States)

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, an incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in improved later recognition memory.

  17. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    Full Text Available This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media have progressively gained greater prominence. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language, and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  18. Long-term music training modulates the recalibration of audiovisual simultaneity.

    Science.gov (United States)

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether this prolonged recalibration process for passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. We asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation to two fixed audiovisual asynchronies. We found that recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed with both increased music training and increased perceptual accuracy (i.e. the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  19. Cross-cultural Differences of Stereotypes about Non-verbal Communication of Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2011-09-01

    Full Text Available The article deals with peculiarities of non-verbal communication as a factor in cross-cultural interaction and the adaptation of representatives of different cultures. The possibility of studying ethnic stereotypes concerning non-verbal communication is considered. The results of empirical research on stereotypes about the non-verbal communication of Russian and Chinese students are presented.

  20. Verbal and non-verbal semantic impairment: From fluent primary progressive aphasia to semantic dementia

    Directory of Open Access Journals (Sweden)

    Mirna Lie Hosogi Senaha

    Full Text Available Abstract Selective disturbances of semantic memory have attracted the interest of many investigators, and the question of whether there are single or multiple semantic systems remains very controversial in the literature. Objectives: To discuss the question of multiple semantic systems based on a longitudinal study of a patient who progressed from fluent primary progressive aphasia to semantic dementia. Methods: A 66-year-old woman with selective impairment of semantic memory was examined on two occasions, undergoing neuropsychological and language evaluations, the results of which were compared to those of three paired control individuals. Results: In the first evaluation, physical examination was normal and the score on the Mini-Mental State Examination was 26. Language evaluation revealed fluent speech, anomia, disturbance in word comprehension, and preservation of the syntactic and phonological aspects of language, as well as surface dyslexia and dysgraphia. Autobiographical and episodic memories were relatively preserved. In semantic memory tests, the following dissociation was found: disturbance of verbal semantic memory with preservation of non-verbal semantic memory. Magnetic resonance imaging of the brain revealed marked atrophy of the left anterior temporal lobe. After 14 months, the difficulties in verbal semantic memory had become more severe, and the semantic disturbance, initially limited to the linguistic sphere, had worsened to involve non-verbal domains. Conclusions: Given the dissociation found in the first examination, we believe there is sufficient clinical evidence to refute the existence of a unitary semantic system.

  1. Physical growth and non-verbal intelligence: Associations in Zambia

    Science.gov (United States)

    Hein, Sascha; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2014-01-01

    Objectives To investigate normative developmental BMI trajectories and associations of physical growth indicators (i.e., height, weight, head circumference [HC], and body mass index [BMI]) with non-verbal intelligence in an understudied population of children from Sub-Saharan Africa. Study design A sample of 3981 students (50.8% male), grades 3 to 7, with a mean age of 12.75 years, was recruited from 34 rural Zambian schools. Children with low scores on vision and hearing screenings were excluded. Height, weight, and HC were measured, and non-verbal intelligence was assessed using UNIT-symbolic memory and KABC-II-triangles. Results Students in higher grades had a higher BMI over and above the effect of age. Girls showed a marginally higher BMI, although BMI for both boys and girls was approximately 1 SD below the international CDC and WHO norms. Controlling for the effect of age, non-verbal intelligence showed small but significant positive relationships with HC (r = .17) and BMI (r = .11). HC and BMI accounted for 1.9% of the variance in non-verbal intelligence, over and above the contribution of grade and sex. Conclusions BMI-for-age growth curves of Zambian children follow observed worldwide developmental trajectories. The positive relationships between BMI and intelligence underscore the importance of providing adequate nutrition and physical growth opportunities for children worldwide, and in sub-Saharan Africa in particular. Directions for future studies are discussed with regard to maximizing the cognitive potential of all rural African children. PMID:25217196

  2. FROM VERBAL TO AUDIOVISUAL MEDIUM: THE CASE OF THE CINEMATIC ADAPTATION OF R. L. STEVENSON’S NOVEL “THE WRONG BOX” Part I

    Directory of Open Access Journals (Sweden)

    Jadvyga Krūminienė

    2018-04-01

    Full Text Available The paper analyzes the narrational shifts between verbal and audiovisual media on the basis of R. L. Stevenson’s novel “The Wrong Box” (1889) and its cinematic adaptation under the same title by Bryan Forbes (1966). The authors approach adaptation as a complex phenomenon shaped by the creative tension between preserving fidelity to the source literary text and striving for filmic originality. Like novels, movies represent an act and art of narration, but they use different narrative strategies. In film narratives, deep focus, the length and scale of the shots, editing, montage, lighting, sound design, music, the human voice, etc. accompany the verbal medium. Modelled after literature, movies demonstrate specific construal narrative components that are combined into coherent cinematic sequences. When transferring R. L. Stevenson’s novel from the fictional medium into the cinematic medium, Forbes organises the relations of the narrative elements on an intertextual level, thus fostering new expressive means. This practice allows the cinematic narrator to be projected as a complex construct, one that may also be perceived as a speaking persona through an inventive use of intertitles. In fact, the adaptor is caught up in a farcical narrational game, provoking the viewer to actively participate in it.

  3. Culture and Social Relationship as Factors of Affecting Communicative Non-Verbal Behaviors

    DEFF Research Database (Denmark)

    Lipi, Afia Akhter; Nakano, Yukiko; Rehm, Matthias

    2010-01-01

    The goal of this paper is to build a bridge between social relationship and cultural variation in order to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes...

  4. Using Audiovisual TV Interviews to Create Visible Authors that Reduce the Learning Gap between Native and Non-Native Language Speakers

    Science.gov (United States)

    Inglese, Terry; Mayer, Richard E.; Rigotti, Francesca

    2007-01-01

    Can archives of audiovisual TV interviews be used to make authors more visible to students, and thereby reduce the learning gap between native and non-native language speakers in college classes? We examined students in a college course who learned about one scholar's ideas through watching an audiovisual TV interview (i.e., visible author format)…

  5. Non-Verbal Communication Training: An Avenue for University Professionalizing Programs?

    Science.gov (United States)

    Gazaille, Mariane

    2011-01-01

    In accordance with today's workplace expectations, many university programs identify the ability to communicate as a crucial asset for future professionals. Yet, if the teaching of verbal communication is clearly identifiable in most university programs, the same cannot be said of non-verbal communication (NVC). Knowing the importance of the…

  6. Effects of proactive interference on non-verbal working memory.

    Science.gov (United States)

    Cyr, Marilyn; Nee, Derek E; Nelson, Eric; Senger, Thea; Jonides, John; Malapani, Chara

    2017-02-01

    Working memory (WM) is a cognitive system responsible for actively maintaining and processing relevant information and is central to successful cognition. A process critical to WM is the resolution of proactive interference (PI), which involves suppressing memory intrusions from prior memories that are no longer relevant. Most studies that have examined resistance to PI in a process-pure fashion used verbal material. By contrast, studies using non-verbal material are scarce, and it remains unclear whether the effect of PI is domain-general or whether it applies solely to the verbal domain. The aim of the present study was to examine the effect of PI in visual WM using objects with both high and low nameability. Using a Directed-Forgetting paradigm, we varied discriminability between WM items on two dimensions, one verbal (high-nameability vs. low-nameability objects) and one perceptual (colored vs. gray objects). As in previous studies using verbal material, effects of PI were found with object stimuli, even after controlling for verbal labels being used (i.e., the low-nameability condition). We also found that the addition of distinctive features (color, verbal label) improved performance in rejecting intrusion probes, most likely through an increase in discriminability between content-context bindings in WM.

  7. Measuring Verbal and Non-Verbal Communication in Aphasia: Reliability, Validity, and Sensitivity to Change of the Scenario Test

    Science.gov (United States)

    van der Meulen, Ineke; van de Sandt-Koenderman, W. Mieke E.; Duivenvoorden, Hugo J.; Ribbers, Gerard M.

    2010-01-01

    Background: This study explores the psychometric qualities of the Scenario Test, a new test to assess daily-life communication in severe aphasia. The test is innovative in that it: (1) examines the effectiveness of verbal and non-verbal communication; and (2) assesses patients' communication in an interactive setting, with a supportive…

  8. Phenomenology of non-verbal communication as a representation of sports activities

    Directory of Open Access Journals (Sweden)

    Liubov Karpets

    2018-04-01

    Full Text Available In professional sports activity, the priority "language" is non-verbal communication such as body language. Purpose: to delineate the main aspects of non-verbal communication as a representation of sports activities. Material & Methods: the study involved members of sports teams and individual athletes, in particular from the following sports: basketball, handball, volleyball, football, hockey, and bodybuilding. Results: the research revealed that in sports activities non-verbal communication such as gestures, facial expressions, and physique is pervasive, and, as a consequence, the position "everything is language" (Lyotard) is embodied. Conclusions: non-verbal communication is one of the most significant forms of communication in sports. Additional means of communication through the "language" of the body help athletes achieve self-realization and self-determination.

  9. Non-verbal communication in severe aphasia: influence of aphasia, apraxia, or semantic processing?

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2012-09-01

    Patients suffering from severe aphasia have to rely on non-verbal means of communication to convey a message. However, to date it is not clear which patients are able to do so. Clinical experience indicates that some patients use non-verbal communication strategies like gesturing very efficiently, whereas others fail to transmit semantic content by non-verbal means. Concerns have been expressed that limb apraxia would affect the production of communicative gestures. Research investigating if and how apraxia influences the production of communicative gestures has led to contradictory outcomes. The purpose of this study was to investigate the impact of limb apraxia on spontaneous gesturing. Further, linguistic and non-verbal semantic processing abilities were explored as potential factors that might influence non-verbal expression in aphasic patients. Twenty-four aphasic patients with highly limited verbal output were asked to retell short video-clips. The narrations were videotaped. Gestural communication was analyzed in two ways. In the first part of the study, we used a form-based approach. Physiological and kinetic aspects of hand movements were transcribed with a notation system for sign languages. We determined the formal diversity of the hand gestures as an indicator of the potential richness of the transmitted information. In the second part of the study, the comprehensibility of the patients' gestural communication was evaluated by naive raters. The raters were familiarized with the model video-clips and shown the recordings of the patients' retelling without sound. They were asked to indicate, for each narration, which story was being told and which aspects of the stories they recognized. The results indicate that non-verbal faculties are the most important prerequisites for the production of hand gestures. Whereas results on standardized aphasia testing did not correlate with any gestural indices, non-verbal semantic processing abilities predicted the formal diversity

  10. Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors

    Science.gov (United States)

    Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias

    The goal of this paper is to build a bridge between social relationship and cultural variation to predict conversants' non-verbal behaviors. This idea serves as the basis for establishing a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of an agent's non-verbal behaviors in HAI. As a first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social factors with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.

  11. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  12. Persistent non-verbal memory impairment in remitted major depression - caused by encoding deficits?

    Science.gov (United States)

    Behnken, Andreas; Schöning, Sonja; Gerss, Joachim; Konrad, Carsten; de Jong-Meyer, Renate; Zwanzger, Peter; Arolt, Volker

    2010-04-01

    While neuropsychological impairments are well described in acute phases of major depressive disorder (MDD), little is known about the neuropsychological profile in remission. There is evidence for episodic memory impairments in both acutely depressed and remitted patients with MDD. Learning and memory depend on individuals' ability to organize information during learning. This study investigates non-verbal memory functions in remitted MDD and whether non-verbal memory performance is mediated by organizational strategies whilst learning. 30 well-characterized, fully remitted individuals with unipolar MDD and 30 healthy controls matched in age, sex, and education were investigated. Non-verbal learning and memory were measured by the Rey-Osterrieth Complex Figure Test (RCFT). The RCFT provides measures of planning, organizational skills, and perceptual and non-verbal memory functions. For assessing the mediating effects of organizational strategies, we used the Savage Organizational Score. Compared to healthy controls, participants with remitted MDD showed greater deficits in non-verbal memory function. Moreover, participants with remitted MDD demonstrated difficulties in organizing non-verbal information appropriately during learning. In contrast, no impairments in visual-spatial functions were observed in remitted MDD. Except for one patient, all participants were taking psychopharmacological medication. Neuropsychological function was investigated solely in the remitted phase of MDD. Individuals with MDD in remission showed persistent non-verbal memory impairments, modulated by deficient use of organizational strategies during encoding. Therefore, our results strongly argue for additional therapeutic interventions in order to improve these remaining deficits in cognitive function. Copyright 2009 Elsevier B.V. All rights reserved.

  13. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  14. Anatomical Correlates of Non-Verbal Perception in Dementia Patients

    Directory of Open Access Journals (Sweden)

    Pin-Hsuan Lin

    2016-08-01

    Full Text Available Purpose: Patients with dementia who have dissociations in verbal and non-verbal sound processing may offer insights into the anatomic basis for highly related auditory modes. Methods: To determine the neuronal networks underlying non-verbal perception, 16 patients with Alzheimer’s dementia (AD), 15 with behavioral variant fronto-temporal dementia (bv-FTD), and 14 with semantic dementia (SD) were evaluated and compared with 15 age-matched controls. Neuropsychological and auditory perceptual tasks were included to test the ability to compare pitch changes and scale-violated melodies, and to name environmental sounds and associate them with pictures. Brain 3D T1 images were acquired, and voxel-based morphometry (VBM) was used to compare and correlate the volumetric measures with task scores. Results: The SD group scored the lowest among the 3 groups in the pitch and scale-violated melody tasks. In the environmental sound test, the SD group also showed impairment in naming and in associating sounds with pictures. The AD and bv-FTD groups, compared with the controls, showed no differences in any test. VBM with task-score correlation showed that atrophy in the right supra-marginal and superior temporal gyri was strongly related to deficits in detecting violated scales, while atrophy in the bilateral anterior temporal poles and left medial temporal structures was related to deficits in environmental sound recognition. Conclusions: Auditory perception of pitch, scale-violated melody, and environmental sound reflects anatomical degeneration in dementia patients, and the processing of non-verbal sounds is mediated by distinct neural circuits.

  15. Negative Symptoms and Avoidance of Social Interaction: A Study of Non-Verbal Behaviour.

    Science.gov (United States)

    Worswick, Elizabeth; Dimic, Sara; Wildgrube, Christiane; Priebe, Stefan

    2018-01-01

    Non-verbal behaviour is fundamental to social interaction. Patients with schizophrenia display an expressivity deficit of non-verbal behaviour, exhibiting behaviour that differs from both healthy subjects and patients with different psychiatric diagnoses. The present study aimed to explore the association between non-verbal behaviour and symptom domains, overcoming methodological shortcomings of previous studies. Standardised interviews with 63 outpatients diagnosed with schizophrenia were videotaped. Symptoms were assessed using the Clinical Assessment Interview for Negative Symptoms (CAINS), the Positive and Negative Syndrome Scale (PANSS) and the Calgary Depression Scale. Independent raters later analysed the videos for non-verbal behaviour, using a modified version of the Ethological Coding System for Interviews (ECSI). Patients with a higher level of negative symptoms displayed significantly fewer prosocial (e.g., nodding and smiling), gesture, and displacement behaviours (e.g., fumbling), but significantly more flight behaviours (e.g., looking away, freezing). No gender differences were found, and these associations held true when adjusted for antipsychotic medication dosage. Negative symptoms are associated with both a lower level of actively engaging non-verbal behaviour and an increased active avoidance of social contact. Future research should aim to identify the mechanisms behind flight behaviour, with implications for the development of treatments to improve social functioning. © 2017 S. Karger AG, Basel.

  16. Improviser non verbalement pour l’apprentissage de la langue parlée

    Directory of Open Access Journals (Sweden)

    Francine Chaîné

    2015-04-01

    Full Text Available A reflective text on the practice of improvisation in a school setting as a means of learning spoken language. One might assume that verbal improvisation is the means par excellence for language learning, but experience has led us to discover the richness of non-verbal improvisation, followed by speaking about the practice, as a privileged approach. The article is illustrated with a non-verbal improvisation workshop intended for children or adolescents.

  17. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment.

    Science.gov (United States)

    de Sousa Paiva, Simone; Galvão, Marli Teresinha Gimeniz; Pagliuca, Lorita Marlena Freitag; de Almeida, Paulo César

    2010-01-01

    Non-verbal communication is predominant in the mother-child relationship. This study aimed to analyze non-verbal mother-child communication in the context of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, who observed aspects of non-verbal communication such as: paralanguage, kinesics, distance, visual contact, tone of voice, and maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted the inference that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother's infection can be a determining factor in the formation of mothers' strong attachment to their children after birth.

  18. Oncologists' non-verbal behavior and analog patients' recall of information.

    Science.gov (United States)

    Hillen, Marij A; de Haes, Hanneke C J M; van Tienhoven, Geertjan; van Laarhoven, Hanneke W M; van Weert, Julia C M; Vermeulen, Daniëlle M; Smets, Ellen M A

    2016-06-01

    Background: Information in oncological consultations is often excessive. Patients who better recall information are more satisfied, less anxious, and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture, and smiling, on patients' recall of information and perceived friendliness of the oncologist. Moreover, the influence of patient characteristics on recall was examined, both directly and as a moderator of non-verbal communication. Material and methods: Non-verbal communication of an oncologist was experimentally varied using video vignettes. In total, 194 breast cancer patients/survivors and healthy women participated as 'analog patients', viewing a randomly selected video version while imagining themselves in the role of the patient. Directly after viewing, they evaluated the oncologist. Between 24 and 48 hours later, participants' passive recall (i.e. recognition) and free recall of information provided by the oncologist were assessed. Results: Participants' recognition was higher if the oncologist maintained more consistent eye contact (β = 0.17). More eye contact and smiling led to a perception of the oncologist as more friendly. Body posture and smiling did not significantly influence recall. Older age predicted significantly worse recognition (β = -0.28) and free recall (β = -0.34) of information. Conclusion: Oncologists may be able to facilitate their patients' recall through consistent eye contact. This seems particularly relevant for older patients, whose recall is significantly worse. These findings can be used in training focused on how to maintain eye contact while managing computer tasks.

  19. Parents and Physiotherapists Recognition of Non-Verbal Communication of Pain in Individuals with Cerebral Palsy.

    Science.gov (United States)

    Riquelme, Inmaculada; Pades Jiménez, Antonia; Montoya, Pedro

    2017-08-29

    Pain assessment is difficult in individuals with cerebral palsy (CP). This is of particular relevance in children with communication difficulties, when non-verbal pain behaviors could be essential for appropriate pain recognition. Parents are considered good proxies in the recognition of pain in their children; however, health professionals also need a good understanding of their patients' pain experience. This study aims at analyzing the agreement between parents' and physiotherapists' assessments of verbal and non-verbal pain behaviors in individuals with CP. A written survey about pain characteristics and non-verbal pain expression of 96 persons with CP (45 classified as communicative, and 51 as non-communicative individuals) was performed. Parents and physiotherapists displayed a high agreement in their estimations of the presence of chronic pain, healthcare seeking, pain intensity and pain interference, as well as in non-verbal pain behaviors. Physiotherapists and parents can recognize pain behaviors in individuals with CP regardless of communication disabilities.

  20. Non-verbal communication between primary care physicians and older patients: how does race matter?

    Science.gov (United States)

    Stepanikova, Irena; Zhang, Qian; Wieland, Darryl; Eleazer, G Paul; Stewart, Thomas

    2012-05-01

    Non-verbal communication is an important aspect of the diagnostic and therapeutic process, especially with older patients. It is unknown how non-verbal communication varies with physician and patient race. To examine the joint influence of physician race and patient race on non-verbal communication displayed by primary care physicians during medical interviews with patients 65 years or older. Video-recordings of visits of 209 patients 65 years old or older to 30 primary care physicians at three clinics located in the Midwest and Southwest. Duration of physicians' open body position, eye contact, smile, and non-task touch, coded using an adaptation of the Nonverbal Communication in Doctor-Elderly Patient Transactions form. African American physicians with African American patients used more open body position, smile, and touch, compared to the average across other dyads (adjusted mean difference for open body position = 16.55). Race thus shapes non-verbal communication with older patients, and its influence is best understood when physician race and patient race are considered jointly.

  1. Shall we use non-verbal fluency in schizophrenia? A pilot study.

    Science.gov (United States)

    Rinaldi, Romina; Trappeniers, Julie; Lefebvre, Laurent

    2014-05-30

    Over the last few years, numerous studies have attempted to explain fluency impairments in people with schizophrenia, leading to heterogeneous results. This could notably be due to the fact that fluency is often used in its verbal form, where semantic dimensions are implied. In order to gain an in-depth understanding of fluency deficits, a non-verbal fluency task - the Five-Point Test (5PT) - was administered to 24 patients with schizophrenia and to 24 healthy subjects matched for age, gender and education. The 5PT involves producing as many abstract figures as possible within 1 min by connecting points with straight lines. All subjects also completed the Frontal Assessment Battery (FAB), while those with schizophrenia were further assessed using the Positive and Negative Syndrome Scale (PANSS). Results show that the 5PT evaluation differentiates patients from healthy subjects with regard to the number of figures produced. Patients' results also suggest that the number of figures produced is linked to "overall executive functioning" and to some inhibition components. Although this study is a first step in the non-verbal fluency research field, we believe that experimental psychopathology could benefit from investigations of non-verbal fluency. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Full Text Available Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or when high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  3. The role of non-verbal behaviour in racial disparities in health care: implications and solutions.

    Science.gov (United States)

    Levine, Cynthia S; Ambady, Nalini

    2013-09-01

    People from racial minority backgrounds report less trust in their doctors and have poorer health outcomes. Although these deficiencies have multiple roots, one important set of explanations involves racial bias, which may be non-conscious, on the part of providers, and minority patients' fears that they will be treated in a biased way. Here, we focus on one mechanism by which this bias may be communicated and reinforced: namely, non-verbal behaviour in the doctor-patient interaction. We review 2 lines of research on race and non-verbal behaviour: (i) the ways in which a patient's race can influence a doctor's non-verbal behaviour toward the patient, and (ii) the relative difficulty that doctors can have in accurately understanding the non-verbal communication of non-White patients. Further, we review research on the implications that both lines of work can have for the doctor-patient relationship and the patient's health. The research we review suggests that White doctors interacting with minority group patients are likely to behave and respond in ways that are associated with worse health outcomes. As doctors' disengaged non-verbal behaviour towards minority group patients and lower ability to read minority group patients' non-verbal behaviours may contribute to racial disparities in patients' satisfaction and health outcomes, solutions that target non-verbal behaviour may be effective. A number of strategies for such targeting are discussed. © 2013 John Wiley & Sons Ltd.

  4. Language, Power, Multilingual and Non-Verbal Multicultural Communication

    NARCIS (Netherlands)

    Marácz, L.; Zhuravleva, E.A.

    2014-01-01

    Due to developments in internal migration and mobility there is a proliferation of linguistic diversity, multilingual and non-verbal multicultural communication. At the same time the recognition of the use of one’s first language receives more and more support in international political, legal and

  5. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviewed the subjects following one of two scripted scenarios: in one scenario the actor showed minimal engagement with the subject; the second scenario included active listening by the doctor and attentiveness to the subject. We analyze the cross-correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
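    The core measurement in this record (frame-differenced kinetic energy and its lagged cross-correlation across the dyad) can be sketched briefly. This is an illustrative reconstruction, not the authors' tool: the function names, the pixel-difference energy estimate, and the lag convention are assumptions.

    ```python
    import numpy as np

    def motion_energy(frames):
        # frames: (T, H, W) grayscale video; approximate kinetic energy per
        # frame transition as the summed squared pixel difference.
        diffs = np.diff(frames.astype(float), axis=0)
        return (diffs ** 2).sum(axis=(1, 2))

    def lagged_correlation(a, b, max_lag):
        # Pearson-style correlation of two equal-length energy series at each
        # lag; a peak at a positive lag means `a` leads `b` (followership).
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        n = len(a)
        out = {}
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                x, y = a[:n - lag] if lag else a, b[lag:]
            else:
                x, y = a[-lag:], b[:n + lag]
            out[lag] = float(np.mean(x * y))
        return out
    ```

    The lag of the correlation peak then indicates which member of the dyad tends to move first, and the peak height indicates overall synchrony.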

  6. Concurrent Unimodal Learning Enhances Multisensory Responses of Bi-Directional Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied as a non-holonomic robotic agent that must orient itself towards a moving audio-visual target. The circuit continuously learns the best...

  7. Non-verbal behaviour in nurse-elderly patient communication.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.

    1999-01-01

    This study explores the occurrence of non-verbal communication in nurse-elderly patient interaction in two different care settings: home nursing and a home for the elderly. In a sample of 181 nursing encounters involving 47 nurses, a study was made of videotaped nurse-patient communication. Six

  8. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  9. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  10. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of an audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of audiovisual synchrony perception) on the speech signal after observation of speech stimuli that had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., the proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to response strategy, using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to a constant lag in both direct and indirect measurements. Our results suggest that temporal recalibration occurs not only for non-speech signals but also for monosyllabic speech at the perceptual level.

  11. EXPLICITATION AND ADDITION TECHNIQUES IN AUDIOVISUAL TRANSLATION: A MULTIMODAL APPROACH OF ENGLISH-INDONESIAN SUBTITLES

    Directory of Open Access Journals (Sweden)

    Ichwan Suyudi

    2017-12-01

    Full Text Available In audiovisual translation, the multimodality of the audiovisual text is both a challenge and a resource for subtitlers. This paper illustrates how multiple modes provide information that helps subtitlers to gain a better understanding of meaning-making practices, which in turn informs their decision-making when translating a given verbal text. Subtitlers may explicitate, add to, and condense the texts based on the multiple modes seen in the visual frames. Subtitlers have to consider the distribution and integration of the meanings of these modes in order to create comprehensive equivalence between the source and target texts. Excerpts of visual frames in this paper are taken from the English films Forrest Gump (drama, 1996) and James Bond (thriller, 2010).

  12. [Non-verbal communication of patients submitted to heart surgery: from awaking after anesthesia to extubation].

    Science.gov (United States)

    Werlang, Sueli da Cruz; Azzolin, Karina; Moraes, Maria Antonieta; de Souza, Emiliane Nogueira

    2008-12-01

    Preoperative orientation is an essential tool for patient communication after surgery. This study had the objective of evaluating non-verbal communication of patients submitted to cardiac surgery from the time of awaking from anesthesia until extubation, after having received preoperative orientation by nurses. A quantitative cross-sectional study was developed in a reference hospital of the state of Rio Grande do Sul, Brazil, from March to July 2006. Data were collected in the pre- and postoperative periods. A questionnaire to evaluate non-verbal communication on awaking from sedation was applied to a sample of 100 patients. Statistical analysis included Student's t, Wilcoxon, and Mann-Whitney tests. Most of the patients responded satisfactorily to non-verbal communication strategies as instructed in the preoperative orientation. Thus, non-verbal communication based on preoperative orientation was helpful during the awaking period.

  13. Non-verbal Persuasion and Communication in an Affective Agent

    DEFF Research Database (Denmark)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining the critical role of non-verbal behaviour during face-to-face communication. In this chapter we restrict the discussion to body language. We also consider embodied virtual agents. As is the case with humans, there are a number of fundamental factors to be considered when constructing persuasive agents......

  14. Videotutoring, Non-Verbal Communication and Initial Teacher Training.

    Science.gov (United States)

    Nichol, Jon; Watson, Kate

    2000-01-01

    Describes the use of video tutoring for distance education within the context of a post-graduate teacher training course at the University of Exeter. Analysis of the tapes used a protocol based on non-verbal communication research, and findings suggest that the interaction of participants was significantly different from face-to-face…

  15. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  16. Copyright for audiovisual work and analysis of websites offering audiovisual works

    OpenAIRE

    Chrastecká, Nicolle

    2014-01-01

    This Bachelor's thesis deals with the matter of audiovisual piracy. It argues that audiovisual piracy is caused not by a wrong interpretation of the law but by a lack of competitiveness among websites offering legal audiovisual content. The thesis questions the quality of the prevailing legal interpretation in matters of audiovisual piracy and asks whether it is sufficient. It analyses the responsibility of website providers, of providers of the illegal content, the responsibility of illegal cont...

  17. The Enjoyment of the Audiovisual Experience by Deaf and Hearing-Impaired People: Visual Representation of Sound Effects

    OpenAIRE

    Tsaousi, Aikaterini

    2017-01-01

    The aim of the present study is to examine the narrative experience of viewers who use subtitles for the deaf and hard of hearing when consuming audiovisual products. Specifically, the study seeks to determine whether enjoyment, and some of its main components, differs depending on whether verbal or non-verbal on-screen representations are used for the sound effects that accompany the audiovisual narrative. A total of 46 people were assigned...

  18. Herramienta observacional para el estudio de conductas violentas en un cómic audiovisual

    Directory of Open Access Journals (Sweden)

    Zaida Márquez

    2012-01-01

    Full Text Available This research paper presents a study aimed at structuring a system of categories for the observation and description of violent behavior in an audiovisual children's program, specifically in cartoons. An audiovisual cartoon with three main female characters was selected, and one of its chapters was chosen at random for observation. Categories were established using the taxonomic criteria proposed by Anguera (2001), typifying the behaviors that make up each category according to response levels. To identify a stable behavioral pattern, event sampling was carried out, using all occurrences of one or several behaviors registered in the observed sessions. The episode was analyzed by two observers who viewed the material simultaneously, making two observations, registering the relevant data and contrasting opinions. The researchers determined a set of categories expressing violent behavior: non-verbal behavior, spatial behavior, and vocal/verbal behavior. It was concluded that there was a predominant and stable pattern of violent behavior in the cartoon observed.

  19. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  20. A influência da comunicação não verbal no cuidado de enfermagem La influencia de la comunicación no verbal en la atención de enfermería The influence of non-verbal communication in nursing care

    Directory of Open Access Journals (Sweden)

    Carla Cristina Viana Santos

    2005-08-01

    Nursing School Alfredo Pinto UNIRIO, and it started during the development of a monograph. The object of the study is the meaning of non-verbal communication from the perspective of nursing undergraduates. The study has the following objectives: to determine how non-verbal communication is understood by nursing undergraduates and to analyze how that understanding influences nursing care. The methodological approach was qualitative, with sensitivity dynamics applied as the strategy for data collection. It was observed that undergraduate students recognize the relevance and influence of non-verbal communication in nursing care; however, there is a need to deepen knowledge of the non-verbal communication process before implementing nursing care.

  1. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Full Text Available Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of synchrony measure for biometric identity verification based on talking faces is experimented on the BANCA database.
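    A minimal example of the kind of audio-visual correspondence measure this paper reviews might correlate an audio log-energy envelope with a per-frame visual feature. The sketch below is a generic illustration, not one of the reviewed front-ends; the function names, frame length, and the mouth-opening feature are assumptions.

    ```python
    import numpy as np

    def frame_log_energy(audio, sr, hop=0.04):
        # Log-energy of consecutive non-overlapping audio frames of
        # `hop` seconds each (a crude auditory envelope).
        n = int(sr * hop)
        trimmed = audio[: len(audio) // n * n].reshape(-1, n)
        return np.log((trimmed ** 2).sum(axis=1) + 1e-10)

    def av_synchrony(audio, sr, visual_feature, fps, hop=0.04):
        # Pearson correlation between the audio log-energy envelope and a
        # per-video-frame visual feature (e.g. an estimated mouth opening),
        # after interpolating the envelope to the video frame times.
        env = frame_log_energy(audio, sr, hop)
        t_audio = (np.arange(len(env)) + 0.5) * hop   # audio frame centers
        t_video = np.arange(len(visual_feature)) / fps
        env_v = np.interp(t_video, t_audio, env)
        a = env_v - env_v.mean()
        b = np.asarray(visual_feature, dtype=float)
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    ```

    A score near +1 indicates that the mouth feature tracks the speech envelope; for biometric verification one would compare such scores against those of deliberately desynchronized audio-video pairs.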

  2. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    Science.gov (United States)

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices, in LLI children after training.

  3. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  4. Heart rate variability during acute psychosocial stress: A randomized cross-over trial of verbal and non-verbal laboratory stressors.

    Science.gov (United States)

    Brugnera, Agostino; Zarbo, Cristina; Tarvainen, Mika P; Marchettini, Paolo; Adorni, Roberta; Compare, Angelo

    2018-05-01

    Acute psychosocial stress is typically investigated in laboratory settings using protocols with distinctive characteristics. For example, some tasks involve the action of speaking, which seems to alter Heart Rate Variability (HRV) through acute changes in respiration patterns. However, it is still unknown which task induces the strongest subjective and autonomic stress response. The present cross-over randomized trial sought to investigate differences in perceived stress and in linear and non-linear HRV indices across three stress tasks, two verbal (Speech and Stroop) and one non-verbal (Montreal Imaging Stress Task; MIST), in a sample of 60 healthy adults (51.7% females; mean age = 25.6 ± 3.83 years). Analyses were run controlling for respiration rates. Participants reported similar levels of perceived stress across the three tasks. However, the MIST induced a stronger cardiovascular response than the Speech and Stroop tasks, even after controlling for respiration rates. Finally, women reported higher levels of perceived stress and lower HRV both at rest and in response to acute psychosocial stressors, compared to men. Taken together, our results suggest the presence of gender-related differences in psychophysiological experiments on stress. They also suggest that verbal activity masked the vagal withdrawal through the altered respiration patterns imposed by speaking. Therefore, our findings support the use of a highly standardized math task, such as the MIST, as a valid and reliable alternative to verbal protocols in laboratory studies on stress. Copyright © 2018 Elsevier B.V. All rights reserved.
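    The linear and non-linear HRV analyses mentioned above rest on standard definitions that are easy to state in code. The sketch below computes SDNN and RMSSD (time-domain) and the Poincaré SD1/SD2 descriptors (a common non-linear pair); the function name and toy RR series are our own, and the sketch omits the respiration-rate control used in the study.

    ```python
    import numpy as np

    def hrv_metrics(rr_ms):
        # Time-domain (SDNN, RMSSD) and non-linear Poincaré (SD1, SD2)
        # HRV indices from a sequence of RR intervals in milliseconds.
        rr = np.asarray(rr_ms, dtype=float)
        d = np.diff(rr)                              # successive differences
        sdnn = rr.std(ddof=1)                        # overall variability
        rmssd = np.sqrt(np.mean(d ** 2))             # short-term, vagally mediated
        sd1 = np.sqrt(np.var(d, ddof=1) / 2.0)       # Poincaré minor axis
        # From SD1^2 + SD2^2 = 2 * SDNN^2 (clipped at 0 for degenerate series)
        sd2 = np.sqrt(max(2.0 * np.var(rr, ddof=1) - np.var(d, ddof=1) / 2.0, 0.0))
        return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}
    ```

    Vagal withdrawal under stress would show up here as reduced RMSSD and SD1 relative to a resting baseline.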

  5. Young Children's Understanding of Markedness in Non-Verbal Communication

    Science.gov (United States)

    Liebal, Kristin; Carpenter, Malinda; Tomasello, Michael

    2011-01-01

    Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions). We investigated whether two- and three-year-olds recognize when adults mark a non-verbal communicative act--in this case a pointing…

  6. Verbal-Visual Intertextuality: How do Multisemiotic Texts Dialogue?

    Directory of Open Access Journals (Sweden)

    Leonardo Mozdzenski

    2013-11-01

    Full Text Available The objective of this work is to understand how multisemiotic texts interact with each other to produce meanings, observing the complex intertextual relations among genres from various artistic and/or audiovisual fields. Therefore, I initially present a brief review of the literature on intertextuality, critically discussing how leading scholars address this issue. Then I argue that it is necessary to understand intertextuality in an integral and non-discretized way through a typological continuum of relationships between verbal-visual texts. Thus, I develop a model for understanding this phenomenon by means of a graph in which two continua intertwine: the representation of intertextuality through form (Implicitness/Explicitness) and function (Approach/Distance) of the quoted voice assumed in communicative situations. To test the model, four music video clips of the American singer Madonna were selected so we can verify how music video texts rely on other texts to build their discourses and evoked identities.

  7. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    This article develops a perceptual approach to audio-visual mapping. Clearly perceivable cause-and-effect relationships can be problematic if one wants the audience to experience the music: perception tends to privilege the sonic qualities that fit prior concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is how an audio-visual mapping can produce a sense of causation while simultaneously confounding the actual cause-effect relationships. We call this a fungible audio-visual mapping, and our aim here is to characterize its constitution and appearance. We report a study that draws upon methods from experimental psychology to inform audio-visual instrument design and composition. Participants were shown several audio-visual mapping prototypes, after which we posed quantitative and qualitative questions regarding their sense of causation and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

  8. Cuando el cuerpo comunica. Manual de la comunicación no verbal

    OpenAIRE

    GARCÍA ALCÁNTARA, ALBA

    2013-01-01

    The project consists of producing an audiovisual piece through which some of the keys to non-verbal communication in the field of seduction can be made widely known. It seeks to establish the importance of non-verbal communication and its nature, and to study and understand its meanings. In this way, I intend to convey, through the recorded images, the value of body communication and, as other studies have already done, to place the word ...

  9. Multi-level prediction of short-term outcome of depression : non-verbal interpersonal processes, cognitions and personality traits

    NARCIS (Netherlands)

    Geerts, E; Bouhuys, N

    1998-01-01

    It was hypothesized that personality factors determine the short-term outcome of depression, and that they may do this via non-verbal interpersonal interactions and via cognitive interpretations of non-verbal behaviour. Twenty-six hospitalized depressed patients entered the study. Personality

  10. Consistency between verbal and non-verbal affective cues: a clue to speaker credibility.

    Science.gov (United States)

    Gillis, Randall L; Nilsen, Elizabeth S

    2017-06-01

    Listeners are exposed to inconsistencies in communication; for example, when speakers' words (verbal cues) are discrepant with their demonstrated emotions (non-verbal cues). Such inconsistencies introduce ambiguity, which may render a speaker a less credible source of information. Two experiments examined whether children make credibility discriminations based on the consistency of speakers' affect cues. In Experiment 1, school-age children (7- to 8-year-olds) preferred to solicit information from consistent speakers (e.g. those who provided a negative statement with negative affect), over novel speakers, to a greater extent than they preferred to solicit information from inconsistent speakers (e.g. those who provided a negative statement with positive affect) over novel speakers. Preschoolers (4- to 5-year-olds) did not demonstrate this preference. Experiment 2 showed that school-age children's ratings of speakers were influenced by speakers' affect consistency when the attribute being judged was related to information acquisition (speakers' believability, "weird" speech), but not general characteristics (speakers' friendliness, likeability). Together, the findings suggest that school-age children are sensitive to, and use, the congruency of affect cues to determine whether individuals are credible sources of information.

  11. Individual Differences in Verbal and Non-Verbal Affective Responses to Smells: Influence of Odor Label Across Cultures.

    Science.gov (United States)

    Ferdenzi, Camille; Joussain, Pauline; Digard, Bérengère; Luneau, Lucie; Djordjevic, Jelena; Bensafi, Moustafa

    2017-01-01

    Olfactory perception is highly variable from one person to another, as a function of individual and contextual factors. Here, we investigated the influence of 2 important factors of variation: culture and semantic information. More specifically, we tested whether culture-specific knowledge and the presence versus absence of odor names modulate odor perception, by measuring these effects in 2 populations differing in cultural background but not in language. Participants from France and Quebec, Canada, smelled 4 culture-specific and 2 non-specific odorants in 2 conditions: first without a label, then with a label. Their ratings of pleasantness, familiarity, edibility, and intensity were collected, as well as their psychophysiological and olfactomotor responses. The results revealed significant effects of culture and semantic information at both the verbal and non-verbal levels. They also provided evidence that the availability of semantic information reduced cultural differences. Semantic information had a unifying action on olfactory perception that overrode the influence of cultural background. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how it can be used in specific (social, pedagogical, etc.) contexts and what its potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical

  13. Prevalence of inter-hemispheric asymmetry in children and adolescents with interdisciplinary diagnosis of non-verbal learning disorder.

    Science.gov (United States)

    Wajnsztejn, Alessandra Bernardes Caturani; Bianco, Bianca; Barbosa, Caio Parente

    2016-01-01

    To describe clinical and epidemiological features of children and adolescents with an interdisciplinary diagnosis of non-verbal learning disorder, and to investigate the prevalence of inter-hemispheric asymmetry in this population group. Cross-sectional study including children and adolescents referred for interdisciplinary assessment with learning difficulty complaints who were given an interdisciplinary diagnosis of non-verbal learning disorder. The following variables were included in the analysis: sex-related prevalence, educational system, initial presumptive diagnoses and their respective prevalence, overall non-verbal learning disorder prevalence, prevalence according to school year, age range at the time of assessment, major family complaints, presence of inter-hemispheric asymmetry, arithmetic deficits, visuoconstruction impairments, and major signs and symptoms of non-verbal learning disorder. Out of 810 medical records analyzed, 14 were from individuals who met the diagnostic criteria for non-verbal learning disorder, including the presence of inter-hemispheric asymmetry. Of these 14 patients, 8 were male. The high prevalence of inter-hemispheric asymmetry suggests this parameter can be used to predict or support the diagnosis of non-verbal learning disorder.

  14. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine (1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and (2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not at 11 months, detected the audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not for nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  15. “Communication by impact” and other forms of non-verbal ...

    African Journals Online (AJOL)

    This article aims to review the importance, place and especially the emotional impact of non-verbal communication in psychiatry. The paper argues that while biological psychiatry is in the ascendency with increasing discoveries being made about the functioning of the brain and psycho-pharmacology, it is important to try ...

  16. Exploring Children’s Peer Relationships through Verbal and Non-verbal Communication: A Qualitative Action Research Focused on Waldorf Pedagogy

    Directory of Open Access Journals (Sweden)

    Aida Milena Montenegro Mantilla

    2007-12-01

    This study analyzes the relationships that children around seven and eight years old establish in a classroom. It shows that peer relationships have a positive dimension with features such as the development of children’s creativity to communicate and modify norms. These features were found through an analysis of children’s verbal and non-verbal communication and an interdisciplinary view of children’s learning process from Rudolf Steiner, founder of Waldorf Pedagogy, and Jean Piaget and Lev Vygotsky, specialists in children’s cognitive and social dimensions. This research is an invitation to recognize children’s capacity to construct their own rules in peer relationships.

  17. Verbal lie detection

    NARCIS (Netherlands)

    Vrij, Aldert; Taylor, Paul J.; Picornell, Isabel; Oxburgh, Gavin; Myklebust, Trond; Grant, Tim; Milne, Rebecca

    2015-01-01

    In this chapter, we discuss verbal lie detection and argue that speech content can be revealing about deception. Starting with a section discussing what we regard as the myth that non-verbal behaviour is more revealing about deception than speech, we then provide an overview of verbal lie

  18. Linking social cognition with social interaction: Non-verbal expressivity, social competence and "mentalising" in patients with schizophrenia spectrum disorders

    Directory of Open Access Journals (Sweden)

    Lehmkämper Caroline

    2009-01-01

    Background: Research has shown that patients with schizophrenia spectrum disorders (SSD) can be distinguished from controls on the basis of their non-verbal expression. For example, patients with SSD use facial expressions less than controls do to invite and sustain social interaction. Here, we sought to examine whether non-verbal expressivity in patients corresponds with their impoverished social competence and neurocognition. Method: Fifty patients with SSD were videotaped during interviews. Non-verbal expressivity was evaluated using the Ethological Coding System for Interviews (ECSI). Social competence was measured using the Social Behaviour Scale, and psychopathology was rated using the Positive and Negative Symptom Scale. Neurocognitive variables included measures of IQ, executive functioning, and two mentalising tasks, which tapped the ability to appreciate the mental states of story characters. Results: Non-verbal expressivity was reduced in patients relative to controls. A lack of "prosocial" non-verbal signals was associated with poor social competence and, partially, with impaired understanding of others' minds, but not with non-social cognition or medication. Conclusion: This is the first study to link deficits in non-verbal expressivity to levels of social skills and awareness of others' thoughts and intentions in patients with SSD.

  19. Prediction-based Audiovisual Fusion for Classification of Non-Linguistic Vocalisations

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Prediction plays a key role in recent computational models of the brain and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train

  20. The Use of Virtual Characters to Assess and Train Non-Verbal Communication in High-Functioning Autism

    Science.gov (United States)

    Georgescu, Alexandra Livia; Kuzmanovic, Bojana; Roth, Daniel; Bente, Gary; Vogeley, Kai

    2014-01-01

    High-functioning autism (HFA) is a neurodevelopmental disorder, which is characterized by life-long socio-communicative impairments on the one hand and preserved verbal and general learning and memory abilities on the other. One of the areas where particular difficulties are observable is the understanding of non-verbal communication cues. Thus, investigating the underlying psychological processes and neural mechanisms of non-verbal communication in HFA allows a better understanding of this disorder, and potentially enables the development of more efficient forms of psychotherapy and trainings. However, the research on non-verbal information processing in HFA faces several methodological challenges. The use of virtual characters (VCs) helps to overcome such challenges by enabling an ecologically valid experience of social presence, and by providing an experimental platform that can be systematically and fully controlled. To make this field of research accessible to a broader audience, we elaborate in the first part of the review the validity of using VCs in non-verbal behavior research on HFA, and we review current relevant paradigms and findings from social-cognitive neuroscience. In the second part, we argue for the use of VCs as either agents or avatars in the context of “transformed social interactions.” This allows for the implementation of real-time social interaction in virtual experimental settings, which represents a more sensitive measure of socio-communicative impairments in HFA. Finally, we argue that VCs and environments are a valuable assistive, educational and therapeutic tool for HFA. PMID:25360098

  2. Individual differences in non-verbal number acuity correlate with maths achievement.

    Science.gov (United States)

    Halberda, Justin; Mazzocco, Michèle M M; Feigenson, Lisa

    2008-10-02

    Human mathematical competence emerges from two representational systems. Competence in some domains of mathematics, such as calculus, relies on symbolic representations that are unique to humans who have undergone explicit teaching. More basic numerical intuitions are supported by an evolutionarily ancient approximate number system that is shared by adults, infants and non-human animals-these groups can all represent the approximate number of items in visual or auditory arrays without verbally counting, and use this capacity to guide everyday behaviour such as foraging. Despite the widespread nature of the approximate number system both across species and across development, it is not known whether some individuals have a more precise non-verbal 'number sense' than others. Furthermore, the extent to which this system interfaces with the formal, symbolic maths abilities that humans acquire by explicit instruction remains unknown. Here we show that there are large individual differences in the non-verbal approximation abilities of 14-year-old children, and that these individual differences in the present correlate with children's past scores on standardized maths achievement tests, extending all the way back to kindergarten. Moreover, this correlation remains significant when controlling for individual differences in other cognitive and performance factors. Our results show that individual differences in achievement in school mathematics are related to individual differences in the acuity of an evolutionarily ancient, unlearned approximate number sense. Further research will determine whether early differences in number sense acuity affect later maths learning, whether maths education enhances number sense acuity, and the extent to which tertiary factors can affect both.
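    For readers curious about the statistics, the claim that a correlation "remains significant when controlling for" other variables is typically established with partial correlation: correlate the residuals of each variable after regressing out the covariate. A minimal sketch with invented data (all values are illustrative, not the study's):

```python
def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residualize(ys, zs):
    # Residuals of a simple least-squares regression of ys on zs
    n = len(ys)
    my, mz = sum(ys) / n, sum(zs) / n
    slope = (sum((z - mz) * (y - my) for y, z in zip(ys, zs))
             / sum((z - mz) ** 2 for z in zs))
    return [y - (my + slope * (z - mz)) for y, z in zip(ys, zs)]

def partial_corr(xs, ys, zs):
    # Correlation between xs and ys with the linear effect of zs removed
    return pearson(residualize(xs, zs), residualize(ys, zs))

# When the covariate is unrelated to both variables, the partial
# correlation reduces to the plain correlation
print(partial_corr([1, 2, 3, 4], [1, 2, 3, 4], [1, -1, -1, 1]))  # → 1.0
```

    With several covariates (IQ, working memory, and so on), the same idea is applied with multiple regression rather than a single-predictor fit.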

  3. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers ... that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech specific mode of audiovisual integration underlying the McGurk illusion.

  4. Presentación: Narrativas de no ficción audiovisual, interactiva y transmedia

    Directory of Open Access Journals (Sweden)

    Arnau Gifreu Castells

    2015-03-01

    Number 8 of Obra Digital Revista de Comunicación explores audiovisual, interactive and transmedia non-fiction narrative forms of expression. Throughout the history of communication, the field of non-fiction has always been regarded as lesser than its fictional namesake. This is also true in the field of research, where studies of audiovisual, interactive and transmedia fiction narratives have always been one step ahead of studies of non-fiction narratives. This monograph proposes a theoretical and practical approach to non-fiction narrative forms such as the documentary, reporting, the essay, educational formats and institutional films, in order to offer a picture of their current position in the media ecosystem. Keywords: Non-fiction, Audiovisual Narrative, Interactive Narrative, Transmedia Narrative.

  5. O potencial da imagem televisiva na sociedade da cultura audiovisual

    Directory of Open Access Journals (Sweden)

    Juliana L. M. F. Sabino

    Audiovisual culture has been steadily gaining ground, and technological advances have contributed dramatically to its development and reach. This study takes audiovisual culture as its theme and, as its research objective, discusses the importance of images on television. To that end, we selected an example of television advertising observed in 2006, which inspired a critical reflection on the importance of hybrid languages on television, illustrating their interference in the production of meaning in the televised message. As a theoretical and methodological framework, we use Lúcia Santaella's conceptions of image and hybrid languages. From the analysis of the advertisement proposed here, we conclude that its constitution is more iconic than verbal, but that it falls within a dialogical conception and is thus constituted through a creative process of meaning production.

  6. Presentation Trainer: a toolkit for learning non-verbal public speaking skills

    NARCIS (Netherlands)

    Schneider, Jan; Börner, Dirk; Van Rosmalen, Peter; Specht, Marcus

    2014-01-01

    The paper presents and outlines the demonstration of Presentation Trainer, a prototype that works as a public speaking instructor. It tracks and analyses the body posture, movements and voice of the user in order to give instructional feedback on non-verbal communication skills. Besides exploring

  7. Deaf children’s non-verbal working memory is impacted by their language experience

    Directory of Open Access Journals (Sweden)

    Chloe eMarshall

    2015-05-01

    Recent studies suggest that deaf children perform more poorly on working memory tasks compared to hearing children, but do not say whether this poorer performance arises directly from deafness itself or from deaf children’s reduced language exposure. The issue remains unresolved because findings come from (1) tasks that are verbal as opposed to non-verbal, and (2) deaf children who use spoken communication and therefore may have experienced impoverished input and delayed language acquisition. This is in contrast to deaf children who have been exposed to a sign language since birth by Deaf parents (and who therefore have native language-learning opportunities). A more direct test of how the type and quality of language exposure impacts working memory is to use measures of non-verbal working memory (NVWM) and to compare hearing children with two groups of deaf signing children: those who have had native exposure to a sign language, and those who have experienced delayed acquisition compared to their native-signing peers. In this study we investigated the relationship between NVWM and language in three groups aged 6-11 years: hearing children (n=27), deaf native users of British Sign Language (BSL; n=7), and deaf non-native signers (n=19). We administered a battery of non-verbal reasoning, NVWM, and language tasks. We examined whether the groups differed on NVWM scores, and whether language tasks predicted scores on NVWM tasks. For the two NVWM tasks, the non-native signers performed less accurately than the native-signer and hearing groups (who did not differ from one another). Multiple regression analysis revealed that the vocabulary measure predicted scores on NVWM tasks. Our results suggest that whatever the language modality – spoken or signed – rich language experience from birth, and the good language skills that result from this early age of acquisition, play a critical role in the development of NVWM and in performance on NVWM

  8. Non-verbal communication of compassion: measuring psychophysiologic effects.

    Science.gov (United States)

    Kemper, Kathi J; Shaltout, Hossam A

    2011-12-20

    Calm, compassionate clinicians comfort others. To evaluate the direct psychophysiologic benefits of non-verbal communication of compassion (NVCC), it is important to minimize the effect of subjects' expectations. This preliminary study was designed to (a) test the feasibility of two strategies for maintaining subject blinding to NVCC, and (b) determine whether blinded subjects would experience psychophysiologic effects from NVCC. Subjects were healthy volunteers who were told the study was evaluating the effect of time and touch on the autonomic nervous system. The practitioner had more than 10 years' experience with loving-kindness meditation (LKM), a form of NVCC. Subjects completed 10-point visual analog scales (VAS) for stress, relaxation, and peacefulness before and after LKM. To assess physiologic effects, the practitioner and subjects wore cardiorespiratory monitors recording respiratory rate (RR), heart rate (HR) and heart rate variability (HRV) throughout the four 10-minute study periods: Baseline (both practitioner and subjects read neutral material); non-tactile LKM (subjects read while the practitioner practiced LKM while pretending to read); tactile LKM (subjects rested while the practitioner practiced LKM while lightly touching the subject on arms, shoulders, hands, feet, and legs); Post-Intervention Rest (subjects rested; the practitioner read). To assess blinding, subjects were asked after the interventions what the practitioner was doing during each period (reading, touch, or something else). Subjects' mean age was 43.6 years; all were women. Blinding was maintained, and the practitioner was able to maintain meditation during both the tactile and non-tactile LKM interventions, as reflected in significantly reduced RR. Despite blinding, subjects' VAS scores improved from baseline to post-intervention for stress (5.5 vs. 2.2), relaxation (3.8 vs. 8.8) and peacefulness (3.8 vs. 9.0, P non-tactile LKM.
It is possible to test the

  9. Effect of interaction with clowns on vital signs and non-verbal communication of hospitalized children.

    Science.gov (United States)

    Alcântara, Pauline Lima; Wogel, Ariane Zonho; Rossi, Maria Isabela Lobo; Neves, Isabela Rodrigues; Sabates, Ana Llonch; Puggina, Ana Cláudia

    2016-12-01

    Compare the non-verbal communication of children before and during interaction with clowns, and compare their vital signs before and after this interaction. Uncontrolled, cross-sectional, quantitative intervention study with children admitted to a public university hospital. The intervention was performed by medical students dressed as clowns and included magic tricks, juggling, singing with the children, making soap bubbles and comedic performances. The intervention time was 20 minutes. Vital signs were assessed in two measurements with an interval of one minute immediately before and after the interaction. Non-verbal communication was observed before and during the interaction using the Non-Verbal Communication Template Chart, a tool in which non-verbal behaviors are assessed as effective or ineffective in the interactions. The sample consisted of 41 children with a mean age of 7.6±2.7 years; most were aged 7 to 11 years (n=23; 56%) and were male (n=26; 63.4%). There was a statistically significant difference in systolic and diastolic blood pressure, pain and non-verbal behavior of the children with the intervention. Systolic and diastolic blood pressure increased and pain scales showed decreased scores. Playful interaction with clowns can be a therapeutic resource to minimize the effects of the stressful environment during the intervention, improve the children's emotional state and reduce the perception of pain. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.
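    The pre/post design described here (two measurements on the same children) is the textbook setting for a paired t test. A hedged sketch of that statistic, with invented numbers that are not the study's data:

```python
def paired_t(before, after):
    """t statistic for paired before/after measurements; compare it
    against a t distribution with n-1 degrees of freedom."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = (var_d / n) ** 0.5  # standard error of the mean difference
    return mean_d / se

# Hypothetical systolic pressures (mmHg) for four children, pre and post
print(paired_t([100, 98, 102, 101], [101, 100, 105, 105]))  # ≈ 3.87
```

    A positive t here reflects an increase from before to after, consistent with the direction of the blood-pressure change the abstract reports; in practice one would use a library routine (e.g. scipy.stats.ttest_rel) to obtain the p-value as well.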

  10. [Non-verbal communication and executive function impairment after traumatic brain injury: a case report].

    Science.gov (United States)

    Sainson, C

    2007-05-01

    Following post-traumatic impairment in executive function, failure to adjust to communication situations often creates major obstacles to social and professional reintegration. The analysis of pathological verbal communication has been based on clinical scales since the 1980s, but the analysis of non-verbal elements has been neglected, although their importance should be acknowledged. The aim of this research was to study non-verbal aspects of communication in a case of executive-function impairment after traumatic brain injury. During the patient's conversation with an interlocutor, all non-verbal parameters - coverbal gestures, gaze, posture, proxemics and facial expressions - were studied in as ecological a way as possible, to closely approximate natural conversation conditions. Such an approach highlights the difficulties these patients experience in communicating, difficulties of a pragmatic kind that have so far been overlooked by traditional investigations, which mainly take into account the formal linguistic aspects of language. The analysis of the patient's conversation revealed non-verbal dysfunctions, not only on a pragmatic and interactional level but also in terms of enunciation. Moreover, interactional adjustment phenomena were noted in the interlocutor's behaviour. The two inseparable aspects of communication - verbal and non-verbal - should be equally assessed in patients with communication difficulties; highlighting distortions in each area might bring about an improvement in the rehabilitation of such people.

  11. Auditory Verbal Experience and Agency in Waking, Sleep Onset, REM, and Non-REM Sleep.

    Science.gov (United States)

    Speth, Jana; Harley, Trevor A; Speth, Clemens

    2017-04-01

    We present one of the first quantitative studies on auditory verbal experiences ("hearing voices") and auditory verbal agency (inner speech, and specifically "talking to (imaginary) voices or characters") in healthy participants across states of consciousness. Tools of quantitative linguistic analysis were used to measure participants' implicit knowledge of auditory verbal experiences (VE) and auditory verbal agencies (VA), displayed in mentation reports from four different states. Analysis was conducted on a total of 569 mentation reports from rapid eye movement (REM) sleep, non-REM sleep, sleep onset, and waking. Physiology was controlled with the Nightcap sleep-wake mentation monitoring system. Sleep-onset hallucinations, traditionally the focus of scientific attention on auditory verbal hallucinations, showed the lowest degree of VE and VA, whereas REM sleep showed the highest degrees. Degrees of different linguistic-pragmatic aspects of VE and VA likewise depend on the physiological states. The quantity and pragmatics of VE and VA are a function of the physiologically distinct state of consciousness in which they are conceived. Copyright © 2016 Cognitive Science Society, Inc.

  12. Maternal postpartum depressive symptoms predict delay in non-verbal communication in 14-month-old infants.

    Science.gov (United States)

    Kawai, Emiko; Takagai, Shu; Takei, Nori; Itoh, Hiroaki; Kanayama, Naohiro; Tsuchiya, Kenji J

    2017-02-01

    We investigated the potential relationship between maternal depressive symptoms during the postpartum period and non-verbal communication skills of infants at 14 months of age in a birth cohort study of 951 infants and assessed what factors may influence this association. Maternal depressive symptoms were measured using the Edinburgh Postnatal Depression Scale, and non-verbal communication skills were measured using the MacArthur-Bates Communicative Development Inventories, which include Early Gestures and Later Gestures domains. Infants whose mothers had a high level of depressive symptoms (13+ points) both during the first month postpartum and at 10 weeks were approximately 0.5 standard deviations below normal in Early Gestures scores and 0.5-0.7 standard deviations below normal in Later Gestures scores. These associations were independent of potential explanations, such as maternal depression/anxiety prior to birth, breastfeeding practices, and recent depressive symptoms among mothers. These findings indicate that infants whose mothers have postpartum depressive symptoms may be at increased risk of experiencing delay in non-verbal development. Copyright © 2016 Elsevier Inc. All rights reserved.
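    The reported delays are expressed in cohort standard deviations. As a hedged illustration of how such standardized scores might be computed (the cohort numbers, threshold placement, and helper name below are invented for illustration, not taken from the study):

```python
from statistics import mean, stdev

def gesture_z_scores(scores, cohort):
    """Express raw gesture scores as z-scores relative to a cohort.
    Hypothetical helper: values at or below -0.5 correspond to the
    'about 0.5 SD below normal' delay described in the abstract."""
    m, s = mean(cohort), stdev(cohort)
    return [(x - m) / s for x in scores]

# Invented cohort of Later Gestures raw scores, for illustration only
cohort = [30, 35, 40, 45, 50, 55, 60, 65, 70, 75]
z = gesture_z_scores([40], cohort)[0]   # below the cohort mean
delayed = z <= -0.5                     # flag a 0.5-SD-or-more delay
```

    The same standardization lets gesture scores from different domains (Early vs. Later Gestures) be compared on a common scale.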

  13. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8- to 10-month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native-language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  14. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of the...

  15. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.
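    The association step described above maps binaural spectral features onto the image plane and assigns them to visible persons. As a much-simplified, hypothetical stand-in for the paper's semi-supervised clustering (all names and coordinates below are invented), a nearest-face assignment over tracked positions illustrates the speech-to-person association problem:

```python
import math

def assign_speech_to_person(sound_xy, faces):
    """Assign a sound's image-plane location to the nearest tracked face.
    `faces` maps person id -> (x, y) from a visual tracker. This is a toy
    nearest-neighbour rule, not the paper's Bayesian fusion model."""
    return min(faces, key=lambda pid: math.dist(sound_xy, faces[pid]))

# Three tracked participants and one localized speech source (toy values)
faces = {"A": (120, 80), "B": (400, 90), "C": (260, 85)}
speaker = assign_speech_to_person((385, 95), faces)  # closest to "B"
```

    The full model replaces this hard assignment with probabilistic clustering, which is what allows it to handle simultaneous speakers in a principled way.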

  16. Randomised controlled trial of a brief intervention targeting predominantly non-verbal communication in general practice consultations.

    Science.gov (United States)

    Little, Paul; White, Peter; Kelly, Joanne; Everitt, Hazel; Mercer, Stewart

    2015-06-01

    The impact of changing non-verbal consultation behaviours is unknown. To assess brief physician training on improving predominantly non-verbal communication. Cluster randomised parallel group trial among adults aged ≥16 years attending general practices close to the study coordinating centres in Southampton. Sixteen GPs were randomised to no training, or training consisting of a brief presentation of behaviours identified from a prior study (acronym KEPe Warm: demonstrating Knowledge of the patient; Encouraging [back-channelling by saying 'hmm', for example]; Physically engaging [touch, gestures, slight lean]; Warm-up: cool/professional initially, warming up, and avoiding distancing or non-verbal cut-offs at the end of the consultation), plus encouragement to reflect on videos of their consultations. Outcomes were the Medical Interview Satisfaction Scale (MISS) mean item score (1-7) and patients' perceptions of other domains of communication. Intervention participants scored higher on the MISS overall (0.23, 95% confidence interval [CI] = 0.06 to 0.41), with the largest changes in the distress-relief and perceived relationship subscales. Significant improvement occurred in perceived communication/partnership (0.29, 95% CI = 0.09 to 0.49) and health promotion (0.26, 95% CI = 0.05 to 0.46). Non-significant improvements occurred in perceptions of a personal relationship, a positive approach, and understanding the effects of the illness on life. Brief training of GPs in predominantly non-verbal communication in the consultation, plus reflection on consultation videotapes, improves patients' perceptions of satisfaction, distress, a partnership approach, and health promotion. © British Journal of General Practice 2015.

  17. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to congruent audiovisual stimuli were significantly faster (by 57 ms) than reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.
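    Event-related desynchronization and synchronization are conventionally quantified as the percentage change in band power relative to a pre-stimulus baseline. The sketch below shows that generic measure (not the SAM beamformer contrast itself; the power values are arbitrary toy numbers):

```python
def erd_percent(power_active, power_baseline):
    """Classical ERD/ERS index: percentage band-power change relative to a
    pre-stimulus baseline. Negative values indicate desynchronization (a
    power drop); positive values indicate synchronization (a power rise)."""
    return 100.0 * (power_active - power_baseline) / power_baseline

# Toy alpha-band (8-16 Hz) power values in arbitrary units
baseline_power = 10.0
active_power = 7.0
erd = erd_percent(active_power, baseline_power)  # -30.0 -> desynchronization
```

    In a SAM-style analysis, such band-power changes are computed per voxel from beamformer output rather than from sensor data directly.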

  18. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  19. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  20. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Plantilla 1: El documento audiovisual: elementos importantes

    OpenAIRE

    Alemany, Dolores

    2011-01-01

    Concept of the audiovisual document and of audiovisual documentation, examining in depth the distinction between documentation of moving images (with the possible incorporation of sound) and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works and audiovisual heritage according to Félix del Valle.

  2. Exploring laterality and memory effects in the haptic discrimination of verbal and non-verbal shapes.

    Science.gov (United States)

    Stoycheva, Polina; Tiippana, Kaisa

    2018-03-14

    The brain's left hemisphere often displays advantages in processing verbal information, while the right hemisphere favours processing non-verbal information. In the haptic domain, owing to contralateral innervation, this functional lateralization is reflected in a hand advantage for certain functions. Findings regarding the hand-hemisphere advantage for haptic information remain contradictory, however. This study addressed these laterality effects and their interaction with memory retention times in the haptic modality. Participants performed haptic discrimination of letters, geometric shapes and nonsense shapes at memory retention times of 5, 15 and 30 s with the left and right hand separately, and we measured the discriminability index d'. The d' values were significantly higher for letters and geometric shapes than for nonsense shapes. This might result from dual coding (naming + spatial) and/or from low stimulus complexity. There was no stimulus-specific laterality effect. However, we found a time-dependent laterality effect, which revealed that the performance of the left hand-right hemisphere was sustained up to 15 s, while the performance of the right hand-left hemisphere decreased progressively throughout all retention times.
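    The discriminability index d' comes from signal detection theory. Assuming the standard formula (the study's exact correction procedure for extreme rates is not stated here), it is computed from hit and false-alarm rates via the inverse of the standard normal CDF:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection discriminability: d' = z(H) - z(F), where z is the
    inverse standard normal CDF. Rates must lie strictly between 0 and 1;
    extreme rates (0 or 1) are usually adjusted before this step."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

dp = d_prime(0.8, 0.2)  # ~1.68: clearly above-chance discrimination
```

    Higher d' for letters and geometric shapes than for nonsense shapes means hits and false alarms were better separated for the nameable stimuli, independent of response bias.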

  3. The Introduction of Non-Verbal Communication in Greek Education: A Literature Review

    Science.gov (United States)

    Stamatis, Panagiotis J.

    2012-01-01

    Introduction: The introductory part of this paper underlines the research interest of the educational community in the issue of non-verbal communication in education. The question of introducing this scientific field into Greek education is examined within the context of this research, which includes many aspects. Method: The paper essentially…

  4. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians had practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
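    One simple way to summarize synchrony judgments collected at multiple stimulus onset asynchronies is to estimate the width of the window over which observers report "synchronous" more than half the time. The sketch below uses linear interpolation of the 50% crossings on invented response proportions; it is an illustrative stand-in, not the study's analysis, which may have fitted psychometric functions instead:

```python
import numpy as np

def window_width(soas, p_sync, threshold=0.5):
    """Estimate the audiovisual temporal integration window as the SOA range
    (in ms) over which the proportion of 'synchronous' responses exceeds
    `threshold`, using linear interpolation between sampled SOAs."""
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_sync, dtype=float)
    fine = np.linspace(soas.min(), soas.max(), 10001)  # dense SOA grid
    p_fine = np.interp(fine, soas, p)                  # piecewise-linear curve
    above = fine[p_fine >= threshold]
    return above.max() - above.min()

# The study's 13 SOAs with toy 'synchronous' proportions (invented data)
soas = [-360, -300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300, 360]
p_sync = [0.05, 0.1, 0.2, 0.4, 0.7, 0.9, 0.95, 0.9, 0.7, 0.4, 0.2, 0.1, 0.05]
w = window_width(soas, p_sync)  # ~320 ms wide window for these toy data
```

    The narrower windows reported for musicians would show up directly as a smaller returned width.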

  5. Peculiarities of Stereotypes about Non-Verbal Communication and their Role in Cross-Cultural Interaction between Russian and Chinese Students

    Directory of Open Access Journals (Sweden)

    I A Novikova

    2012-12-01

    Full Text Available The article is devoted to the analysis of the peculiarities of the stereotypes about non-verbal communication, formed in Russian and Chinese cultures. The results of the experimental research of the role of ethnic auto- and heterostereotypes about non-verbal communication in cross-cultural interaction between Russian and Chinese students of the Peoples’ Friendship University of Russia are presented.

  6. Attention to affective audio-visual information: Comparison between musicians and non-musicians

    NARCIS (Netherlands)

    Weijkamp, J.; Sadakata, M.

    2017-01-01

    Individuals with more musical training repeatedly demonstrate enhanced auditory perception abilities. The current study examined how these enhanced auditory skills interact with attention to affective audio-visual stimuli. A total of 16 participants with more than 5 years of musical training

  7. Web-based audiovisual patient information system--a study of preoperative patient information in a neurosurgical department.

    Science.gov (United States)

    Gautschi, Oliver P; Stienen, Martin N; Hermann, Christel; Cadosch, Dieter; Fournier, Jean-Yves; Hildebrandt, Gerhard

    2010-08-01

    In the current climate of increasing awareness, patients are demanding more knowledge about forthcoming operations. Patient information accounts for a considerable part of the physician's daily clinical routine. Unfortunately, only a small percentage of the information is understood by the patient after solely verbal elucidation. To optimise information delivery, different auxiliary materials are used. In a prospective study, 52 consecutive inpatients scheduled for an elective lumbar disc operation were asked to use a web-based audiovisual patient information system. A combination of pictures, text, sound and video about the planned surgical intervention was installed on a tablet personal computer and presented the day before surgery. All patients were asked to complete a questionnaire. Eighty-four percent of all participants found that the audiovisual patient information system led to a better understanding of the forthcoming operation. Eighty-two percent found that the information system was a very helpful preparation before the pre-surgical interview with the surgeon. Ninety percent of all participants considered it meaningful to provide this kind of preoperative education also to patients planned to undergo other surgical interventions. Eighty-four percent were altogether "very content" with the audiovisual patient information system and 86% would recommend the system to others. This new approach to patient information had a positive impact on patient education, as is evident from high satisfaction scores. Because patient satisfaction with the informed consent process and understanding of the presented information improved substantially, the audiovisual patient information system clearly benefits both surgeons and patients.

  8. Near Real-Time Comprehension Classification with Artificial Neural Networks: Decoding e-Learner Non-Verbal Behavior

    Science.gov (United States)

    Holmes, Mike; Latham, Annabel; Crockett, Keeley; O'Shea, James D.

    2018-01-01

    Comprehension is an important cognitive state for learning. Human tutors recognize comprehension and non-comprehension states by interpreting learner non-verbal behavior (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffold in the context of perceived learner comprehension. Near real-time…
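    The record above describes classifying comprehension states from learner non-verbal behaviour (NVB) with neural networks. As a heavily simplified, hypothetical sketch of that idea (the features, data, and use of plain logistic regression in place of the paper's network are all assumptions for illustration):

```python
import numpy as np

# Toy comprehension vs. non-comprehension classification from two invented
# NVB features (e.g. gaze-on-screen fraction, head-movement rate). Logistic
# regression trained by gradient descent stands in for the paper's ANN.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 2))                 # synthetic NVB feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # separable toy labelling rule

w = np.zeros(2)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction
    w -= 0.5 * (X.T @ (p - y) / n)          # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on bias

acc = np.mean(((X @ w + b) > 0) == (y == 1))  # training accuracy
```

    A real system would replace the toy labels with annotated tutoring sessions and stream the features in near real time, as the title suggests.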

  9. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  10. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Mehrdad eSeirafi; Peter eDe Weerd; Alan J Pegna; Beatrice ede Gelder

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  11. Development of non-verbal intellectual capacity in school-age children with cerebral palsy

    NARCIS (Netherlands)

    Smits, D. W.; Ketelaar, M.; Gorter, J. W.; van Schie, P. E.; Becher, J. G.; Lindeman, E.; Jongmans, M. J.

    Background Children with cerebral palsy (CP) are at greater risk for a limited intellectual development than typically developing children. Little information is available which children with CP are most at risk. This study aimed to describe the development of non-verbal intellectual capacity of

  12. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs, but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation

  14. Relationship of Non-Verbal Intelligence Materials as Catalyst for Academic Achievement and Peaceful Co-Existence among Secondary School Students in Nigeria

    Science.gov (United States)

    Sambo, Aminu

    2015-01-01

    This paper examines students' performance in non-verbal intelligence tests relative to the academic achievement of some selected secondary school students. Two hypotheses were formulated to generate data for analysis. Two non-verbal intelligence tests, viz. Raven's Standard Progressive Matrices (SPM) and AH[subscript 4] Part II…

  15. Differential patterns of prefrontal MEG activation during verbal & visual encoding and retrieval.

    Science.gov (United States)

    Prendergast, Garreth; Limbrick-Oldfield, Eve; Ingamells, Ed; Gathercole, Susan; Baddeley, Alan; Green, Gary G R

    2013-01-01

    The spatiotemporal profile of activation of the prefrontal cortex in verbal and non-verbal recognition memory was examined using magnetoencephalography (MEG). Sixteen neurologically healthy right-handed participants were scanned whilst carrying out a modified version of the Doors and People Test of recognition memory. A pattern of significant prefrontal activity was found for non-verbal and verbal encoding and recognition. During encoding, verbal stimuli activated an area in the left ventromedial prefrontal cortex, and non-verbal stimuli activated an area in the right. A region in the left dorsolateral prefrontal cortex also showed significant activation during the encoding of non-verbal stimuli. Both verbal and non-verbal stimuli significantly activated an area in the right dorsomedial prefrontal cortex and the right anterior prefrontal cortex during successful recognition; however, these areas showed temporally distinct activation dependent on material, with non-verbal stimuli showing activation earlier than verbal stimuli. Additionally, non-verbal material activated an area in the left anterior prefrontal cortex during recognition. These findings suggest a material-specific laterality in the ventromedial prefrontal cortex during encoding of verbal and non-verbal material, but also support the HERA model for verbal material. The discovery of two process-dependent areas during recognition that showed patterns of temporal activation dependent on material demonstrates the need for the application of more temporally sensitive techniques to the involvement of the prefrontal cortex in recognition memory.

  16. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage permitted. 2.12 Section 2.12 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the Department...

  17. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models, and requirements for extension to support audiovisual files.

  18. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has recently been demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations, regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes multiple temporal recalibrations, by exposing observers to two utterances with opposing temporal relationships spoken by a single speaker rather than by two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimulus pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute, and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structure (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs differ physically, especially in temporal profile, the more likely multiple, content-constrained temporal perception adjustments are to occur regardless of spatial overlap. They indicate that multiple temporal recalibrations build on the outcome of perceptual grouping processes.
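    Recalibration effects of the kind described above are typically quantified as a shift of the point of subjective simultaneity (PSS) across tested stimulus onset asynchronies (SOAs). As a rough illustration only (the study's own analysis is not reproduced here, and `estimate_pss` is a hypothetical helper), the PSS can be approximated as the centroid of the simultaneity-response distribution, a simplification of the psychometric-function fits normally used:

    ```python
    def estimate_pss(soas, p_simultaneous):
        """Crude PSS estimate: centroid of the proportion of 'simultaneous'
        responses over tested SOAs (ms; negative = audio leads).

        A Gaussian psychometric fit is the standard method; the weighted
        mean below is only a quick approximation of its peak.
        """
        total = sum(p_simultaneous)
        return sum(s * p for s, p in zip(soas, p_simultaneous)) / total
    ```

    For a response distribution peaking at a visual-lead SOA of +50 ms, the estimate returns roughly +50; comparing such estimates before and after asynchrony exposure, separately for each audiovisual pair, is what reveals multiple concurrent recalibrations.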

  19. Differential patterns of prefrontal MEG activation during verbal & visual encoding and retrieval.

    Directory of Open Access Journals (Sweden)

    Garreth Prendergast

    The spatiotemporal profile of activation of the prefrontal cortex in verbal and non-verbal recognition memory was examined using magnetoencephalography (MEG). Sixteen neurologically healthy right-handed participants were scanned whilst carrying out a modified version of the Doors and People Test of recognition memory. A pattern of significant prefrontal activity was found for non-verbal and verbal encoding and recognition. During encoding, verbal stimuli activated an area in the left ventromedial prefrontal cortex, and non-verbal stimuli activated an area in the right. A region in the left dorsolateral prefrontal cortex also showed significant activation during the encoding of non-verbal stimuli. Both verbal and non-verbal stimuli significantly activated an area in the right dorsomedial prefrontal cortex and the right anterior prefrontal cortex during successful recognition; however, these areas showed temporally distinct activation dependent on material, with non-verbal stimuli showing activation earlier than verbal stimuli. Additionally, non-verbal material activated an area in the left anterior prefrontal cortex during recognition. These findings suggest a material-specific laterality in the ventromedial prefrontal cortex during encoding for verbal and non-verbal material, but also support the HERA model for verbal material. The discovery of two process-dependent areas during recognition that showed material-dependent patterns of temporal activation demonstrates the need to apply more temporally sensitive techniques to the study of prefrontal cortex involvement in recognition memory.

  20. Executive functioning and non-verbal intelligence as predictors of bullying in early elementary school

    NARCIS (Netherlands)

    Verlinden, Marina; Veenstra, René; Ghassabian, Akhgar; Jansen, P.W.; Hofman, Albert; Jaddoe, Vincent W. V.; Verhulst, F.C.; Tiemeier, Henning

    Executive function and intelligence are negatively associated with aggression, yet the role of executive function has rarely been examined in the context of school bullying. We studied whether different domains of executive function and non-verbal intelligence are associated with bullying

  1. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have so far addressed listeners' native language (L1) exclusively. Yet several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2, yet no differences in AV processing as a function of language background were found in this area. Instead, regions in the bilateral occipital lobe showed a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 than in L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL), i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri, is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorisation of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, evaluating auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention, and in identification and short-term memorisation of environmental sounds, while showing no impairment in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  3. On the embedded cognition of non-verbal narratives

    DEFF Research Database (Denmark)

    Bruni, Luis Emilio; Baceviciute, Sarune

    2014-01-01

    Acknowledging that narratives are an important resource in human communication and cognition, the focus of this article is on the cognitive aspects of involvement with visual and auditory non-verbal narratives, particularly in relation to the newest immersive media and digital interactive representational technologies. We consider three relevant trends in narrative studies that have emerged in the 60 years of the cognitive and digital revolution. The issue at hand could have implications for developmental psychology, pedagogics, cognitive science, cognitive psychology, ethology and evolutionary studies of language. In particular, it is of great importance for narratology in relation to interactive media and new representational technologies. Therefore we outline a research agenda for a bio-cognitive semiotic interdisciplinary investigation of how people understand, react to, and interact with narratives...

  4. Impaired self-monitoring of inner speech in schizophrenia patients with verbal hallucinations and in non-clinical individuals prone to hallucinations

    Directory of Open Access Journals (Sweden)

    Gildas Brébion

    2016-09-01

    Background: Previous research has shown that various memory errors reflecting failure in the self-monitoring of speech are associated with auditory/verbal hallucinations in schizophrenia patients and with proneness to hallucinations in non-clinical individuals. Method: We administered a verbal memory task involving free recall and recognition of lists of words with different structures (high-frequency, low-frequency, and semantically organisable words) to 57 schizophrenia patients and 60 healthy participants. Extra-list intrusions in free recall were tallied, and the response bias reflecting the tendency to make false recognitions of non-presented words was computed for each list. Results: In the male patient subsample, extra-list intrusions were positively associated with verbal hallucinations and inversely associated with negative symptoms. In the healthy participants, extra-list intrusions were positively associated with proneness to hallucinations. A liberal response bias in the recognition of the high-frequency words was associated with verbal hallucinations in male patients and with proneness to hallucinations in healthy men. Meanwhile, a conservative response bias for these high-frequency words was associated with negative symptoms in male patients and with social anhedonia in healthy men. Conclusions: Misattribution of inner speech to an external source, reflected by false recollection of familiar material, seems to underlie both clinical and non-clinical hallucinations. Further, both clinical and non-clinical negative symptoms may exert an effect on verbal memory errors opposite to that of hallucinations.
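    The liberal-versus-conservative response bias contrasted above is a standard signal-detection quantity. As a hedged sketch (the paper's exact computation is not specified here; `sdt_measures` is an illustrative helper assuming the equal-variance model), sensitivity d′ and criterion c can be derived from hit and false-alarm rates:

    ```python
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        """Equal-variance signal detection theory measures.

        Returns (d_prime, c). A negative criterion c indicates a liberal
        bias (more 'old' responses, including false recognitions of
        non-presented words); a positive c indicates a conservative bias.
        Rates must lie strictly between 0 and 1 (correct extremes first,
        e.g. with a log-linear adjustment).
        """
        z = NormalDist().inv_cdf  # probit transform
        d_prime = z(hit_rate) - z(fa_rate)
        c = -(z(hit_rate) + z(fa_rate)) / 2
        return d_prime, c
    ```

    For example, a participant with a hit rate of 0.8 and a false-alarm rate of 0.4 gets d′ ≈ 1.09 and c ≈ −0.29, i.e. a liberal bias of the kind associated here with hallucination proneness.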

  5. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    Science.gov (United States)

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14 and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds, were created expressly for this study. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of the inter-stimulus intervals (ISIs).

  6. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.

  7. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception...... and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. Also, it is argued that the combination of insights from quantitative experimental studies and qualitative audiovisual film analysis may actually...... be combined into a more complex understanding of how audiovisual features interact in the minds of their audiences. This is demonstrated through a review of a series of experimental studies. Yet, it is also argued that textual analysis and concepts from within film and music studies can provide insights...

  8. Associations between olfactory identification and verbal memory in patients with schizophrenia, first-degree relatives, and non-psychiatric controls.

    Science.gov (United States)

    Compton, Michael T; McKenzie Mack, LaTasha; Esterberg, Michelle L; Bercu, Zachary; Kryda, Aimee D; Quintero, Luis; Weiss, Paul S; Walker, Elaine F

    2006-09-01

    Olfactory identification deficits and verbal memory impairments may represent trait markers for schizophrenia. The aims of this study were to: (1) assess olfactory identification in patients, first-degree relatives, and non-psychiatric controls, (2) determine differences in verbal memory functioning in these three groups, and (3) study correlations between olfactory identification and three specific verbal memory domains. A total of 106 participants (41 patients with schizophrenia or related disorders, 27 relatives, and 38 controls) were assessed with the University of Pennsylvania Smell Identification Test (UPSIT) and the Wechsler Memory Scale-Third Edition. Linear mixed models, accounting for clustering within families and relevant covariates, were used to compare scores across groups and to examine associations between olfactory identification ability and the three verbal memory domains. A group effect was apparent for all four measures, and relatives scored midway between patients and controls on all three memory domains. UPSIT scores were significantly correlated with all three forms of verbal memory. Age, verbal working memory, and auditory recognition delayed memory were independently predictive of UPSIT scores. Impairments in olfactory identification and verbal memory appear to represent two correlated risk markers for schizophrenia, and frontal-temporal deficits likely account for both impairments.

  9. Judging the urgency of non-verbal auditory alarms: a case study.

    Science.gov (United States)

    Arrabito, G Robert; Mondor, Todd; Kent, Kimberley

    2004-06-22

    When designed correctly, non-verbal auditory alarms can convey different levels of urgency to the aircrew, and thereby permit the operator to establish the appropriate level of priority for addressing the alarmed condition. The conveyed level of urgency of five non-verbal auditory alarms presently used in the Canadian Forces CH-146 Griffon helicopter was investigated. Pilots of the CH-146 Griffon helicopter and non-pilots rated the perceived urgency of the signals using a rating scale. The pilots also ranked the urgency of the alarms in a post-experiment questionnaire to reflect their assessment of the actual situations that trigger the alarms. The results of this investigation revealed that participants' ratings of perceived urgency appear to be based on the acoustic properties of the alarms that are known to affect the listener's perceived level of urgency. Although, for 28% of the pilots, perceived urgency mapped significantly onto the judged urgency of the triggering situation for three of the five alarms, the overall data suggest that the triggering situations are not adequately conveyed by the acoustic parameters inherent in the alarms. The pilots' judgement of the triggering situation was intended as a means of evaluating the reliability of the alerting system. These data will subsequently be discussed with respect to proposed enhancements in alerting systems as they relate to the problem of phase of flight. These results call for more serious consideration of situational awareness in the design and assignment of auditory alarms in aircraft.

  10. Contribution of Prosody in Audio-Visual Integration to Emotional Perception of Virtual Characters

    Directory of Open Access Journals (Sweden)

    Ekaterina Volkova

    2011-10-01

    Recent technology provides us with realistic-looking virtual characters. Motion capture and elaborate mathematical models supply data for natural-looking, controllable facial and bodily animations. With the help of computational linguistics and artificial intelligence, we can automatically assign emotional categories to appropriate stretches of text for a simulation of those social scenarios where verbal communication is important. All this makes virtual characters a valuable tool for the creation of versatile stimuli for research on the integration of emotion information from different modalities. We conducted an audio-visual experiment to investigate the differential contributions of emotional speech and facial expressions to emotion identification. We used recorded and synthesized speech as well as dynamic virtual faces, all enhanced for seven emotional categories. The participants were asked to recognize the prevalent emotion of paired faces and audio. Results showed that when the voice was recorded, the vocalized emotion influenced participants' emotion identification more than the facial expression. However, when the voice was synthesized, the facial expression influenced participants' emotion identification more than the vocalized emotion. Additionally, individuals performed worse at identifying either the facial expression or the vocalized emotion when the voice was synthesized. Our experimental method can help to determine how to improve synthesized emotional speech.

  11. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    Science.gov (United States)

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.
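    The fine-grained measures above (switches between subcategories, sizes of related clusters) are usually scored from the ordered response sequence. A minimal illustrative scorer, assuming each produced word has already been assigned a subcategory label (the function name and rules are a sketch, not the study's exact Troyer-style protocol):

    ```python
    def fluency_switches_and_clusters(subcategories):
        """Score a fluency response sequence.

        `subcategories` is one label per produced word, in production order.
        Returns (switches, cluster_sizes): a switch is a transition between
        different subcategories; a cluster is a maximal run of words from
        the same subcategory.
        """
        if not subcategories:
            return 0, []
        cluster_sizes, run = [], 1
        for prev, cur in zip(subcategories, subcategories[1:]):
            if cur == prev:
                run += 1          # same subcategory: extend current cluster
            else:
                cluster_sizes.append(run)  # subcategory change: close cluster
                run = 1
        cluster_sizes.append(run)
        return len(cluster_sizes) - 1, cluster_sizes
    ```

    For instance, the label sequence ["fa", "fa", "fl", "fl", "fl", "fr"] yields 2 switches and clusters of sizes [2, 3, 1]; in the study, dyslexia predicted fewer switches but not smaller clusters.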

  12. La regulación audiovisual: argumentos a favor y en contra The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    The article analyzes the effectiveness of audiovisual regulation and assesses the arguments for and against the existence of broadcasting authorities at the state level. The debate over the need for such a body in Spain is still active. Most European countries have created competent authorities in this area, such as OFCOM in the United Kingdom and the CSA in France. In Spain, broadcasting regulation is limited to regional bodies, namely the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also examined in this article.

  13. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...

  14. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability for collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt of a larger investigation whose unit of analysis is Online Creation Communities that take the Catalan territory as their central node of activity. Across the 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis, through face-to-face interviews conducted in 8 of the cases. The research, whose conclusions we summarize in this article, leads us to conclude that the sustainability of such projects depends largely on relationships of trust and interdependence between different voluntary agents, on non-monetary contributions and rewards, and on freely usable resources and infrastructure. Altogether, this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern it.

  15. Non-verbal communication between nurses and people with an intellectual disability: a review of the literature.

    Science.gov (United States)

    Martin, Anne-Marie; O'Connor-Fenelon, Maureen; Lyons, Rosemary

    2010-12-01

    This article critically synthesizes current literature regarding communication between nurses and people with an intellectual disability who communicate non-verbally. The unique context of communication between the intellectual disability nurse and people with intellectual disability and the review aims and strategies are outlined. Communication as a concept is explored in depth. Communication between the intellectual disability nurse and the person with an intellectual disability is then comprehensively examined in light of existing literature. Issues including knowledge of the person with intellectual disability, mismatch of communication ability, and knowledge of communication arose as predominant themes. A critical review of the importance of communication in nursing practice follows. The paucity of literature relating to intellectual disability nursing and non-verbal communication clearly indicates a need for research.

  16. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    This paper attempts to demonstrate the significance of the seven standards of textuality with special application to audiovisual English-Arabic translation. Ample, thoroughly analysed examples are provided to help in audiovisual English-Arabic translation decision-making. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT). The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when a text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text is achieved when all aspects of cohesive devices are well accounted for pragmatically. This, combined with an adequate psycholinguistic element, gives a text optimal communicative value. Non-text is devoid of such components and is ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product.

  17. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to the region of interest, is more efficient than uniform coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the gain in compression efficiency is analyzed.
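    The paper's own algorithm is not reproduced in this abstract; as an illustrative sketch of the general idea it names (correlating audio and video dynamics to locate the focus of attention), one could correlate a per-frame audio energy envelope against the motion energy of each cell in a coarse spatial grid and pick the best-matching cell. The function `audiovisual_foa` and its grid-based region model are assumptions for illustration:

    ```python
    import numpy as np

    def audiovisual_foa(frames, audio_energy, grid=(2, 2)):
        """Pick the grid cell whose motion energy best correlates with the
        audio energy envelope (one value per video frame).

        frames: (T, H, W) grayscale video; audio_energy: (T,) array.
        Returns (row, col) of the selected region.
        """
        T, H, W = frames.shape
        gh, gw = grid
        # Per-frame motion energy per cell: sum of absolute inter-frame
        # differences within the cell.
        diffs = np.abs(np.diff(frames.astype(float), axis=0))
        a = audio_energy[1:] - audio_energy[1:].mean()
        best, best_rc = -np.inf, (0, 0)
        for r in range(gh):
            for c in range(gw):
                cell = diffs[:, r * H // gh:(r + 1) * H // gh,
                                c * W // gw:(c + 1) * W // gw]
                m = cell.sum(axis=(1, 2))
                m = m - m.mean()
                denom = np.linalg.norm(a) * np.linalg.norm(m)
                corr = (a @ m) / denom if denom > 0 else 0.0
                if corr > best:
                    best, best_rc = corr, (r, c)
        return best_rc
    ```

    In a foveated-coding pipeline, the selected region would then be assigned a lower quantization parameter than the periphery; the bitstream itself needs no changes, which is why the output stays decoder-compliant.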

  18. Audiovisual perception in amblyopia: A review and synthesis.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  19. Achieving visibility? Use of non-verbal communication in interactions between patients and pharmacists who do not share a common language.

    Science.gov (United States)

    Stevenson, Fiona

    2014-06-01

    Despite the seemingly insatiable interest in healthcare professional-patient communication, less attention has been paid to the use of non-verbal communication in medical consultations. This article considers pharmacists' and patients' use of non-verbal communication to interact directly in consultations in which they do not share a common language. In total, 12 video-recorded, interpreted pharmacy consultations concerned with a newly prescribed medication or a change in medication were analysed in detail. The analysis focused on instances of direct communication initiated by either the patient or the pharmacist, despite the presence of a multilingual pharmacy assistant acting as an interpreter. Direct communication was shown to occur through (i) the demonstration of a medical device, (ii) the indication of relevant body parts and (iii) the use of limited English. These connections worked to make patients and pharmacists visible to each other and thus to maintain a sense of mutual involvement in consultations within which patients and pharmacists could enact professionally and socially appropriate roles. In a multicultural society this work is important for understanding the dynamics of consultations in which language is not shared, and thus for shaping future research and policy. © 2014 The Author. Sociology of Health & Illness published by John Wiley & Sons Ltd on behalf of Foundation for SHIL (SHIL).

  20. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively

  1. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  2. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...
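The coding step the abstract alludes to can be illustrated with a minimal matching-pursuit sketch: a signal is greedily decomposed into shifted copies of kernels by repeatedly picking the shift with the largest correlation to the residual and subtracting its projection. This is an illustrative 1-D, audio-only reduction under assumed names (the single Hanning kernel and the function signature are not from the paper):

```python
import numpy as np

def matching_pursuit(signal, kernels, n_iter):
    """Greedy sparse coding: at each step pick the kernel and shift with the
    largest correlation to the residual and subtract that projection."""
    residual = signal.astype(float).copy()
    atoms = []  # (kernel index, shift, coefficient)
    k_len = len(kernels[0])
    for _ in range(n_iter):
        best = None
        for ki, kern in enumerate(kernels):
            k = kern / np.linalg.norm(kern)
            corr = np.correlate(residual, k, mode="valid")
            shift = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[shift]) > abs(best[2]):
                best = (ki, shift, corr[shift])
        ki, shift, coef = best
        k = kernels[ki] / np.linalg.norm(kernels[ki])
        residual[shift:shift + k_len] -= coef * k
        atoms.append((ki, shift, coef))
    return atoms, residual

# toy example: a signal built from one shifted, scaled kernel
kernel = np.hanning(16)
signal = np.zeros(64)
signal[20:36] += 2.0 * kernel
atoms, residual = matching_pursuit(signal, [kernel], n_iter=1)
print(atoms[0][1])                       # recovered shift: 20
print(np.linalg.norm(residual) < 1e-6)   # one atom explains the signal
```

The learning step of the paper (updating the kernel shapes themselves) would alternate with this coding step; it is omitted here.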

  3. earGram Actors: An Interactive Audiovisual System Based on Social Behavior

    Directory of Open Access Journals (Sweden)

    Peter Beyls

    2015-11-01

    Full Text Available In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems with artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns. On the other hand, earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A performer thus improvises within the behavioural scope of the system while evaluating the apparent connections between parameter values and actual complexity of the system output.
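A toy reduction of such actor dynamics might look like the following: agents in one dimension follow a simple social-attraction rule, and an aggregate statistic (the spread of the flock) could then be mapped to a synthesis parameter. The rule, names, and mapping are illustrative assumptions, not earGram's actual implementation:

```python
import random

def step(positions, attraction=0.05):
    """One update of a toy 1-D 'actor' flock: each agent drifts toward the
    group mean (social attraction) plus a small random impulse. The
    attraction parameter stands in for the socially governed rules a
    performer would perturb during improvisation."""
    center = sum(positions) / len(positions)
    return [p + attraction * (center - p) + random.gauss(0, 0.01)
            for p in positions]

def spread(positions):
    """Mean absolute deviation from the group center."""
    center = sum(positions) / len(positions)
    return sum(abs(p - center) for p in positions) / len(positions)

random.seed(1)
agents = [random.uniform(-1, 1) for _ in range(20)]
initial = spread(agents)
for _ in range(200):
    agents = step(agents)
# spread shrinks toward a noise-driven equilibrium; in an audiovisual system
# it could drive, e.g., grain density in a concatenative synthesis engine
print(spread(agents) < initial)
```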

  4. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we show that integrating the information from audio and video leads to an improved reliability of audiovisual approach in

  5. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  6. Emotion Recognition as a Real Strength in Williams Syndrome: Evidence From a Dynamic Non-verbal Task

    Directory of Open Access Journals (Sweden)

    Laure Ibernon

    2018-04-01

Full Text Available The hypersocial profile characterizing individuals with Williams syndrome (WS), and particularly their attraction to human faces and their desire to form relationships with other people, could favor the development of their emotion recognition capacities. This study seeks to better understand the development of emotion recognition capacities in WS. The ability to recognize six emotions was assessed in 15 participants with WS. Their performance was compared to that of 15 participants with Down syndrome (DS) and 15 typically developing (TD) children of the same non-verbal developmental age, as assessed with Raven’s Colored Progressive Matrices (RCPM; Raven et al., 1998). The analysis of the three groups’ results revealed that the participants with WS performed better than the participants with DS and also than the TD children. Individuals with WS performed at a similar level to TD participants in terms of recognizing different types of emotions. The study of developmental trajectories confirmed that the participants with WS presented the same development profile as the TD participants. These results seem to indicate that the recognition of emotional facial expressions constitutes a real strength in people with WS.

  7. Imitation Therapy for Non-Verbal Toddlers

    Science.gov (United States)

    Gill, Cindy; Mehta, Jyutika; Fredenburg, Karen; Bartlett, Karen

    2011-01-01

    When imitation skills are not present in young children, speech and language skills typically fail to emerge. There is little information on practices that foster the emergence of imitation skills in general and verbal imitation skills in particular. The present study attempted to add to our limited evidence base regarding accelerating the…

  8. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
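The first statistical feature reported — the correlation between mouth-opening area and the acoustic envelope — can be sketched on synthetic data: a noise carrier amplitude-modulated by a 4 Hz "mouth area" signal (within the reported 2-7 Hz range), with the envelope recovered by rectification and smoothing. The sampling rate, window length, and modulation depth are illustrative assumptions:

```python
import numpy as np

def amplitude_envelope(x, win=50):
    """Rectify and smooth with a moving average (a simple envelope proxy)."""
    rect = np.abs(x)
    kernel = np.ones(win) / win
    return np.convolve(rect, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)                    # 10 s at 500 Hz
mouth_area = 1.0 + np.sin(2 * np.pi * 4 * t)    # 4 Hz opening cycle
carrier = rng.standard_normal(t.size)
audio = mouth_area * carrier                    # envelope tracks mouth area
env = amplitude_envelope(audio)

# robust positive correlation, mirroring the paper's first finding
r = np.corrcoef(mouth_area, env)[0, 1]
print(round(r, 2))
```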

  9. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  10. 'When birds of a feather flock together': synesthetic correspondences modulate audiovisual integration in non-synesthetes.

    Directory of Open Access Journals (Sweden)

    Cesare Valerio Parise

Full Text Available BACKGROUND: Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (i.e., between auditory pitch and visual size, where lower frequency tones are associated with large objects and higher frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously. METHODOLOGY: Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli. PRINCIPAL FINDINGS: The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli. CONCLUSIONS: Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged

  11. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  12. Verbal learning in marijuana users seeking treatment: a comparison between depressed and non-depressed samples.

    Science.gov (United States)

    Roebke, Patrick V; Vadhan, Nehal P; Brooks, Daniel J; Levin, Frances R

    2014-07-01

Both individuals with marijuana use and depressive disorders exhibit verbal learning and memory decrements. This study investigated the interaction between marijuana dependence and depression on learning and memory performance. The California Verbal Learning Test-Second Edition (CVLT-II) was administered to depressed (n = 71) and non-depressed (n = 131) near-daily marijuana users. The severity of depressive symptoms was measured by the self-rated Beck Depression Inventory (BDI-II) and the clinician-rated Hamilton Depression Rating Scale (HAM-D). Multivariate analyses of covariance (MANCOVA) were employed to analyze group differences in cognitive performance. Pearson's correlation coefficients were calculated to examine the relative associations between marijuana use, depression and CVLT-II performance. Findings from each group were compared to published normative data. Although both groups exhibited decreased CVLT-II performance relative to the test's normative sample (p < 0.05), marijuana-dependent subjects with a depressive disorder did not perform differently than marijuana-dependent subjects without a depressive disorder (p > 0.05). Further, poorer CVLT-II performance was modestly associated with increased self-reported daily amount of marijuana use (corrected p < 0.002), but not with depressive symptoms (corrected p > 0.002). These findings suggest an inverse association between marijuana use and verbal learning function, but not between depression and verbal learning function in regular marijuana users.

  13. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
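The "audiovisual temporal window of simultaneity" is commonly quantified by fitting a window-shaped function to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs). A brute-force least-squares Gaussian fit on hypothetical data is sketched below; the SOAs, response proportions, fixed amplitude, and grid search are illustrative assumptions, not the study's actual analysis:

```python
import numpy as np

# hypothetical simultaneity-judgment data: proportion of "simultaneous"
# responses at each audiovisual SOA (ms; negative = audio leads)
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sim = np.array([0.05, 0.30, 0.80, 0.95, 0.85, 0.40, 0.10])

# fit p = a * exp(-(soa - mu)^2 / (2 sigma^2)) by grid search (no SciPy)
best = (np.inf, 0.0, 0.0)
for mu in np.arange(-50, 51, 1.0):
    for sigma in np.arange(50, 301, 1.0):
        pred = p_sim.max() * np.exp(-(soas - mu) ** 2 / (2 * sigma ** 2))
        sse = float(np.sum((p_sim - pred) ** 2))
        if sse < best[0]:
            best = (sse, mu, sigma)
_, mu_hat, sigma_hat = best

# window width as full width at half maximum of the fitted Gaussian
window_fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma_hat
print(mu_hat, window_fwhm)
```

A narrowing of `window_fwhm` with age would correspond to the maturation of simultaneity judgments described in the abstract.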

  14. Evidence for a double dissociation of articulatory rehearsal and non-articulatory maintenance of phonological information in human verbal working memory.

    Science.gov (United States)

    Trost, Sarah; Gruber, Oliver

    2012-01-01

    Recent functional neuroimaging studies have provided evidence that human verbal working memory is represented by two complementary neural systems, a left lateralized premotor-parietal network implementing articulatory rehearsal and a presumably phylogenetically older bilateral anterior-prefrontal/inferior-parietal network subserving non-articulatory maintenance of phonological information. In order to corroborate these findings from functional neuroimaging, we performed a targeted behavioural study in patients with very selective and circumscribed brain lesions to key regions suggested to support these different subcomponents of human verbal working memory. Within a sample of over 500 neurological patients assessed with high-resolution structural magnetic resonance imaging, we identified 2 patients with corresponding brain lesions, one with an isolated lesion to Broca's area and the other with a selective lesion bilaterally to the anterior middle frontal gyrus. These 2 patients as well as groups of age-matched healthy controls performed two circuit-specific verbal working memory tasks. In this way, we systematically assessed the hypothesized selective behavioural effects of these brain lesions on the different subcomponents of verbal working memory in terms of a double dissociation. Confirming prior findings, the lesion to Broca's area led to reduced performance under articulatory rehearsal, whereas the non-articulatory maintenance of phonological information was unimpaired. Conversely, the bifrontopolar brain lesion was associated with impaired non-articulatory phonological working memory, whereas performance under articulatory rehearsal was unaffected. The present experimental neuropsychological study in patients with specific and circumscribed brain lesions confirms the hypothesized double dissociation of two complementary brain systems underlying verbal working memory in humans. In particular, the results demonstrate the functional relevance of the anterior

  15. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    in a manner that allowed the subjective audiovisual evaluation of loudspeakers under controlled conditions. Additionally, unimodal audio and visual evaluations were used as a baseline for comparison. The same procedure was applied in the investigation of the validity of less than optimal stimuli presentations...

  16. Transgressing the Non-fiction Transmedia Narrative

    NARCIS (Netherlands)

    Gifreu-Castells, Arnau; Misek, Richard; Verbruggen, Erwin

    2016-01-01

Over the last few years, interactive digital media have greatly affected the logics of production, exhibition and reception of non-fiction audiovisual works, leading to the emergence of a new area called ‘interactive and transmedia non-fiction’. While the audiovisual non-fiction field has been

  17. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  18. Non-verbal mother-child communication in conditions of maternal HIV in an experimental environment

    Directory of Open Access Journals (Sweden)

    Simone de Sousa Paiva

    2010-02-01

Full Text Available Non-verbal communication is predominant in the mother-child relation. This study aimed to analyze non-verbal mother-child communication in conditions of maternal HIV. In an experimental environment, five HIV-positive mothers were evaluated during care delivery to their babies of up to six months old. Recordings of the care were analyzed by experts, observing aspects of non-verbal communication, such as: paralanguage, kinesics, distance, visual contact, tone of voice, maternal and infant tactile behavior. In total, 344 scenes were obtained. After statistical analysis, these permitted inferring that mothers use non-verbal communication to demonstrate their close attachment to their children and to perceive possible abnormalities. It is suggested that the mother’s infection can be a determining factor for the formation of mothers’ strong attachment to their children after birth.

  19. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments.

    Science.gov (United States)

    Dittrich, Sandra; Noesselt, Tömme

    2018-01-01

Predicting motion is essential for many everyday life activities, e.g., in road traffic. Previous studies on motion prediction failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white noise (congruent or non-congruent to visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when the ball would reach a specified position at two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during the detection task. In contrast, for the more complex extrapolation task, group mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation. Most participants selectively profited from either near or far extrapolation distances but were impaired for the other one. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a

  20. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  1. Quality Matters! Differences between Expressive and Receptive Non-Verbal Communication Skills in Adolescents with ASD

    Science.gov (United States)

    Grossman, Ruth B.; Tager-Flusberg, Helen

    2012-01-01

    We analyzed several studies of non-verbal communication (prosody and facial expressions) completed in our lab and conducted a secondary analysis to compare performance on receptive vs. expressive tasks by adolescents with ASD and their typically developing peers. Results show a significant between-group difference for the aggregate score of…

  2. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity”, among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in periodic or non-periodic motion. The object then emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we achieved a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) relating to robot motion (efferent signals).
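The "simultaneity" grouping rule can be sketched as onset matching: the sound source whose onsets most often coincide with the motion onsets, within a small tolerance, is taken as the corresponding event source. The onset times, tolerance, and function names below are hypothetical:

```python
def count_coincidences(motion_onsets, sound_onsets, tol=0.05):
    """Count sound onsets that fall within +/- tol seconds of some motion
    onset (the Gestalt 'simultaneity' cue)."""
    return sum(
        any(abs(s - m) <= tol for m in motion_onsets) for s in sound_onsets
    )

# hypothetical onset times (seconds): the robot shakes a bell periodically,
# while a metronome ticks at an unrelated rate
motion = [0.50, 1.50, 2.50, 3.50]
bell = [0.51, 1.49, 2.52, 3.50]            # follows the manipulation
metronome = [0.20, 0.95, 1.70, 2.40, 3.20]  # unrelated event source

scores = {"bell": count_coincidences(motion, bell),
          "metronome": count_coincidences(motion, metronome)}
best = max(scores, key=scores.get)
print(best, scores)   # the bell matches all four motion onsets
```

Relating the efferent signal (motion command) to afferent signals in this way lets the system reject the distractor source.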

  3. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.
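Temporal pooling of short sample-based quality scores might be sketched as a weighted combination of the overall mean and the mean of the worst segments, reflecting the disproportionate perceptual weight of brief degradations. The 20 % fraction and the weighting parameter are illustrative assumptions, not the book's actual model:

```python
def pool_quality(scores, worst_weight=0.7):
    """Pool per-segment quality scores (e.g., on a 1-5 MOS scale) into one
    call-level score: a weighted mix of the mean and the mean of the worst
    20% of segments. worst_weight is an illustrative parameter."""
    ordered = sorted(scores)
    k = max(1, len(ordered) // 5)            # worst 20 % of segments
    worst = sum(ordered[:k]) / k
    mean = sum(scores) / len(scores)
    return worst_weight * worst + (1 - worst_weight) * mean

steady = [4.0] * 10
glitchy = [4.0] * 9 + [1.0]                  # brief severe degradation
# the short glitch pulls the pooled score well below the simple mean (3.7)
print(pool_quality(steady), pool_quality(glitchy))
```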

  4. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  5. School effects on non-verbal intelligence and nutritional status in rural Zambia

    OpenAIRE

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2015-01-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and ...
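The variance percentages "conditioned by differences between schools" correspond to an intraclass correlation from a one-way random-effects model. A minimal sketch on hypothetical balanced data follows (HLM software would additionally handle unbalanced groups and the 14 school-level covariates):

```python
from statistics import mean

def icc_oneway(groups):
    """Intraclass correlation from a one-way random-effects ANOVA: the share
    of total variance attributable to between-group differences.
    Assumes equal group sizes for simplicity."""
    n = len(groups[0])                       # pupils per school
    k = len(groups)                          # schools
    grand = mean(x for g in groups for x in g)
    ms_between = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(
        (x - mean(g)) ** 2 for g in groups for x in g
    ) / (k * (n - 1))
    var_between = (ms_between - ms_within) / n
    return var_between / (var_between + ms_within)

# hypothetical test scores for pupils in three schools
schools = [[10, 12, 11, 13], [15, 16, 14, 17], [11, 10, 12, 11]]
print(round(icc_oneway(schools), 2))
```

An ICC of 0.235 would correspond to the 23.5 % between-school share of non-verbal intelligence variance reported in the abstract.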

  6. Audiovisual distraction for pain relief in paediatric inpatients: A crossover study.

    Science.gov (United States)

    Oliveira, N C A C; Santos, J L F; Linhares, M B M

    2017-01-01

    Pain is a stressful experience that can have a negative impact on child development. The aim of this crossover study was to examine the efficacy of audiovisual distraction for acute pain relief in paediatric inpatients. The sample comprised 40 inpatients (6-11 years) who underwent painful puncture procedures. The participants were randomized into two groups, and all children received the intervention and served as their own controls. Stress and pain-catastrophizing assessments were initially performed using the Child Stress Scale and Pain Catastrophizing Scale for Children, with the aim of controlling these variables. The pain assessment was performed using a Visual Analog Scale and the Faces Pain Scale-Revised after the painful procedures. Group 1 received audiovisual distraction before and during the puncture procedure, which was performed again without intervention on another day. The procedure was reversed in Group 2. Audiovisual distraction used animated short films. A 2 × 2 × 2 analysis of variance for the 2 × 2 crossover design was performed, with a 5% level of statistical significance. The two groups had similar baseline measures of stress and pain catastrophizing. A significant difference was found between periods with and without distraction in both groups: scores on both pain scales were lower during distraction than without intervention. The sequence of exposure to the distraction intervention, and whether the distraction accompanied the first or second painful procedure, also significantly influenced its efficacy. Audiovisual distraction effectively reduced the intensity of pain perception in paediatric inpatients, and the crossover design provides a better understanding of the effects of distraction for acute pain management. In sum, audiovisual distraction was a powerful and effective non-pharmacological intervention for pain relief in paediatric inpatients.

  7. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... audiovisual records? 1237.16 Section 1237.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.16 How do agencies store audiovisual records? Agencies must maintain appropriate storage conditions for permanent...

  8. A Catalan code of best practices for the audiovisual sector

    OpenAIRE

    Teodoro, Emma; Casanovas, Pompeu

    2010-01-01

    In spite of a new general law regarding Audiovisual Communication, the regulatory framework of the audiovisual sector in Spain can still be defined as huge, disperse and obsolete. The first part of this paper provides an overview of the major challenges of the Spanish audiovisual sector as a result of the convergence of platforms, services and operators, paying especial attention to the Audiovisual Sector in Catalonia. In the second part, we will present an example of self-regulation through...

  9. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  10. Referential Interactions of Turkish-Learning Children with Their Caregivers about Non-Absent Objects: Integration of Non-Verbal Devices and Prior Discourse

    Science.gov (United States)

    Ates, Beyza S.; Küntay, Aylin C.

    2018-01-01

    This paper examines the way children younger than two use non-verbal devices (i.e., deictic gestures and communicative functional acts) and pay attention to discourse status (i.e., prior mention vs. newness) of referents in interactions with caregivers. Data based on semi-naturalistic interactions with caregivers of four children, at ages 1;00,…

  11. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  12. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  13. Respiratory Constraints in Verbal and Non-verbal Communication.

    Science.gov (United States)

    Włodarczak, Marcin; Heldner, Mattias

    2017-01-01

    In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFEs) as well as longer vocalizations in terms of parameters of the respiratory cycle and find little evidence for respiratory planning in feedback production. We also investigate timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.

  14. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  15. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli procedure, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
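The adaptation effect described above is a shift of a fitted psychometric function. As a rough illustration, the sketch below estimates the 50% crossing of one flank of the synchrony window by linear interpolation; a real analysis would fit, e.g., a cumulative Gaussian, and all numbers here are fabricated.

```python
def pss(soas, p_sync):
    """Estimate the point of subjective simultaneity (PSS) as the
    50% crossing of one flank of the synchrony curve, by linear
    interpolation between adjacent data points."""
    points = list(zip(soas, p_sync))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0 and y0 != y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("no 50% crossing in data")

soas = [-200, -100, 0, 100, 200]        # audiovisual offset in ms
before = [0.1, 0.3, 0.6, 0.8, 0.9]      # fabricated pre-adaptation proportions
after = [0.05, 0.2, 0.45, 0.7, 0.85]    # fabricated post-adaptation proportions

shift = pss(soas, after) - pss(soas, before)
print(round(shift, 1))  # → 53.3 (ms shift of the 50% crossing)
```

The adaptation effect in the study is exactly this kind of shift, computed per observer from properly fitted functions rather than interpolated raw proportions.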

  16. Treating depressive symptoms in psychosis : A Network Meta-Analysis on the Effects of Non-Verbal Therapies

    NARCIS (Netherlands)

    Steenhuis, L. A.; Nauta, M. H.; Bockting, C. L. H.; Pijnenborg, G. H. M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  17. Treating depressive symptoms in psychosis : A network meta-analysis on the effects of non-verbal therapies

    NARCIS (Netherlands)

    Steenhuis, Laura A.; Nauta, Maaike H.; Bocking, Claudi L.H.; Pijnenborg, Gerdina H.M.

    2015-01-01

    AIMS: The aim of this study was to examine whether non-verbal therapies are effective in treating depressive symptoms in psychotic disorders. MATERIAL AND METHODS: A systematic literature search was performed in PubMed, Psychinfo, Picarta, Embase and ISI Web of Science, up to January 2015.

  18. Cross-cultural differences in the processing of non-verbal affective vocalizations by Japanese and canadian listeners.

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) are a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions.

  19. Cross-Cultural Differences in the Processing of Non-Verbal Affective Vocalizations by Japanese and Canadian Listeners

    Science.gov (United States)

    Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro

    2013-01-01

    The Montreal Affective Voices (MAVs) are a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions. PMID:23516137

  20. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that responses to all stimuli were significantly delayed in PD compared to NC. The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD; however, race-model analysis showed that audiovisual integration was absent in PD, whereas it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
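The race-model analysis referred to above compares the redundant-target (audiovisual) response-time distribution against the bound given by the sum of the unimodal cumulative distributions (Miller's race-model inequality). A minimal sketch with fabricated response times:

```python
def ecdf(sample, t):
    """Empirical cumulative distribution of response times at time t."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_violation(audio, visual, av, t):
    """Miller's race-model inequality at time t: integration is
    indicated when F_av(t) exceeds min(1, F_a(t) + F_v(t))."""
    bound = min(1.0, ecdf(audio, t) + ecdf(visual, t))
    return ecdf(av, t) - bound   # positive => violation (integration)

# Fabricated response times in ms, for illustration only
audio = [320, 340, 360, 380, 400]
visual = [330, 350, 370, 390, 410]
av = [280, 300, 310, 330, 350]   # redundant-target responses

print(round(race_violation(audio, visual, av, 330), 2))  # → 0.4
```

In the study's terms, PD patients showed no positive violations at any time point, whereas controls did, which is the operational meaning of "absent audiovisual integration".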

  1. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is whether perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity … on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less …

  2. [Virtual audiovisual talking heads: articulatory data and models--applications].

    Science.gov (United States)

    Badin, P; Elisei, F; Bailly, G; Savariaux, C; Serrurier, A; Tarabalka, Y

    2007-01-01

    In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, analysis and modeling of orofacial articulators such as the jaw, the face and the lips, the tongue or the velum. We therefore present in this article experimental techniques for characterising the shape and movement of the speech articulators (static and dynamic MRI, computed tomodensitometry, electromagnetic articulography, video recording). We then describe the linear models of the various organs that can be elaborated from speaker-specific articulatory data. We show that these models, which exhibit a good geometrical resolution, can be controlled from articulatory data with a good temporal resolution and thus permit the reconstruction of high-quality animation of the articulators. These models, which we have integrated into a virtual talking head, can produce augmented audiovisual speech. In this framework, we have assessed the natural tongue-reading capabilities of human subjects by means of audiovisual perception tests. We conclude by suggesting a number of other applications of talking heads.

  3. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed by the JPEG compression codec and sound samples compressed by MPEG-1 Layer III. Images and sounds have various contents. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e., uncompressed signals.

  4. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is the case for which the network is already set-up, the aud...
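Audiovisual integration of the per-modality estimates is often modeled with a low-parameter regression; one commonly used form is MOS_av ≈ a + b · MOS_a · MOS_v. The sketch below fits that form by least squares on made-up data; it illustrates the general approach, not the actual model in this volume.

```python
def fit_av_integration(samples):
    """Least-squares fit of MOS_av ≈ a + b * (MOS_a * MOS_v).

    samples: (MOS_audio, MOS_video, MOS_audiovisual) triples.
    One-predictor linear regression solved in closed form.
    """
    xs = [ma * mv for ma, mv, _ in samples]
    ys = [mav for _, _, mav in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Fabricated training triples, for illustration only
data = [(4.5, 4.0, 4.1), (3.0, 2.5, 2.4), (2.0, 4.0, 2.9), (4.0, 2.0, 2.7)]
a, b = fit_av_integration(data)
print(b > 0)  # audiovisual quality rises with the audio-video product
```

The multiplicative term captures the empirical observation that a degradation in one modality weighs more heavily when the other modality is good.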

  5. A linguagem audiovisual da lousa digital interativa no contexto educacional/Audiovisual language of the digital interactive whiteboard in the educational environment

    Directory of Open Access Journals (Sweden)

    Rosária Helena Ruiz Nakashima

    2006-01-01

    solely in the oral and written languages, but is also audiovisual and dynamic, since it allows the student to become not merely a receptor but also a producer of knowledge. Therefore, our schools should be encouraged to use these new technological devices in order to facilitate their job and to promote more interesting and revolutionary classes.

  6. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

    Full Text Available This paper presents the results of a study on the process of acquiring interpretative skills to decode audiovisual texts among adolescents and youth. Based on the conception of such competence as the ability to understand the meanings connoted beneath the literal discourses of audiovisual texts, this study compared two variables: on the one hand, the acquisition of such skills from personal and social experience in the consumption of audiovisual products (which is affected by age differences), and, on the other hand, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  7. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    Science.gov (United States)

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.

  8. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of a "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content whose video and audio channels may be strongly degraded, and whose audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.

  9. Respiratory Constraints in Verbal and Non-verbal Communication

    Directory of Open Access Journals (Sweden)

    Marcin Włodarczak

    2017-05-01

    Full Text Available In the present paper we address the old question of respiratory planning in speech production. We recast the problem in terms of speakers' communicative goals and propose that speakers try to minimize respiratory effort in line with the H&H theory. We analyze respiratory cycles coinciding with no speech (i.e., silence), short verbal feedback expressions (SFEs) as well as longer vocalizations in terms of parameters of the respiratory cycle and find little evidence for respiratory planning in feedback production. We also investigate timing of speech and SFEs in the exhalation and contrast it with nods. We find that while speech is strongly tied to the exhalation onset, SFEs are distributed much more uniformly throughout the exhalation and are often produced on residual air. Given that nods, which do not have any respiratory constraints, tend to be more frequent toward the end of an exhalation, we propose a mechanism whereby respiratory patterns are determined by the trade-off between speakers' communicative goals and respiratory constraints.

  10. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  11. Deficits in visual short-term memory binding in children at risk of non-verbal learning disabilities.

    Science.gov (United States)

    Garcia, Ricardo Basso; Mammarella, Irene C; Pancera, Arianna; Galera, Cesar; Cornoldi, Cesare

    2015-01-01

    It has been hypothesized that children with learning disabilities have short-term memory (STM) problems especially when they must bind different types of information; however, the hypothesis has not been systematically tested. This study assessed visual STM for shapes and colors and the binding of shapes and colors, comparing a group of children (aged between 8 and 10 years) at risk of non-verbal learning disabilities (NLD) with a control group of children matched for general verbal abilities, age, gender, and socioeconomic level. Results revealed that the groups did not differ in retention of either shapes or colors, but children at risk of NLD were poorer than controls in memory for shape-color bindings.

  12. Energy consumption of audiovisual devices in the residential sector: Economic impact of harmonic losses

    International Nuclear Information System (INIS)

    Santiago, I.; López-Rodríguez, M.A.; Gil-de-Castro, A.; Moreno-Munoz, A.; Luna-Rodríguez, J.J.

    2013-01-01

    In this work, energy losses and the economic consequences of the use of small appliances containing power electronics (PE) in the Spanish residential sector were estimated. Audiovisual devices emit harmonics, which cause increased wiring losses in the distribution system and a greater demand for total apparent power. Time Use Surveys (2009–10) conducted by the National Statistical Institute in Spain were used to obtain information about the activities occurring in Spanish homes regarding the use of audiovisual equipment. Moreover, measurements of different types of household appliances available in the PANDA database were also utilized, and the active and non-active annual power demand of these residential-sector devices was determined. Although a single audiovisual device makes an almost negligible contribution, the aggregated effect of this type of appliance, whose total annual energy demand is greater than 4000 GWh, can be significant enough to be taken into account in any energy efficiency program. It was shown that a reduction in total harmonic distortion in the distribution systems from 50% to 5% can reduce energy losses significantly, with economic savings of around several million Euros. - Highlights: • Time Use Surveys provide information about Spanish household electricity consumption. • The annual aggregated energy demand of audiovisual appliances is very significant. • TV use accounts for more than 80% of household audiovisual electricity consumption. • A reduction from 50% to 5% in total harmonic distortion would yield economic savings of around several million Euros. • Stricter regulations on harmonic emissions are needed.
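The link between harmonic distortion and wiring losses can be sketched from first principles: resistive losses scale with the squared RMS current, and with current distortion I_rms² = I1² · (1 + THD²). Under that simplification, which ignores skin effect and frequency-dependent conductor resistance, reducing THD from 50% to 5% cuts the loss term as follows:

```python
def wiring_loss_factor(thd):
    """Relative resistive wiring loss: losses are proportional to
    I_rms**2, and with current distortion
    I_rms**2 = I1**2 * (1 + THD**2)."""
    return 1 + thd ** 2

high = wiring_loss_factor(0.50)  # 50% current THD
low = wiring_loss_factor(0.05)   # 5% current THD
saving = (high - low) / high * 100
print(round(saving, 1))  # → 19.8 (% reduction in wiring losses)
```

This back-of-the-envelope figure only covers the conductor-loss component; the paper's economic estimate additionally accounts for apparent-power demand and aggregate usage patterns.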

  13. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  14. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  15. Non-word repetition in children with specific language impairment: a deficit in phonological working memory or in long-term verbal knowledge?

    Science.gov (United States)

    Casalini, Claudia; Brizzolara, Daniela; Chilosi, Anna; Cipriani, Paola; Marcolini, Stefania; Pecini, Chiara; Roncoli, Silvia; Burani, Cristina

    2007-08-01

    In this study we investigated the effects of long-term memory (LTM) verbal knowledge on short-term memory (STM) verbal recall in a sample of Italian children affected by different subtypes of specific language impairment (SLI). The aim of the study was to evaluate whether the phonological working memory (PWM) abilities of SLI children can be supported by LTM linguistic representations and whether PWM performance is differently affected in the various subtypes of SLI. We tested a sample of 54 children affected by Mixed Receptive-Expressive (RE), Expressive (Ex) and Phonological (Ph) SLI (DSM-IV - American Psychiatric Association, 1994) by means of a repetition task of words (W) and non-words differing in morphemic structure [morphological non-words (MNW), consisting of combinations of roots and affixes, and simple non-words (NW), with no morphological constituency]. We evaluated the effects of lexical and morpho-lexical LTM representations on STM recall by comparing repetition accuracy across the three types of stimuli. Results indicated that although SLI children, as a group, showed lower repetition scores than controls, their performance was affected similarly to controls by the type of stimulus and the experimental manipulation of the non-words (better repetition of W than MNW and NW, and of MNW than NW), confirming the recourse to LTM verbal representations to support STM recall. The influence of LTM verbal knowledge on STM recall in SLI improved with age and did not differ among the three types of SLI. However, the three types of SLI differed in the accuracy of their repetition performances (PWM abilities), with the Phonological group showing the best scores. The implications for SLI theory and practice are discussed.

  16. The visual attention span deficit in dyslexia is visual and not verbal.

    Science.gov (United States)

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  17. Non-verbal communication: aspects observed during nursing consultations with blind patients

    Directory of Open Access Journals (Sweden)

    Cristiana Brasil de Almeida Rebouças

    2007-03-01

    Full Text Available Exploratory-descriptive study on non-verbal communication between nurses and blind patients during nursing consultations with diabetes patients, based on Hall's theoretical framework. Data were collected by recording the consultations. The recordings were analyzed every fifteen seconds, totaling 1,131 non-verbal communication moments. The analysis shows intimate distance (91.0%) and seated position (98.3%); no contact occurred in 83.3% of the interactions. Emblematic gestures were present, including hand movements (67.4%); gaze was deviated from the interlocutor in 52.8% of moments and centered on the interlocutor in 44.4%. In all recordings, considerable interference occurred at the moment of nurse-patient interaction. Nurses need deeper knowledge of non-verbal communication and should adapt its use to the type of patient attended during consultations.

  18. Boosting Vocabulary Learning by Verbal Cueing During Sleep.

    Science.gov (United States)

    Schreiner, Thomas; Rasch, Björn

    2015-11-01

    Reactivating memories during sleep by re-exposure to associated memory cues (e.g., odors or sounds) improves memory consolidation. Here, we tested for the first time whether verbal cueing during sleep can improve vocabulary learning. We cued prior learned Dutch words either during non-rapid eye movement sleep (NonREM) or during active or passive waking. Re-exposure to Dutch words during sleep improved later memory for the German translation of the cued words when compared with uncued words. Recall of uncued words was similar to an additional group receiving no verbal cues during sleep. Furthermore, verbal cueing failed to improve memory during active and passive waking. High-density electroencephalographic recordings revealed that successful verbal cueing during NonREM sleep is associated with a pronounced frontal negativity in event-related potentials, a higher frequency of frontal slow waves as well as a cueing-related increase in right frontal and left parietal oscillatory theta power. Our results indicate that verbal cues presented during NonREM sleep reactivate associated memories, and facilitate later recall of foreign vocabulary without impairing ongoing consolidation processes. Likewise, our oscillatory analysis suggests that both sleep-specific slow waves as well as theta oscillations (typically associated with successful memory encoding during wakefulness) might be involved in strengthening memories by cueing during sleep. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  20. Role of Auditory Non-Verbal Working Memory in Sentence Repetition for Bilingual Children with Primary Language Impairment

    Science.gov (United States)

    Ebert, Kerry Danahy

    2014-01-01

    Background: Sentence repetition performance is attracting increasing interest as a valuable clinical marker for primary (or specific) language impairment (LI) in both monolingual and bilingual populations. Multiple aspects of memory appear to contribute to sentence repetition performance, but non-verbal memory has not yet been considered. Aims: To…

  1. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the use of audiovisual media texts in a series of social sciences and humanities courses in the university curriculum.

  2. Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task

    Science.gov (United States)

    López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa

    2013-01-01

    In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalogram (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining resting-state post-task activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants that attained higher task accuracy. These findings support a dynamical view of cognitive processes in which patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436
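    The pre-task alpha-power measure described above is, at its core, band-limited spectral power. A minimal sketch of how posterior alpha power might be estimated from a single EEG channel (a bare single-segment periodogram with illustrative synthetic data; the study's actual pipeline is not specified here):

```python
import numpy as np

def alpha_band_power(signal: np.ndarray, fs: float,
                     band: tuple = (8.0, 12.0)) -> float:
    """Mean periodogram power in the given band (default: alpha, 8-12 Hz).

    A bare single-segment estimate for illustration; a real resting-state
    pipeline would add Welch averaging and artifact rejection.
    """
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Synthetic 2 s "resting" trace: a 10 Hz alpha rhythm plus noise
fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2.0 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
print(alpha_band_power(eeg, fs))
```

Correlating such per-participant band-power values with task accuracy is then an ordinary correlation analysis over subjects.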

  3. Language representation of the emotional state of the character in non-verbal speech behavior (based on material from Russian and German)

    Directory of Open Access Journals (Sweden)

    Scherbakova Irina Vladimirovna

    2016-06-01

    Full Text Available The article examines how emotions are actualized in the non-verbal speech behavior of a character in a literary text. Emotions are considered a basic and very actively used means by which a literary character reacts to an object, action, or communicative situation. Non-verbal ways of expressing emotions give the reader a fuller picture of the character's emotional state. The description of non-verbal means of communication in fiction focuses mainly on kinetic, proxemic and prosodic components. The material of the study consists of microdialogue fragments extracted by continuous sampling from literary texts of Russian-language and German-language classical and modern literature of the 19th and 20th centuries. Fragments of dialogues were analyzed in which the character's non-verbal behavior expressed different emotional content (surprise, joy, fear, anger, rage, excitement, etc.). It was found that the means of verbalizing and describing the emotions of a character's non-verbal behavior are primarily indirect nominations, expressed by verbal vocabulary, adjectives and adverbs. The lexical level is the most significant in presenting the emotional state of the character.

  4. Transgressing the Non-fiction Transmedia Narrative

    OpenAIRE

    Gifreu-Castells, Arnau; Misek, Richard; Verbruggen, Erwin

    2016-01-01

    Over recent years, interactive digital media have greatly affected the logics of production, exhibition and reception of non-fiction audiovisual works, leading to the emergence of a new area called 'interactive and transmedia non-fiction'. While the audiovisual non-fiction field has been partially studied, a new field focusing on interactive and transmedia non-fiction narratives emerged a few years ago, an unexplored territory that needs new theories and taxonomies to differentiate f...

  5. AUDIOVISUAL RESOURCES FOR TEACHING AND LEARNING IN THE CLASSROOM: ANALYSIS AND PROPOSAL OF A TRAINING MODEL

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    …teaching of the media, following the initiative of Spain and Portugal, an analysis of some international university educational models was made. Given the expansion and focus of information technology and web communication through the Internet, audiovisual aids as technological instruments have gained utility as a dynamic and conciliatory resource with special characteristics that differentiate them from the other resources in the audiovisual ecosystem. As a result of this research, two lines of application are proposed: A. A proposal for iconic and audiovisual language as a learning objective and/or as a curriculum subject in the university syllabus, including workshops on the development of audiovisual documents, digital photography and audiovisual production. B. Use of audiovisual resources as educational media, which implies prior training of teachers in the activities recommended for teachers and students. Accordingly, suggestions that allow implementing both lines of academic action are presented. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  6. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    Science.gov (United States)

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  7. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces the reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about the processing of multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs during aging, we sought to gain some insight on whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
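    The superadditive/additive/suppressive taxonomy above can be made concrete with a simple rule comparing the multisensory response to the unimodal responses. The sum-based criterion below, with a tolerance band, is one common convention in the multisensory literature; it is an assumption for illustration, not necessarily the exact criterion used in this study:

```python
def classify_interaction(av: float, a: float, v: float,
                         tol: float = 0.05) -> str:
    """Label a multisensory response relative to its unimodal components.

    Sum-based convention (an assumption, not necessarily the paper's
    criterion): superadditive if AV exceeds A + V, additive if it matches
    A + V within tolerance, suppressive if AV falls below the best
    unimodal response.
    """
    total = a + v
    best = max(a, v)
    if av > total * (1.0 + tol):
        return "superadditive"
    if abs(av - total) <= total * tol:
        return "additive"
    if av < best * (1.0 - tol):
        return "suppressive"
    return "sub-additive"

# Firing rates (spikes/s) for audiovisual, auditory-only, visual-only
print(classify_interaction(30.0, 10.0, 12.0))   # superadditive
print(classify_interaction(8.0, 10.0, 12.0))    # suppressive
```

Counting units per label across the recorded population then yields proportions like the 38% vs 8% contrast reported above.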

  8. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  9. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, and especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter color known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  10. Hysteresis in audiovisual synchrony perception.

    Directory of Open Access Journals (Sweden)

    Jean-Rémy Martin

    Full Text Available The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic condition, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective condition, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications regarding the comparative study of hysteresis and adaptation phenomena.

  11. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    Science.gov (United States)

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions.

  12. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    Science.gov (United States)

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  13. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
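    The finding that square-wave but not sine-wave modulation supports efficient search comes down to the transient content of the signals: a square wave changes abruptly while a sine wave never does. A small sketch comparing the peak rate of change of the two modulations (sample rate and modulation frequency are illustrative, not the study's stimulus parameters):

```python
import numpy as np

fs = 1000.0                       # sample rate of the modulation, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f_mod = 2.0                       # 2 Hz luminance modulation

sine = np.sin(2.0 * np.pi * f_mod * t)
square = np.sign(np.sin(2.0 * np.pi * f_mod * t))

# "Transience": peak rate of change of each modulation signal (units/s)
transience_sine = np.max(np.abs(np.diff(sine))) * fs
transience_square = np.max(np.abs(np.diff(square))) * fs
print(transience_sine, transience_square)
```

The sine wave's steepest slope is bounded by 2π·f_mod, whereas the square wave's step is as fast as the sample rate allows, which is the abruptness the search benefit depends on.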

  14. Documentary management of sports audiovisual information in generalist television networks

    Directory of Open Access Journals (Sweden)

    Jorge Caldera Serrano

    2005-01-01

    Full Text Available The management of sports audiovisual information within the Documentary Information Systems of state, zonal and local television networks is analyzed. To this end, the documentary chain for sports audiovisual information is reviewed, analyzing each of its parameters and presenting a series of recommendations and standards for the preparation of the sports audiovisual record. Sports audiovisual documentation does not differ greatly from the analysis of other television documentary types, so its management and diffusion are examined in greater depth, showing the information flow within the System.

  15. Challenges and opportunities for audiovisual diversity in the Internet

    Directory of Open Access Journals (Sweden)

    Trinidad García Leiva

    2017-06-01

    Full Text Available http://dx.doi.org/10.5007/2175-7984.2017v16n35p132 At the gates of the first quarter of the twenty-first century, nobody doubts that the value chain of the audiovisual industry has undergone important transformations. The digital era presents opportunities for cultural enrichment as well as new challenges. After presenting a general portrait of the audiovisual industries in the digital era, taking the Spanish case as a point of departure and paying attention to the players and logics in tension, this paper presents some notes on the advantages and disadvantages that exist for the diversity of audiovisual production, distribution and consumption online. We argue that the diversity of the audiovisual sector online is not guaranteed, because the formula that has made some players successful and powerful is based on walled-garden models for monetizing content (which, moreover, restrict its reproduction and circulation by and among consumers). The final objective is to present some ideas about the elements that prevent the strengthening of the diversity of the audiovisual industry in the digital scenario. The barriers to overcome are classified as technological, financial, social, legal and political.

  16. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

    Full Text Available Cutting-edge technologies such as wearable devices for controlling reactive audiovisual systems are rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture for the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in "As sete mulheres de Jeremias Epicentro", a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.

  17. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is a specific trait of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages.

  18. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2017-02-01

    Full Text Available Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
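    The causal-inference computation that CIMS builds on can be illustrated in a continuous-cue setting: compute the likelihood of the paired cues under one shared source versus two independent sources, then combine with a prior on a common cause. The sketch below follows the standard Gaussian formulation (after Körding et al., 2007); the continuous simplification and all parameter values are assumptions for illustration, not the CIMS paper's categorical speech model:

```python
import math

def common_cause_posterior(xa: float, xv: float,
                           sigma_a: float, sigma_v: float,
                           sigma_p: float, p_common: float = 0.5) -> float:
    """Posterior probability that auditory and visual cues share one cause.

    Gaussian causal-inference model with a zero-mean prior of width
    sigma_p over source location; all values used here are illustrative.
    """
    # Likelihood of the cue pair under a single shared source (C = 1)
    var1 = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
            + sigma_v**2 * sigma_p**2)
    like1 = math.exp(-0.5 * ((xa - xv)**2 * sigma_p**2
                             + xa**2 * sigma_v**2
                             + xv**2 * sigma_a**2) / var1) \
        / (2.0 * math.pi * math.sqrt(var1))
    # Likelihood under two independent sources (C = 2)
    var_a, var_v = sigma_a**2 + sigma_p**2, sigma_v**2 + sigma_p**2
    like2 = math.exp(-0.5 * (xa**2 / var_a + xv**2 / var_v)) \
        / (2.0 * math.pi * math.sqrt(var_a * var_v))
    return like1 * p_common / (like1 * p_common + like2 * (1.0 - p_common))

# Coincident cues favor a common cause; highly discrepant cues do not
print(common_cause_posterior(0.0, 0.0, 1.0, 1.0, 10.0))
print(common_cause_posterior(-5.0, 5.0, 1.0, 1.0, 10.0))
```

In the McGurk case the same logic operates over syllable representations rather than spatial positions: small audiovisual discrepancies still favor a common talker and get integrated, large ones do not.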

  19. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05), whereas audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
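
A standard way to quantify integration from response-time CDFs is the race-model (Miller) inequality: if redundant-target speedups arose only from a race between independent channels, P(RT_AV ≤ t) could not exceed P(RT_A ≤ t) + P(RT_V ≤ t). A minimal sketch with simulated response times (stand-ins for real data, not this study's):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of response times at probe times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Positive values mark responses faster than the race-model bound
    P(A) + P(V), i.e. evidence of audiovisual integration."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

rng = np.random.default_rng(0)
t = np.linspace(150, 600, 46)            # probe times in ms
rt_a = rng.normal(320, 40, 500)          # simulated auditory RTs
rt_v = rng.normal(340, 40, 500)          # simulated visual RTs
rt_av = rng.normal(270, 35, 500)         # simulated audiovisual RTs

violation = race_model_violation(rt_av, rt_a, rt_v, t)
print(f"peak violation: {violation.max():.3f}")
```

Positive violations at early probe times are taken as evidence that the two modalities were integrated rather than merely racing.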

  20. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on the diversity of audiovisual integration between younger and older adults. However, consecutive trends in audiovisual integration throughout life are still unclear. In the present study, to clarify audiovisual integration characteristics in middle-aged adults, we instructed younger and middle-aged adults to conduct an auditory/visual stimuli discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented in the left or right hemispace of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration is attenuated in middle-aged adults and further confirmed an age-related decline in information processing.

  1. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sport audiovisual documentation within the Information Systems of state, regional, and local television networks is analyzed. To this end, the documentary chain through which sport audiovisual information passes is traced in order to examine each of its parameters, offering a series of recommendations and norms for the preparation of the sport audiovisual record. Evidently, the sport audiovisual documentation differs...

  2. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
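
The trial-history analysis described here amounts to conditioning each trial's synchrony judgement on the modality order (the sign of the asynchrony) of the preceding trial. A hedged sketch with a simulated observer; the recalibration gain, noise level, and decision margin are invented for illustration and are not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
soa = rng.choice([-200, -100, 0, 100, 200], size=n)  # ms; + = vision leads
prev = np.roll(soa, 1)                               # preceding trial's SOA
prev[0] = 0

# Simulated observer whose point of subjective simultaneity (PSS) shifts
# toward the previous trial's asynchrony (gain 0.3), judging "synchronous"
# when the perceived asynchrony is small (margin 80 ms, noise sd 40 ms).
pss = 0.3 * prev
judged_sync = np.abs(soa - pss) + rng.normal(0, 40, n) < 80

for label, mask in (("audio-led previous", prev < 0),
                    ("vision-led previous", prev > 0)):
    p = judged_sync[(soa == 100) & mask].mean()
    print(f"{label}: P(sync | SOA = +100 ms) = {p:.2f}")
```

Under the simulated recalibration, a vision-led test stimulus is judged synchronous more often after a vision-led trial than after an audio-led one, which is the signature of rapid, trial-by-trial recalibration.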

  3. Narrativa audiovisual. Estrategias y recursos [Reseña]

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

    Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  4. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), then with audio instructions, and then with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
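
The residual-motion metric above (the standard deviation of the respiratory signal within the gating window) can be sketched for displacement-based gating as follows; the cosine breathing trace and the duty cycles are simulated assumptions, not patient data:

```python
import numpy as np

def residual_motion(signal, duty_cycle):
    """Std of displacement within a displacement-based gating window that
    is open for the lowest-amplitude fraction of samples (exhalation)."""
    threshold = np.quantile(signal, duty_cycle)  # gate opens below this level
    gated = signal[signal <= threshold]
    return gated.std()

t = np.linspace(0, 60, 6000)                         # 60 s trace, 100 Hz
breathing = 10 * (1 - np.cos(2 * np.pi * t / 4)) / 2 # 4 s period, 10 mm peak

for duty in (0.3, 0.5, 0.7):
    print(f"duty cycle {duty:.0%}: "
          f"residual motion {residual_motion(breathing, duty):.2f} mm")
```

Consistent with the abstract, residual motion grows as the duty cycle widens, since a wider gate admits a larger range of displacements.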

  5. [Accommodation effects of the audiovisual stimulation in the patients experiencing eyestrain with the concomitant disturbances of psychological adaptation].

    Science.gov (United States)

    Shakula, A V; Emel'ianov, G A

    2014-01-01

    The present study was designed to evaluate the effectiveness of audiovisual stimulation on the state of the eye accommodation system in patients experiencing eyestrain with concomitant disturbances of psychological adaptation. It was shown that a course of audiovisual stimulation (viewing a psychorelaxing film accompanied by appropriate music) results in positive dynamics of the objective accommodation parameters (5.9-21.9%) and of the subjective status (4.5-33.2%). Taken together, these findings allow this method to be regarded as a "relaxing preparation" within the integral complex of measures for the preservation of professional vision in this group of patients.

  6. The Efficiency of Peer Teaching of Developing Non Verbal Communication to Children with Autism Spectrum Disorder (ASD)

    Science.gov (United States)

    Alshurman, Wael; Alsreaa, Ihsani

    2015-01-01

    This study aimed at identifying the efficiency of peer teaching for developing non-verbal communication in children with autism spectrum disorder (ASD). The study was carried out on a sample of 10 children with autism spectrum disorder (ASD), diagnosed according to the basics and criteria adopted at the Al-taif qualification center in 2013 in The…

  7. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive... from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  8. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  9. Gestión documental de la información audiovisual deportiva en las televisiones generalistas

    Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Zapico Alonso

    2005-01-01

    The management of sport audiovisual information within the Documentary Information Systems of state, regional, and local television networks is analyzed. To this end, the documentary chain through which sport audiovisual information passes is traced in order to examine each of its parameters, offering a series of recommendations and norms for the preparation of the sport audiovisual record. Evidently, the sport audiovisual documentation...

  10. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  11. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  12. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  13. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to achieve a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
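
The quadratic mutual information mentioned above has, for Gaussian Parzen windows, a closed form in pairwise kernel evaluations, since integrals of products of Gaussians are again Gaussians. The sketch below illustrates this for scalar features with a fixed kernel bandwidth; the paper uses adaptive bandwidths and richer audiovisual features, so the function names, kernel width, and simulated streams here are assumptions:

```python
import numpy as np

def gauss(d, sigma):
    """1-D Gaussian density evaluated at distance d."""
    return np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def ed_qmi(x, y, sigma=0.5):
    """Euclidean-distance QMI: integral of (p(x,y) - p(x)p(y))^2 dx dy,
    estimated with Gaussian Parzen windows of width sigma."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    s = np.sqrt(2) * sigma                   # kernel width doubles under convolution
    kx = gauss(x[:, None] - x[None, :], s)   # pairwise kernels in x
    ky = gauss(y[:, None] - y[None, :], s)   # pairwise kernels in y
    v_joint = (kx * ky).mean()               # integral of joint density squared
    v_marg = kx.mean() * ky.mean()           # integral of product of marginals squared
    v_cross = (kx.mean(axis=1) * ky.mean(axis=1)).mean()  # cross term
    return v_joint - 2 * v_cross + v_marg    # >= 0; ~0 for independent streams

rng = np.random.default_rng(1)
a = rng.normal(size=400)
correlated = a + 0.1 * rng.normal(size=400)  # strongly dependent stream
independent = rng.normal(size=400)
print(ed_qmi(a, correlated), ed_qmi(a, independent))
```

The estimate is non-negative and approaches zero for independent streams, which is what makes it usable as a local audiovisual correlation score.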

  14. Audiovisual Script Writing.

    Science.gov (United States)

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  15. Patterns of non-verbal social interactions within intensive mathematics intervention contexts

    Science.gov (United States)

    Thomas, Jonathan Norris; Harkness, Shelly Sheats

    2016-06-01

    This study examined the non-verbal patterns of interaction within an intensive mathematics intervention context. Specifically, the authors draw on a social constructivist worldview to examine a teacher's use of gesture in this setting. The teacher conducted a series of longitudinal teaching experiments with a small number of young, school-age children in the context of early arithmetic development. From these experiments, the authors gathered extensive video records of teaching practice and, from an inductive analysis of these records, identified three distinct patterns of teacher gesture: behavior eliciting, behavior suggesting, and behavior replicating. Awareness of their potential to influence students via gesture may prompt teachers to more closely attend to their own interactions with mathematical tools and take these teacher interactions into consideration when forming interpretations of students' cognition.

  16. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  17. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  18. Non-verbal communication of the residents living in homes for the older people in Slovenia.

    Science.gov (United States)

    Zaletel, Marija; Kovacev, Asja Nina; Sustersic, Olga; Kragelj, Lijana Zaletel

    2010-09-01

    Aging of the population is a growing problem in all developed societies. The older people need more health and social services, and their quality of life there is becoming more and more important. The study aimed at determining the characteristics of non-verbal communication of older people living in old people's homes (OPH). The sample consisted of 267 residents of the OPH, aged 65-96 years, and 267 caregivers from twenty-seven randomly selected OPH. Three types of non-verbal communication were observed and analysed using univariate and multivariate statistical methods. In face expressions and head movements, about 75% of the older people looked at the eyes of their caregivers and about 60% looked around, while laughing or pressing the lips together was rarely noticed. The differences between genders were not statistically significant, while statistically significant differences among age groups were observed in dropping the eyes (p = 0.004) and smiling (p = 0.008). In hand gestures and trunk movements, the majority of the older people most often moved forwards and clenched their fingers, while they most rarely stroked and caressed their caregivers. The differences between genders were statistically significant in leaning on the table (p = 0.001) and changing position on the chair (p = 0.013). Statistically significant differences among age groups were registered in leaning forwards (p = 0.006) and pointing to the others (p = 0.036). In different modes of speaking and paralinguistic signs, almost 75% of the older people spoke normally and about 70% kept silent, while they rarely quarrelled. The differences between genders were not statistically significant, while statistically significant differences among age groups were observed in persuasive speaking (p = 0.007).
    The present study showed that older people in the OPH in Slovenia communicated significantly less frequently with hand gestures and trunk movements than with face expressions and head movements or different modes of speaking.

  19. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  20. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    Science.gov (United States)

    Moirand, Sophie

    1974-01-01

    This article examines the usefulness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  1. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between the unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  2. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos / Women and the audiovisual industry today: regression, experimentation and new narrative models

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

    Full Text Available This article analyses audiovisual artistic practices in the current context. It first describes the process of regression affecting audiovisual practices by women artists: women are present neither as producers, nor as filmmakers, nor as executives in the audiovisual industry, so that traditional gender stereotypes are inevitably reconstructed and reinforced. The article then turns to feminist audiovisual art practice in the 1970s and 1980s, when taking up the camera became absolutely necessary, not only to give voice to many women but also to reinscribe absent discourses and to articulate a critical discourse on cultural representation. It also analyses how, from the 1990s onwards, these practices have explored new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an "expanded field" of exhibition. Finally, the article points to the relationship between feminist audiovisual practices and the complex territory of globalization and the information society: the narration of local experience has found in the audiovisual a privileged medium for addressing questions of difference, identity, race, and ethnicity.

  3. The audiovisual communication policy of the socialist Government (2004-2009: A neoliberal turn

    Directory of Open Access Journals (Sweden)

    Ramón Zallo, Ph. D.

    2010-01-01

    Full Text Available The first legislature of José Luis Rodríguez Zapatero's government (2004-2008) generated important initiatives for progressive changes in the public communication system. However, all of these initiatives were dissolved in the second legislature to give way to a deregulated, privatizing model that is detrimental to public service. Three phases can be distinguished chronologically: a first phase of interesting reforms, followed by contradictory reforms and, in the second legislature, an accumulation of counter-reforms that steer the system towards a model completely different from the one devised in the first legislature. This indicates that there have been not one but two different audiovisual policies, running the cyclical route of audiovisual policy from one end to the other. The emphasis has shifted from public service to private concentration; from decentralization to centralization; from the diffusion of knowledge to the accumulation and appropriation of cognitive capital; and from a Keynesian model, combined with a Schumpeterian one and a preference for social access, to a belated return to the neoliberal model, after the market had been distorted through public decisions benefiting the most important audiovisual service providers. All this seems to crystallize in the striking process of concentration occurring among audiovisual service providers into two large groups: one integrated by Mediaset and Sogecable and another, under negotiation, by Antena 3 and Imagina. A combination of neo-statist restructuring of the market and neoliberalism.

  4. Prácticas de producción audiovisual universitaria reflejadas en los trabajos presentados en la muestra audiovisual universitaria Ventanas 2005-2009

    Directory of Open Access Journals (Sweden)

    Maria Urbanczyk

    2011-01-01

    Full Text Available This article presents the results of research on university audiovisual production in Colombia, based on the works submitted to the Ventanas university audiovisual showcase between 2005 and 2009. The study of these works sought to cover as completely as possible the audiovisual production process carried out by university students, from the birth of the idea to the final product, its circulation, and its socialization. The most recurrent themes were found to be violence and feelings, reflected through different genres, aesthetic treatments, and conceptual approaches. Given the absence of research legitimizing the knowledge produced in classrooms in the audiovisual field in Colombia, this research aims to open a path for demonstrating the contribution young people make to the consolidation of a national narrative and to the preservation of the country's memory.

  5. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  6. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear to which process this audiovisual interaction is related: to the processing of acoustical features or to the classification of stimuli? To investigate this question, event-related potentials were recorded…

  7. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  8. Analysis of José Luis Rodríguez Zapatero’s nonverbal communication

    OpenAIRE

    Imelda Rodríguez-Escanciano, Ph.D.; María Hernández-Herrarte, Ph.D

    2010-01-01

    Aware of television’s high level of persuasion and impact, politicians have progressively adapted their messages to the guidelines of the audiovisual media in order to strongly persuade TV viewers, who are seen as potential voters. Currently, the communication, marketing and telegenicity teams of most political parties do not only train their politicians to effectively use verbal communication, but they also try to reinforce their non-verbal communication skills, because they understand th...

  9. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  10. Toward a functional analysis of private verbal self-regulation.

    OpenAIRE

    Taylor, I; O'Reilly, M F

    1997-01-01

    We developed a methodology, derived from the theoretical literatures on rule-governed behavior and private events, to experimentally investigate the relationship between covert verbal self-regulation and nonverbal behavior. The methodology was designed to assess whether (a) nonverbal behavior was under the control of covert rules and (b) verbal reports of these rules were functionally equivalent to the covert rules that control non-verbal behavior. The research was conducted in the context of...

  11. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
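The core behavioral idea, a temporal binding window that widens for action-predicted audiovisual pairs, can be illustrated with a toy model (the window widths, judgment rule, and SOA values below are illustrative assumptions, not the paper's fitted parameters):

```python
def judge_simultaneous(soa_ms, window_ms):
    """Report 'simultaneous' if the audio-visual stimulus onset asynchrony
    (SOA) falls within the temporal binding window (toy model)."""
    return abs(soa_ms) <= window_ms / 2

# Hypothetical window widths: predicted action outcomes bind over a wider
# window than unpredicted ones (values are illustrative only).
WINDOWS = {"predicted": 300, "unpredicted": 200}

def proportion_simultaneous(soas, condition):
    """Fraction of trials judged simultaneous at the given SOAs."""
    window = WINDOWS[condition]
    hits = [judge_simultaneous(soa, window) for soa in soas]
    return sum(hits) / len(hits)
```

Under this sketch, the same set of SOAs yields more "simultaneous" judgments in the predicted condition, mirroring the widened simultaneity window the study reports.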

  12. Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection.

    Science.gov (United States)

    Uno, Takeshi; Kawai, Kensuke; Sakai, Katsuyuki; Wakebe, Toshihiro; Ibaraki, Takuya; Kunii, Naoto; Matsuo, Takeshi; Saito, Nobuhito

    2015-01-01

    Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous and incongruent with auditory information. However, the neural basis of this auditory selection from audiovisual information is unknown, whereas the integration of audiovisual inputs has been intensively researched. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA for large audiovisual incongruence than for small incongruence, especially if the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when patients incorrectly perceived the auditory information due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.

  13. Robust audio-visual speech recognition under noisy audio-video conditions.

    Science.gov (United States)

    Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji

    2014-02-01

    This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added in either/both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weighted integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
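As a rough sketch of the frame-wise dynamic stream-weighting idea (not the authors' exact MWSP formulation; the class posteriors and candidate weights below are made-up toy values), fusing audio and video stream posteriors by keeping the weighting that maximizes the fused posterior might look like:

```python
import numpy as np

def combine_stream_posteriors(log_pa, log_pv, weights):
    """Toy MWSP-style fusion for one frame: for each candidate audio weight
    lam, form the weighted log-score lam*log_pa + (1-lam)*log_pv per class,
    renormalize to a posterior, and keep the weighting whose posterior is
    most peaked (i.e., has the maximum posterior probability)."""
    best = None
    for lam in weights:
        log_joint = lam * log_pa + (1 - lam) * log_pv
        post = np.exp(log_joint - log_joint.max())  # stable exponentiation
        post /= post.sum()                          # renormalize
        peak = post.max()
        if best is None or peak > best[0]:
            best = (peak, lam, post)
    return best  # (confidence, chosen audio weight, fused posterior)
```

With a sharply peaked audio posterior and a near-flat (uninformative) video posterior, this rule leans on the audio stream for that frame, mimicking the reliability-driven weight selection the paper describes.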

  14. Adults with Asperger Syndrome with and without a Cognitive Profile Associated with "Non-Verbal Learning Disability." A Brief Report

    Science.gov (United States)

    Nyden, Agneta; Niklasson, Lena; Stahlberg, Ola; Anckarsater, Henrik; Dahlgren-Sandberg, Annika; Wentz, Elisabet; Rastam, Maria

    2010-01-01

    Asperger syndrome (AS) and non-verbal learning disability (NLD) are both characterized by impairments in motor coordination, visuo-perceptual abilities, pragmatics and comprehension of language and social understanding. NLD is also defined as a learning disorder affecting functions in the right cerebral hemisphere. The present study investigates…

  15. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate…

  16. Encoding of Physics Concepts: Concreteness and Presentation Modality Reflected by Human Brain Dynamics

    OpenAIRE

    Lai, Kevin; She, Hsiao-Ching; Chen, Sheng-Chang; Chou, Wen-Chi; Huang, Li-Yu; Jung, Tzyy-Ping; Gramann, Klaus

    2012-01-01

    Previous research into working memory has focused on activations in different brain areas accompanying either different presentation modalities (verbal vs. non-verbal) or concreteness (abstract vs. concrete) of non-science concepts. Less research has been conducted investigating how scientific concepts are learned and further processed in working memory. To bridge this gap, the present study investigated human brain dynamics associated with encoding of physics concepts, taking both presentati...

  17. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  18. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    Science.gov (United States)

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec, which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then…

  19. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    Science.gov (United States)

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Automatic analysis of children’s engagement using interactional network features

    NARCIS (Netherlands)

    Kim, Jaebok; Truong, Khiet Phuong

    We explored the automatic analysis of vocal non-verbal cues of a group of children in the context of engagement and collaborative play. For the current study, we defined two types of engagement on groups of children: harmonised and unharmonised. A spontaneous audiovisual corpus with groups of

  1. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    …effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine…

  2. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  3. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    Directory of Open Access Journals (Sweden)

    Sevtap GÜNAY KÖPRÜLÜ

    2016-04-01

    Full Text Available Audiovisual translation, dating back to the silent film era, is a special translation method developed for translating the movies and programs shown on TV and in cinemas. Hence, in the beginning, the term “film translation” was used for this type of translation. Owing to the growing number of audiovisual texts, it has attracted the interest of researchers and has been studied within translation studies. In our country, too, the concept of film translation was used for this area, but recently the term audiovisual translation has come into use, especially in scientific work, since it encompasses not only films but all audiovisual communication tools. This study analyzes the aspects the translator should take into consideration during the audiovisual translation process within the framework of the source text, the translated text, the film, and technical knowledge. The study shows that, apart from linguistic and paralinguistic factors, there are further factors that must be considered carefully, as they can influence the quality of the translation, and that these factors require technical knowledge in translation. In this sense, audiovisual translation is approached from a different angle than in previous research.

  4. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must be...

  5. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real-time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  6. Bi-directional effects of depressed mood in the postnatal period on mother-infant non-verbal engagement with picture books.

    Science.gov (United States)

    Reissland, Nadja; Burt, Mike

    2010-12-01

    The purpose of the present study is to examine the bi-directional nature of maternal depressed mood in the postnatal period on maternal and infant non-verbal behaviors while looking at a picture book. Although it is acknowledged that non-verbal engagement with picture books in infancy plays an important role, the effect of maternal depressed mood on stimulating the interest of infants in books is not known. Sixty-one mothers and their infants, 38 boys and 23 girls, were observed twice approximately 3 months apart (first observation: mean age 6.8 months, range 3-11 months, 32 mothers with depressed mood; second observation: mean age 10.2 months, range 6-16 months, 17 mothers with depressed mood). There was a significant effect of depressed mood on negative behaviors: infants of mothers with depressed mood tended to push away and close books more often. Negative behaviors (pushing the book away or closing it on the part of the infant, and withholding the book or restraining the infant on the part of the mother), if expressed during the first visit, were more likely to be expressed during the second visit. Levels of negative behaviors by mother and infant were strongly related during each visit. Additionally, the pattern between visits suggests that maternal negative behavior may be the cause of her infant's negative behavior. These results are discussed in terms of the effects of maternal depressed mood on the bi-directional relation of non-verbal engagement of mother and child. Crown Copyright © 2010. Published by Elsevier Inc. All rights reserved.

  7. Computerized training of non-verbal reasoning and working memory in children with intellectual disability

    Directory of Open Access Journals (Sweden)

    Stina eSöderqvist

    2012-10-01

    Full Text Available Children with intellectual disabilities show deficits in both reasoning ability and working memory (WM) that impact everyday functioning and academic achievement. In this study we investigated the feasibility of cognitive training for improving WM and non-verbal reasoning (NVR) ability in children with intellectual disability. Participants were randomized to a 5-week adaptive training program (intervention group) or a non-adaptive version of the program (active control group). Cognitive assessments were conducted prior to and directly after training, and one year later, to examine effects of the training. Improvements during training varied widely, and amount of progress during training predicted transfer to WM and comprehension of instructions, with higher training progress being associated with greater transfer effects. The strongest predictors of training progress were found to be gender, co-morbidity and baseline capacity in verbal WM. In particular, females without an additional diagnosis and with higher baseline performance showed greater progress. No significant effects of training were observed at the one-year follow-up, suggesting that training should be more intense or repeated in order for effects to persist in children with intellectual disabilities. A major finding of this study is that cognitive training is feasible in children with intellectual disabilities and can help improve their cognitive capacities. However, a minimum cognitive capacity or training ability seems necessary for the training to be beneficial, with some individuals showing little improvement in performance. Future studies of cognitive training should take into consideration how inter-individual differences in training progress influence transfer effects and further investigate how baseline capacities predict training outcome.

  8. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  9. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    Science.gov (United States)

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  10. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    Science.gov (United States)

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  11. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    Science.gov (United States)

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Contextual analysis of human non-verbal guide behaviors to inform the development of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    This paper reports the first step in a series of studies to design the interaction behaviors of an outdoor robotic guide. We describe and report the use case development carried out to identify effective human tour guide behaviors. In this paper we focus on non-verbal communication cues in gaze,

  13. Audiovisual consumption and its social logics on the web

    OpenAIRE

    Rose Marie Santini; Juan C. Calvi

    2013-01-01

    This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the Internet global traffic of audiovisual files since 2008 to identify the formats and modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as “Internet culture”.

  14. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  15. [Audio-visual communication in the history of psychiatry].

    Science.gov (United States)

    Farina, B; Remoli, V; Russo, F

    1993-12-01

    The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings and the first daguerreotype prints to cinematography and modern audiovisual systems, they observe an increasing diffusion of new communication techniques in psychiatry, and describe the use of the different techniques in psychiatric practice. The article ends with a brief review of current applications of audiovisual media in therapy, training, teaching, and research.

  16. Non-verbal emotion communication training induces specific changes in brain function and structure.

    Science.gov (United States)

    Kreifelts, Benjamin; Jacob, Heike; Brück, Carolin; Erb, Michael; Ethofer, Thomas; Wildgruber, Dirk

    2013-01-01

    The perception of emotional cues from voice and face is essential for social interaction. However, this process is altered in various psychiatric conditions along with impaired social functioning. Emotion communication trainings have been demonstrated to improve social interaction in healthy individuals and to reduce emotional communication deficits in psychiatric patients. Here, we investigated the impact of a non-verbal emotion communication training (NECT) on cerebral activation and brain structure in a controlled and combined functional magnetic resonance imaging (fMRI) and voxel-based morphometry study. NECT-specific reductions in brain activity occurred in a distributed set of brain regions including face and voice processing regions as well as emotion processing- and motor-related regions presumably reflecting training-induced familiarization with the evaluation of face/voice stimuli. Training-induced changes in non-verbal emotion sensitivity at the behavioral level and the respective cerebral activation patterns were correlated in the face-selective cortical areas in the posterior superior temporal sulcus and fusiform gyrus for valence ratings and in the temporal pole, lateral prefrontal cortex and midbrain/thalamus for the response times. A NECT-induced increase in gray matter (GM) volume was observed in the fusiform face area. Thus, NECT induces both functional and structural plasticity in the face processing system as well as functional plasticity in the emotion perception and evaluation system. We propose that functional alterations are presumably related to changes in sensory tuning in the decoding of emotional expressions. Taken together, these findings highlight that the present experimental design may serve as a valuable tool to investigate the altered behavioral and neuronal processing of emotional cues in psychiatric disorders as well as the impact of therapeutic interventions on brain function and structure.

  17. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
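The classification procedure described above can be sketched as a classification-image computation over random frame-visibility masks. A toy simulation, assuming a hypothetical observer whose /apa/ reports drop when certain mask frames reveal the mouth; the frame indices and probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 2000, 30

# Random per-frame visibility masks (1 = mouth region visible that frame).
masks = rng.integers(0, 2, size=(n_trials, n_frames))

# Hypothetical observer: when the (assumed) critical frames 10-14 are
# visible, visual /aka/ is effective, fusion to /ata/ wins, and the
# listener answers "no" to the /apa/ question more often.
critical = np.zeros(n_frames)
critical[10:15] = 1
p_apa = 0.35 - 0.25 * (masks @ critical) / critical.sum()
resp_apa = rng.random(n_trials) < p_apa

# Classification image: mean mask on /apa/ trials minus the rest.
cimg = masks[resp_apa].mean(axis=0) - masks[~resp_apa].mean(axis=0)
# Strongly negative frames are those whose visibility suppressed /apa/
# reports, i.e. the perceptually relevant visual speech information.
```

The study's actual analysis was spatiotemporal (masking regions of each frame, not whole frames), but the logic of relating response variability to mask variability is the same.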

  18. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    Science.gov (United States)

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  19. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  20. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  1. DANCING AROUND THE SUBJECT WITH ROBOTS: ETHICAL COMMUNICATION AS A “TRIPLE AUDIOVISUAL REALITY”

    Directory of Open Access Journals (Sweden)

    Eleanor Sandry

    2012-06-01

    Full Text Available Communication is often thought of as a bridge between self and other, supported by what they have in common, and pursued with the aim of further developing this commonality. However, theorists such as John Durham Peters and Amit Pinchevski argue that this conception, connected as it is with the need to resolve and remove difference, is inherently ‘violent’ to the other and therefore unethical. To encourage ethical communication, they suggest that theory should instead support acts of communication for which the differences between self and other are not only retained, but also valued for the possibilities they offer. As a means of moving towards a more ethical stance, this paper stresses the importance of understanding communication as more than the transmission of information in spoken and written language. In particular, it draws on Fernando Poyatos’ research into simultaneous translation, which suggests that communication is a “triple audiovisual reality” consisting of language, paralanguage and kinesics. This perspective is then extended by considering the way in which Alan Fogel’s dynamic systems model also stresses the place of nonverbal signs. The paper explores and illustrates these theories by considering human-robot interactions because analysis of such interactions, with both humanoid and non-humanoid robots, helps to draw out the importance of paralanguage and kinesics as elements of communication. The human-robot encounters discussed here also highlight the way in which these theories position both reason and emotion as valuable in communication. The resulting argument – that communication occurs as a dynamic process, relying on a triple audiovisual reality drawn from both reason and emotion – supports a theoretical position that values difference, rather than promoting commonality as a requirement for successful communicative events. In conclusion, this paper extends this theory and suggests that it can form a basis

  2. (Re)Constructing the Wicked Problem Through the Visual and the Verbal

    DEFF Research Database (Denmark)

    Holm Jacobsen, Peter; Harty, Chris; Tryggestad, Kjell

    2016-01-01

    Wicked problems are open-ended and complex societal problems. There is a lack of empirical research into the dynamics and mechanisms that (re)construct problems to become wicked. This paper builds on an ethnographic study of a dialogue-based architect competition to do just that. The competition...... processes creates new knowledge and insights, but at the same time presents new problems related to the ongoing verbal feedback. The design problem being (re)constructed appears as Heracles' fight with the Hydra: every time Heracles cut off a head, two new heads grew back. The paper contributes to understanding...... the relationship between the visual and the verbal (dialogue) in complex design processes in the early phases of large construction projects, and how the dynamic interplay between design visualization and verbal dialogue develops before the competition produces, or negotiates, “a winning design”.

  3. Non-verbal Persuasion and Communication in an Affective Agent

    NARCIS (Netherlands)

    André, Elisabeth; Bevacqua, Elisabetta; Heylen, Dirk K.J.; Niewiadomski, Radoslaw; Pelachaud, Catherine; Peters, Christopher; Poggi, Isabella; Rehm, Matthias; Cowie, Roddy; Pelachaud, Catherine; Petta, Paolo

    2011-01-01

    This chapter deals with the communication of persuasion. Only a small percentage of communication involves words: as the old saying goes, “it’s not what you say, it’s how you say it”. While this likely underestimates the importance of good verbal persuasion techniques, it is accurate in underlining

  4. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819
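The "range of asynchronies most often endorsed as synchronized" can be estimated by interpolating where synchrony reports cross a criterion on each flank of the offset axis. A sketch with toy response rates (not the study's data):

```python
import numpy as np

# Video-lead offsets in ms (negative = audio leads) and the proportion of
# trials each offset was judged "synchronized" (invented for one observer).
offsets = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.45, 0.15])

def synchrony_window(offsets, p_sync, criterion=0.5):
    """Width of the offset range endorsed as synchronized above criterion,
    via linear interpolation on the rising and falling flanks."""
    lo = np.interp(criterion, p_sync[:4], offsets[:4])              # rising flank
    hi = np.interp(criterion, p_sync[3:][::-1], offsets[3:][::-1])  # falling flank
    return hi - lo

width = synchrony_window(offsets, p_sync)  # narrower = more sensitive observer
```

Comparing this width across instrument conditions (own instrument vs. others) would operationalize the instrument-specificity question posed in the abstract.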

  5. Audiovisual consumption and its social logics on the web

    Directory of Open Access Journals (Sweden)

    Rose Marie Santini

    2013-06-01

    Full Text Available This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the global Internet traffic of audiovisual files since 2008 to identify the formats, modes of distribution, and consumption of audiovisual content that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as “Internet culture”.

  6. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical......, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing...

  7. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the

  8. Contrasting visual working memory for verbal and non-verbal material with multivariate analysis of fMRI

    Science.gov (United States)

    Habeck, Christian; Rakitin, Brian; Steffener, Jason; Stern, Yaakov

    2012-01-01

    We performed a delayed-item-recognition task to investigate the neural substrates of non-verbal visual working memory with event-related fMRI (‘Shape task’). Twenty-five young subjects (mean age: 24.0 years; SD = 3.8 years) were instructed to study a list of either 1, 2, or 3 unnamable nonsense line drawings for 3 seconds (‘stimulus phase’ or STIM). Subsequently, the screen went blank for 7 seconds (‘retention phase’ or RET) and then displayed a probe stimulus for 3 seconds, during which subjects indicated with a differential button press whether the probe was contained in the studied shape array or not (‘probe phase’ or PROBE). Ordinal Trend Canonical Variates Analysis (Habeck et al., 2005a) was performed to identify spatial covariance patterns that showed a monotonic increase in expression with memory load during all task phases. Reliable load-related patterns were identified in the stimulus and retention phases, comprising regions whose activity increased with memory load as well as mediofrontal and temporal regions that were decreasing. Mean subject expression of both patterns across memory load during retention also correlated positively with recognition accuracy (dL) in the Shape task, consistent with material-independent rehearsal processes. Encoding processes, on the other hand, are critically dependent on the to-be-remembered material, and seem to necessitate material-specific neural substrates. PMID:22652306

  9. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information, which was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance and a dedicated operational team dealt with the logistics of the project including: ethics; finance; scriptwriting; filming; editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  10. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
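The N1 latency and amplitude measures compared across groups above are conventionally read off the averaged ERP as the most negative deflection within a post-stimulus search window. A minimal sketch on a synthetic waveform; the 80-150 ms window is a common convention, not necessarily the study's:

```python
import numpy as np

def n1_peak(erp, times, window=(0.08, 0.15)):
    """Latency (s) and amplitude of the most negative deflection of an
    averaged ERP within an assumed N1 search window."""
    erp, times = np.asarray(erp), np.asarray(times)
    sel = (times >= window[0]) & (times <= window[1])
    i = np.argmin(erp[sel])
    return times[sel][i], erp[sel][i]

# Synthetic averaged waveform: a Gaussian-shaped negativity near 120 ms.
times = np.linspace(0.0, 0.4, 401)
erp = -3.0 * np.exp(-(((times - 0.12) / 0.02) ** 2))

lat, amp = n1_peak(erp, times)
```

On this toy waveform the function recovers the 120 ms peak; group differences such as those reported (prolonged latency, reduced amplitude in ABI/AMI patients) would appear as shifts in `lat` and `amp`.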

  11. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  12. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez

    2012-07-01

    Full Text Available Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today’s digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass media omnipresence. Neither the education system, nor civic associations, nor the media themselves have promoted audiovisual skills to make people critically competent when viewing media. This study aims to provide an updated conceptualization of the “audiovisual skill” in this digital environment and transpose it onto a specific interventional environment, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments and devise training programmes for the various sectors involved.

  13. Situación actual de la traducción audiovisual en Colombia

    Directory of Open Access Journals (Sweden)

    Jeffersson David Orrego Carmona

    2010-05-01

    Full Text Available Objectives: this article has two aims: to present an overview of the current audiovisual translation market in Colombia and to highlight the importance of developing studies in this area. Method: the methodology included reviewing the literature on the subject, administering surveys to different groups involved in audiovisual translation, and analyzing the results. Results: these showed a general lack of awareness of this work and revealed the preferences of the surveyed groups regarding audiovisual translation modalities; a marked preference for subtitling was observed, for reasons particular to each group. Conclusions: Colombian translators need training in audiovisual translation to meet market demands, and the importance of developing more in-depth studies focused on the development of audiovisual translation in Colombia is underscored.

  14. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  15. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

    In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre...... and museum studies, existing case studies, and real-life observations, the suggested framework instead stresses particular characteristics of the contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication...

  16. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

    In the developed world, the cultural value of the audiovisual media gained legitimacy and widening acceptance after World War II, and this is what Africa still requires. There are a lot of problems in Africa, and because of this, activities such as preservation of a historical record, especially in the audiovisual media are seen as ...

  17. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.
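The shift in the point of subjective synchrony (PSS) that defines temporal recalibration can be quantified from synchrony-report rates across offsets. A sketch using a simple weighted-mean estimator and invented before/after data; published studies typically fit a psychometric function instead:

```python
import numpy as np

def pss(offsets, p_sync):
    """Point of subjective synchrony as the synchrony-report-weighted mean
    offset: a simple, model-free estimate of where 'synchronized' centres."""
    offsets = np.asarray(offsets, dtype=float)
    p_sync = np.asarray(p_sync, dtype=float)
    return (offsets * p_sync).sum() / p_sync.sum()

# Offsets in ms (positive = audio leads); report rates are invented.
offsets = [-200, -100, 0, 100, 200]
before = [0.2, 0.7, 0.9, 0.6, 0.1]  # baseline session
after = [0.1, 0.5, 0.9, 0.8, 0.3]   # after exposure to audio-leading pairs

shift = pss(offsets, after) - pss(offsets, before)
# A positive shift: "synchronized" now centres on audio-leading offsets,
# the signature of temporal recalibration.
```

The concurrent, opposing recalibrations reported above would correspond to shifts of opposite sign computed separately for each featurally defined stimulus pair.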

  18. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    Science.gov (United States)

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants that are easily visually distracted and those who are not. To do so, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second experiment focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second one, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.
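Distinguishing "accurate" from "less accurate" listeners in a deviant-detection task is commonly done with the signal-detection sensitivity index d′. A sketch with hypothetical trial counts; the log-linear correction is a standard convention, not necessarily the authors' choice:

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Yes/no sensitivity index with a log-linear correction so that
    perfect or empty cells do not yield infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # corrected false-alarm rate
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# Hypothetical trial counts: an accurate vs. a less accurate listener.
acc = d_prime(hits=45, misses=5, fas=8, crs=42)
low = d_prime(hits=30, misses=20, fas=20, crs=30)
```

Splitting participants on such a score is one way to form the accurate/less-accurate groups whose appraisal ratings are then compared.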

  19. Longitudinal effects of single-dose simulation education with structured debriefing and verbal feedback on endotracheal suctioning knowledge and skills: A randomized controlled trial.

    Science.gov (United States)

    Jansson, Miia M; Syrjälä, Hannu P; Ohtonen, Pasi P; Meriläinen, Merja H; Kyngäs, Helvi A; Ala-Kokko, Tero I

    2017-01-01

    We evaluated the longitudinal effects of single-dose simulation education with structured debriefing and verbal feedback on critical care nurses' endotracheal suctioning knowledge and skills. To do this we used an experimental design without any other competing interventions. Twenty-four months after simulation education, no significant time and group differences or time × group interactions were identified between the study groups. The need for regularly repeated educational interventions with audiovisual or individualized performance feedback and repeated bedside demonstrations is evident. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  20. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

    http://dx.doi.org/10.5007/2175-7968.2012v1n29p145. Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  1. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create audiovisual...

  2. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools necessary for the translator to perform his/her task satisfactorily as well as pointing to future perspectives, we make a list of sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  3. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with…

  4. Kreative metoder i verbal supervision

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2013-01-01

    , movements in the room, etc.) and 4) communication takes place primarily through verbal-linguistic exchanges. Following a discussion of the relationship between creativity and creative methods, the focus turns to the relevance of, and ways of gaining access to, unconscious manifestations. The non- and paraverbal significance of language is included. A central…

  5. How physician electronic health record screen sharing affects patient and doctor non-verbal communication in primary care.

    Science.gov (United States)

    Asan, Onur; Young, Henry N; Chewning, Betty; Montague, Enid

    2015-03-01

    Use of electronic health records (EHRs) in primary-care exam rooms changes the dynamics of patient-physician interaction. This study examines and compares doctor-patient non-verbal communication (eye-gaze patterns) during primary care encounters for three different screen/information sharing groups: (1) active information sharing, (2) passive information sharing, and (3) technology withdrawal. Researchers video recorded 100 primary-care visits and coded the direction and duration of doctor and patient gaze. Descriptive statistics compared the length of gaze patterns as a percentage of visit length. Lag sequential analysis determined whether physician eye gaze influenced patient eye gaze, and vice versa, and examined variations across groups. Significant differences were found in duration of gaze across groups. Lag sequential analysis found significant associations between several gaze patterns. Some, such as DGP-PGD ("doctor gaze patient" followed by "patient gaze doctor"), were significant for all groups. Others, such as DGT-PGU ("doctor gaze technology" followed by "patient gaze unknown"), were unique to one group. Some technology use styles (active information sharing) seem to create more patient engagement, while others (passive information sharing) lead to patient disengagement. Doctors can engage patients in communication by using EHRs in the visits. EHR training and design should facilitate this. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
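    The core of a lag sequential analysis like the one above is counting how often one coded event immediately follows another and comparing the observed transition rates against chance. A minimal sketch, assuming a simple lag-1 analysis on a coded event stream (the gaze codes such as "DGP" and "PGD" follow the paper's naming, but the event sequence itself is invented for illustration):

    ```python
    # Lag-1 sequential analysis sketch: estimate P(next event = b | current event = a)
    # from a coded stream of gaze events. Codes: DGP = doctor gazes patient,
    # PGD = patient gazes doctor, DGT = doctor gazes technology, PGU = patient gaze unknown.
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    def lag1_transition_probs(events):
        """Return P(next=b | current=a) for each observed (a, b) pair."""
        pair_counts = Counter(pairwise(events))       # counts of adjacent pairs
        base_counts = Counter(events[:-1])            # how often each code can start a pair
        return {(a, b): n / base_counts[a] for (a, b), n in pair_counts.items()}

    # Invented example stream of coded gaze events:
    events = ["DGP", "PGD", "DGT", "PGU", "DGP", "PGD", "DGP", "DGT"]
    probs = lag1_transition_probs(events)
    print(probs[("DGP", "PGD")])  # fraction of DGP events immediately followed by PGD
    ```

    In the study proper, such conditional probabilities would then be tested against the unconditional base rates (e.g. with z-scores) to decide which transitions, like DGP-PGD, occur more often than chance.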

  6. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are presented in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they are trained to detect and saccade toward visual targets presented in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced the "online" effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual…

  7. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contribute to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given…
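    A just noticeable difference of the kind reported above is conventionally read off the psychometric function of a temporal order judgment task: the proportion of "visual first" responses is plotted against SOA, the SOAs yielding 25% and 75% "visual first" responses are located, and half that interval is taken as the JND. A minimal sketch of that convention, using linear interpolation and invented response data (the paper does not give per-rat psychometric data):

    ```python
    # JND estimation sketch for a temporal order judgment task.
    # Negative SOA = auditory stimulus first; positive = visual first.
    import numpy as np

    soas = np.array([-200, -100, -10, 0, 10, 100, 200])            # SOA in ms (invented)
    p_visual_first = np.array([0.05, 0.20, 0.45, 0.52, 0.60, 0.85, 0.95])  # invented responses

    # Interpolate the SOAs at which 25% and 75% of responses are "visual first".
    # np.interp requires the x-coordinates (here the proportions) to be increasing.
    soa_25 = np.interp(0.25, p_visual_first, soas)
    soa_75 = np.interp(0.75, p_visual_first, soas)

    jnd = (soa_75 - soa_25) / 2.0   # half the 25%-75% interval, in ms
    print(round(jnd, 1))
    ```

    A full analysis would typically fit a cumulative Gaussian or logistic function rather than interpolate raw points, but the JND definition is the same.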

  8. A pilot study of audiovisual family meetings in the intensive care unit.

    Science.gov (United States)

    de Havenon, Adam; Petersen, Casey; Tanana, Michael; Wold, Jana; Hoesch, Robert

    2015-10-01

    We hypothesized that virtual family meetings in the intensive care unit with conference calling or Skype videoconferencing would result in increased family member satisfaction and more efficient decision making. This is a prospective, nonblinded, nonrandomized pilot study. A 6-question survey was completed by family members after family meetings, some of which used conference calling or Skype by choice. Overall, 29 (33%) of the completed surveys came from audiovisual family meetings vs 59 (67%) from control meetings. The survey data were analyzed using hierarchical linear modeling, which did not find any significant group differences between satisfaction with the audiovisual meetings vs controls. There was no association between the audiovisual intervention and withdrawal of care (P = .682) or overall hospital length of stay (z = 0.885, P = .376). Although we do not report benefit from an audiovisual intervention, these results are preliminary and heavily influenced by notable limitations to the study. Given that the intervention was feasible in this pilot study, audiovisual and social media intervention strategies warrant additional investigation given their unique ability to facilitate communication among family members in the intensive care unit. Copyright © 2015 Elsevier Inc. All rights reserved.
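    Hierarchical linear modeling, as used in the survey analysis above, accounts for the fact that several meetings (and surveys) are nested within the same family, so responses from one family are not independent. A minimal sketch with a random intercept per family, assuming statsmodels' mixed-effects API; the variable names and data are invented, not the study's:

    ```python
    # Hierarchical (mixed-effects) model sketch: satisfaction scores nested in families,
    # with a fixed effect for the audiovisual-meeting condition and a random
    # intercept per family. Data are simulated for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_families, n_meetings = 30, 3
    df = pd.DataFrame({
        "family": np.repeat(np.arange(n_families), n_meetings),
        "audiovisual": rng.integers(0, 2, n_families * n_meetings),   # 1 = AV meeting
        "satisfaction": rng.normal(4.0, 0.5, n_families * n_meetings),  # 1-5 scale-ish
    })

    # groups= defines the nesting level; each family gets its own intercept.
    model = smf.mixedlm("satisfaction ~ audiovisual", df, groups=df["family"])
    result = model.fit()
    print(result.params["audiovisual"])  # estimated fixed effect of the AV condition
    ```

    Because the simulated outcome is unrelated to the condition, the fitted effect hovers near zero, mirroring the null group difference the pilot study reports.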

  9. Development and psychometric validation of the verbal affective memory test

    DEFF Research Database (Denmark)

    Jensen, Christian Gaden; Hjordt, Liv V; Stenbæk, Dea S

    2015-01-01

    We here present the development and validation of the Verbal Affective Memory Test-24 (VAMT-24). First, we ensured face validity by selecting 24 words reliably perceived as positive, negative or neutral, respectively, according to healthy Danish adults' valence ratings of 210 common and non… Furthermore, larger seasonal decreases in positive recall significantly predicted larger increases in depressive symptoms. Retest reliability was satisfactory, rs ≥ .77. In conclusion, VAMT-24 is more thoroughly developed and validated than existing verbal affective memory tests and showed satisfactory psychometric properties. VAMT-24 seems especially sensitive to measuring positive verbal recall bias, perhaps due to the application of common, non-taboo words. Based on the psychometric and clinical results, we recommend VAMT-24 for international translations and studies of affective memory.

  10. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  11. La comunicación corporativa audiovisual: propuesta metodológica de estudio

    OpenAIRE

    Lorán Herrero, María Dolores

    2016-01-01

    This research revolves around two concepts, Audiovisual Communication and Corporate Communication, disciplines that affect organizations and that are articulated in such a way that they give rise to Audiovisual Corporate Communication, the concept proposed in this thesis. A classification and definition of the formats that organizations use for their communication is carried out. The aim is to be able to analyze any corporate audiovisual document in order to establish whether the l…

  12. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    Science.gov (United States)

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  13. Verbal memory retrieval engages visual cortex in musicians.

    Science.gov (United States)

    Huang, Z; Zhang, J X; Yang, Z; Dong, G; Wu, J; Chan, A S; Weng, X

    2010-06-16

    As one major line of research on brain plasticity, many imaging studies have been conducted to identify the functional and structural reorganization associated with musical expertise. Based on previous behavioral research, the present study used functional magnetic resonance imaging to identify the neural correlates of superior verbal memory performance in musicians. Participants with and without musical training performed a verbal memory task to first encode a list of words auditorily delivered and then silently recall as many words as possible. They performed in separate blocks a control task involving pure tone pitch judgment. Post-scan recognition test showed better memory performance in musicians than non-musicians. During memory retrieval, the musicians showed significantly greater activations in bilateral though left-lateralized visual cortex relative to the pitch judgment baseline. In comparison, no such visual cortical activations were found in the non-musicians. No group differences were observed during the encoding stage. The results echo a previous report of visual cortical activation during verbal memory retrieval in the absence of any visual sensory stimulation in the blind population, who are also known to possess superior verbal memory. It is suggested that the visual cortex can be recruited to serve as extra memory resources and contributes to the superior verbal memory in special situations. While in the blind population, such cross-modal functional reorganization may be induced by sensory deprivation; in the musicians it may be induced by the long-term and demanding nature of musical training to use as much available neural resources as possible. 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    Science.gov (United States)

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gredebäck, Gustaf

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  15. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  16. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL... audiovisual, cartographic, and related records? The disposition instructions should also provide that...

  17. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  18. Adverse Life Events and Emotional and Behavioral Problems in Adolescence: The Role of Non-Verbal Cognitive Ability and Negative Cognitive Errors

    Science.gov (United States)

    Flouri, Eirini; Panourgia, Constantina

    2011-01-01

    The aim of this study was to test whether negative cognitive errors (overgeneralizing, catastrophizing, selective abstraction, and personalizing) mediate the moderator effect of non-verbal cognitive ability on the association between adverse life events (life stress) and emotional and behavioral problems in adolescence. The sample consisted of 430…

  19. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    Science.gov (United States)

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

    Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercises performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise lasted 10 min (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rate of perceived exertion and heart rate variability were monitored to further our understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree when compared to PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation were real. This mirroring mechanism has the potential to up-/down-regulate the cardiac work as if in fact the exercise intensities were different in each condition.
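    Heart rate variability, monitored throughout the exercise bout above, is commonly summarized with time-domain indices computed from successive RR intervals. A minimal sketch of one such index, RMSSD (root mean square of successive differences), with invented RR intervals; the abstract does not state which HRV index the authors used:

    ```python
    # RMSSD sketch: a standard time-domain heart rate variability index.
    # Input is a list of RR intervals (time between heartbeats) in milliseconds.
    import math

    def rmssd(rr_intervals_ms):
        """Root mean square of successive RR-interval differences, in ms."""
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    rr = [800, 810, 790, 805, 795]  # invented RR intervals (ms)
    print(round(rmssd(rr), 2))
    ```

    Higher RMSSD generally reflects greater parasympathetic (vagal) activity, so a drop under the unpleasant condition would be consistent with the increased autonomic stress the study reports.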

  20. Direct observation of mother-child communication in pediatric cancer: assessment of verbal and non-verbal behavior and emotion.

    Science.gov (United States)

    Dunn, Madeleine J; Rodriguez, Erin M; Miller, Kimberly S; Gerhardt, Cynthia A; Vannatta, Kathryn; Saylor, Megan; Scheule, C Melanie; Compas, Bruce E

    2011-06-01

    To examine the acceptability and feasibility of coding observed verbal and nonverbal behavioral and emotional components of mother-child communication among families of children with cancer. Mother-child dyads (N=33, children ages 5-17 years) were asked to engage in a videotaped 15-min conversation about the child's cancer. Coding was done using the Iowa Family Interaction Rating Scale (IFIRS). Acceptability and feasibility of direct observation in this population were partially supported: 58% consented and 81% of those (47% of all eligible dyads) completed the task; trained raters achieved 78% agreement in ratings across codes. The construct validity of the IFIRS was demonstrated by expected associations within and between positive and negative behavioral/emotional code ratings and between mothers' and children's corresponding code ratings. Direct observation of mother-child communication about childhood cancer has the potential to be an acceptable and feasible method of assessing verbal and nonverbal behavior and emotion in this population.

  1. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah Jessen

    2013-07-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect in multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory one. Thereby, leading in visual information can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  2. On the role of crossmodal prediction in audiovisual emotion perception.

    Science.gov (United States)

    Jessen, Sarah; Kotz, Sonja A

    2013-01-01

    Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect in multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. Thereby, leading in visual information can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable predicting of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 EEG response and the duration of visual emotional, but not non-emotional information. If the assumption that emotional content allows more reliable predicting can be corroborated in future studies, cross-modal prediction is a crucial factor in our understanding of multisensory emotion perception.

  3. Verbal behavior

    OpenAIRE

    Michael, Jack

    1984-01-01

    The recent history and current status of the area of verbal behavior are considered in terms of three major thematic lines: the operant conditioning of adult verbal behavior, learning to be an effective speaker and listener, and developments directly related to Skinner's Verbal Behavior. Other topics not directly related to the main themes are also considered: the work of Kurt Salzinger, ape-language research, and human operant research related to rule-governed behavior.

  4. "He says, she says": a comparison of fathers' and mothers' verbal behavior during child cold pressor pain.

    Science.gov (United States)

    Moon, Erin C; Chambers, Christine T; McGrath, Patrick J

    2011-11-01

    Mothers' behavior has a powerful impact on child pain. Maternal attending talk (talk focused on child pain) is associated with increased child pain whereas maternal non-attending talk (talk not focused on child pain) is associated with decreased child pain. The present study compared mothers' and fathers' verbal behavior during child pain. Forty healthy 8- to 12-year-old children completed the cold pressor task (CPT)-once with their mothers present and once with their fathers present in a counterbalanced order. Parent verbalizations were coded as Attending Talk or Non-Attending Talk. Results indicated that child symptom complaints were positively correlated with parent Attending Talk and negatively correlated with parent Non-Attending Talk. Furthermore, child pain tolerance was negatively correlated with parent Attending Talk and positively correlated with parent Non-Attending Talk. Mothers and fathers did not use different proportions of Attending or Non-Attending Talk. Exploratory analyses of parent verbalization subcodes indicated that mothers used more nonsymptom-focused verbalizations whereas fathers used more criticism (a low-frequency occurrence). The findings indicate that for both mothers and fathers, verbal attention is associated with higher child pain and verbal non-attention is associated with lower child pain. The results also suggest that mothers' and fathers' verbal behavior during child pain generally does not differ. To date, studies of the effects of parental behavior on child pain have focused almost exclusively on mothers. The present study compared mothers' and fathers' verbal behavior during child pain. The results can be used to inform clinical recommendations for mothers and fathers to help their children cope with pain. Copyright © 2011 American Pain Society. Published by Elsevier Inc. All rights reserved.

  5. KAMAN PELAYANAN MEDIA AUDIOVISUAL: STUDI KASUS DI THE BRITISH COUNCIL JAKARTA

    Directory of Open Access Journals (Sweden)

    Hindar Purnomo

    2015-12-01

    The aim of this study was to determine how audiovisual (AV) media services are provided, how effective the services are, and how satisfied users are with various aspects of the services. The study was conducted at The British Council Jakarta as an evaluation, since this approach reveals the various phenomena that occur. The British Council library provides three types of media: video cassettes, audio cassettes, and BBC television broadcasts. The subjects were users of the audiovisual media services who were registered as members, grouped by age and by purpose of AV media use. Questionnaire data were collected from 157 respondents (75.48%) and analyzed statistically with the Kruskal-Wallis one-way analysis of variance. The results show that all three media were popular with many users, especially in the younger age groups. Most users preferred fiction to non-fiction, and they used audiovisual media to seek information and knowledge. The audiovisual media services proved highly effective, judging from collection-use figures and levels of user satisfaction. Hypothesis testing showed no significant differences between age groups, or between purposes of use, in their responses to the various aspects of audiovisual media services. Keywords: audiovisual media, library services.
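    The Kruskal-Wallis test used above is a rank-based, non-parametric analogue of one-way ANOVA, suited to ordinal satisfaction ratings compared across independent groups. A minimal sketch with scipy; the rating data are invented for illustration, not the study's:

    ```python
    # Kruskal-Wallis one-way analysis of variance sketch: compare satisfaction
    # ratings (e.g. on a 1-5 scale) across three independent age groups.
    from scipy.stats import kruskal

    young = [4, 5, 4, 3, 5]    # invented ratings per group
    middle = [3, 4, 3, 4, 3]
    older = [4, 3, 3, 2, 4]

    h_stat, p_value = kruskal(young, middle, older)
    # A p-value above the significance threshold would be consistent with the
    # study's finding of no significant differences between age groups.
    print(p_value > 0.05)
    ```

    The test ranks all observations jointly and compares mean ranks between groups, so it makes no normality assumption about the underlying ratings.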

  6. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli, comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
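    ERP studies of audiovisual integration such as this one commonly apply the additive criterion: the response to the combined audiovisual (AV) stimulus is compared against the sum of the unisensory auditory (A) and visual (V) responses, and any nonzero AV − (A + V) difference in a time window is taken as evidence of integration. A minimal sketch of that comparison with fabricated waveforms (the Gaussian shapes, amplitudes, and the sub-additive scaling are invented for illustration):

    ```python
    # Additive-criterion sketch for audiovisual ERP integration: compute the
    # interaction wave AV - (A + V) and inspect it in a time window of interest.
    import numpy as np

    t = np.arange(0, 400, 2)              # time in ms, 2 ms sampling (invented)
    a = np.exp(-((t - 150) ** 2) / 800)   # fabricated auditory ERP component
    v = np.exp(-((t - 180) ** 2) / 800)   # fabricated visual ERP component
    av = 0.8 * (a + v)                    # fabricated AV response (sub-additive here)

    interaction = av - (a + v)            # nonzero values suggest integration
    window = (t >= 140) & (t <= 200)      # e.g. the 140-200 ms window for 2.5 kHz tones
    print(interaction[window].mean() < 0) # sub-additive in this fabricated case
    ```

    In a real analysis the difference wave is averaged over trials and subjects and tested statistically per electrode and time window, which is how frequency-specific windows like 100-200 ms vs. 190-210 ms are identified.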

  7. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    Science.gov (United States)

    Karipidis, Iliana I.; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas, and with phonological awareness in left temporal areas. Correspondingly, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short training session initiates audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and the phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

  8. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Science.gov (United States)

    2010-07-01

    ... standards for transfer apply to audiovisual records, cartographic, and related records? 1235.42 Section 1235... Standards § 1235.42 What specifications and standards for transfer apply to audiovisual records... elements that are needed for future preservation, duplication, and reference for audiovisual records...

  9. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  10. Mathematics as verbal behavior.

    Science.gov (United States)

    Marr, M Jackson

    2015-04-01

    "Behavior which is effective only through the mediation of other persons has so many distinguishing dynamic and topographical properties that a special treatment is justified and indeed demanded" (Skinner, 1957, p. 2). Skinner's demand for a special treatment of verbal behavior can be extended within that field to domains such as music, poetry, drama, and the topic of this paper: mathematics. For centuries, mathematics has been of special concern to philosophers who have continually argued to the present day about what some deem its "special nature." Two interrelated principal questions have been: (1) Are the subjects of mathematical interest pre-existing in some transcendental realm and thus are "discovered" as one might discover a new planet; and (2) Why is mathematics so effective in the practices of science and engineering even though originally such mathematics was "pure" with applications neither contemplated or even desired? I argue that considering the actual practice of mathematics in its history and in the context of acquired verbal behavior one can address at least some of its apparent mysteries. To this end, I discuss some of the structural and functional features of mathematics including verbal operants, rule-and contingency-modulated behavior, relational frames, the shaping of abstraction, and the development of intuition. How is it possible to understand Nature by properly talking about it? Essentially, it is because nature taught us how to talk. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detecting awareness in patients with DOC. However, these patients are much less capable of using BCIs than healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the cued target stimuli. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses to target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  12. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  13. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

    Our media-rich, open-access Scalar e-book on the audiovisual essay practice (co-written with Thomas van den Berg) is available online: http://scalar.usc.edu/works/film-studies-in-motion Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere

  14. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  15. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced as well as a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  16. Catching Audiovisual Interactions With a First-Person Fisherman Video Game.

    Science.gov (United States)

    Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2017-07-01

    The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated at either 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.

  17. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure...... of audiovisual integration) for fusions was comparable to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. This article is protected...

  18. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    Science.gov (United States)

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  19. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication.

    Science.gov (United States)

    Symons, Ashley E; El-Deredy, Wael; Schwartze, Michael; Kotz, Sonja A

    2016-01-01

    Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple…

  20. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or in nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect, in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light, by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
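The just noticeable difference (JND) readout used in TOJ studies like this one can be sketched under a common modeling assumption not taken from this record: treating the psychometric function as a cumulative Gaussian, so the JND is proportional to its width sigma. The sigma values below are invented for illustration; temporal ventriloquism then shows up as a smaller JND in the click-lag condition.

```python
from statistics import NormalDist

def toj_jnd(sigma_ms, criterion=0.75):
    """JND for a TOJ task modeled as a cumulative Gaussian: the SOA (ms)
    separating the 50% point from the criterion proportion correct."""
    return sigma_ms * NormalDist().inv_cdf(criterion)

# Hypothetical psychometric-function widths (sigma, ms); a lagging click
# that sharpens temporal sensitivity yields a narrower function.
jnd_baseline = toj_jnd(80.0)   # visual-only condition (assumed sigma)
jnd_click    = toj_jnd(55.0)   # 200-ms click-lag condition (assumed sigma)
print(jnd_click < jnd_baseline)  # True: enhancement via temporal ventriloquism
```

Because the JND scales linearly with sigma under this model, any narrowing of the psychometric function translates directly into the reduced JNDs the record reports.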

  1. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, little is known about how temporal factors affect audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration shifted from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with expanding SOA was similar to that of younger adults; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA expanded, especially in peak latency for the V-preceded-A conditions. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
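The enhancement-versus-depression pattern described in this record is often quantified as percent multisensory response enhancement relative to a unisensory baseline. A minimal sketch with invented reaction times (the SOA labels follow the record's convention; none of the numbers are from the study):

```python
def facilitation(rt_unisensory, rt_multisensory):
    """Percent multisensory response enhancement; positive values mean the
    audiovisual response was faster than the unisensory baseline."""
    return 100.0 * (rt_unisensory - rt_multisensory) / rt_unisensory

# Invented mean reaction times (ms), shaped to mimic the reported pattern:
# enhancement near synchrony, depression at +/-150 ms SOA.
rt_visual_only = 400.0
rt_av_by_soa = {-150: 410.0, -50: 380.0, 0: 370.0, 50: 385.0, 150: 408.0}

for soa in sorted(rt_av_by_soa):
    effect = facilitation(rt_visual_only, rt_av_by_soa[soa])
    label = "enhancement" if effect > 0 else "depression"
    print(f"SOA {soa:+4d} ms: {effect:+5.1f}% ({label})")
```

Scanning this index across SOAs traces out the time window of integration: the SOA range over which the index stays positive.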

  2. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  3. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240, or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Notably, the 120-ms delay corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  4. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  5. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
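The contrast the authors draw between amplitude-based and phase-based metrics can be illustrated with the simplest amplitude-based measure: correlating the amplitude envelopes of two signals. This is a toy sketch in pure Python on synthetic data, where a rectify-and-smooth envelope stands in for the band-pass plus Hilbert-transform envelope used in real MEG pipelines:

```python
import math

def pearson(x, y):
    """Plain correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def envelope(signal, window=41):
    """Crude amplitude envelope: moving average of the rectified signal."""
    rect = [abs(v) for v in signal]
    half = window // 2
    env = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half):i + half + 1]
        env.append(sum(seg) / len(seg))
    return env

# Two synthetic 20 Hz (beta-band) oscillations sharing a slow amplitude
# modulation but offset by 90 degrees in carrier phase, loosely mimicking
# coupled auditory and visual sources.
t = [i * 0.005 for i in range(400)]                      # 2 s at 200 Hz
amp = [1.0 + 0.5 * math.sin(2 * math.pi * 0.5 * ti) for ti in t]
x = [a * math.sin(2 * math.pi * 20 * ti) for a, ti in zip(amp, t)]
y = [a * math.cos(2 * math.pi * 20 * ti) for a, ti in zip(amp, t)]

r_env = pearson(envelope(x), envelope(y))   # amplitude metric: high
r_raw = pearson(x, y)                       # phase-sensitive raw correlation: near zero
print(round(r_env, 2), round(r_raw, 2))
```

The point of the toy example is the dissociation: a power-correlation metric detects the shared amplitude dynamics even when a phase-based view of the same pair shows nothing, which is consistent with the beta power-correlation findings in this record.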

  6. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Ensenyar amb casos audiovisuals en l'entorn virtual: metodologia i resultats

    OpenAIRE

    Triadó i Ivern, Xavier Ma.; Aparicio Chueca, Ma. del Pilar (María del Pilar); Jaría Chacón, Natalia; Gallardo-Gallardo, Eva; Elasri Ejjaberi, Amal

    2010-01-01

    This booklet aims to establish and disseminate the foundations of a methodology for launching learning experiences with audiovisual cases in the virtual campus environment. To that end, a methodological protocol has been defined for using audiovisual cases within the virtual campus environment in different courses.

  8. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    Science.gov (United States)

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual, and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate from this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized point-light walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues, quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented, and audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
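The MLE model invoked in this record prescribes reliability-weighted averaging: each cue is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone. A minimal sketch with hypothetical cue values (the numbers are illustrative, not from the study):

```python
def mle_combine(estimates, variances):
    """Minimum-variance (MLE) combination of independent noisy cues:
    each cue is weighted by its reliability, i.e. inverse variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * x for w, x in zip(weights, estimates)) / total
    combined_variance = 1.0 / total   # never exceeds the best single cue
    return combined, combined_variance

# Hypothetical step-timing estimates (ms) from a partner's movements:
# a reliable auditory footstep cue and a noisier visual motion cue.
audio_est, audio_var = 10.0, 4.0
visual_est, visual_var = 30.0, 16.0

est, var = mle_combine([audio_est, visual_est], [audio_var, visual_var])
print(est, var)   # 14.0 3.2 -- pulled toward the reliable auditory cue
```

The testable signatures of this model, which the experiments above probe, are exactly these two properties: the combined estimate sits closer to the more reliable cue, and its variance is smaller than that of either unimodal cue.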

  9. Impaired verbal memory in Parkinson disease: relationship to prefrontal dysfunction and somatosensory discrimination

    Directory of Open Access Journals (Sweden)

    Weniger Dorothea

    2009-12-01

    Full Text Available Abstract. Objective: To study the neurocognitive profile and its relationship to prefrontal dysfunction in non-demented Parkinson's disease (PD) with deficient haptic perception. Methods: Twelve right-handed patients with PD and 12 healthy control subjects underwent thorough neuropsychological testing, including the Rey complex figure, Rey auditory verbal and figural learning tests, figural and verbal fluency, and the Stroop test. Test scores reflecting significant differences between patients and healthy subjects were correlated with the individual expression coefficients of one principal component, obtained in a principal component analysis of an oxygen-15-labeled water PET study exploring somatosensory discrimination, that differentiated between the two groups and involved prefrontal cortices. Results: We found significantly decreased total scores for the verbal learning trials and verbal delayed free recall in PD patients compared with normal volunteers. Further analysis of these parameters using Spearman's rank correlation showed a significant negative correlation of deficient verbal recall with expression coefficients of the principal component whose image showed a subcortical-cortical network, including the right dorsolateral prefrontal cortex, in PD patients. Conclusion: PD patients with disrupted right dorsolateral prefrontal cortex function and associated diminished somatosensory discrimination are also impaired in verbal memory functions. A negative correlation between delayed verbal free recall and PET activation in a network including the prefrontal cortices suggests that verbal cues, and accordingly declarative memory processes, may be operative in PD during activities that demand sustained attention, such as somatosensory discrimination. Verbal cues may be compensatory in nature and help to nonspecifically enhance focused attention in the presence of a functionally disrupted prefrontal cortex.
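Spearman's rank correlation, the statistic used in the analysis above, is simply the Pearson correlation computed on ranks, which makes it robust to monotonic but nonlinear relationships. A self-contained sketch with invented recall scores and expression coefficients (not the study's data):

```python
def rank(values):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0        # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: verbal recall scores vs. PCA expression coefficients; an
# inverse ordering mirrors the reported negative correlation with recall.
recall = [55, 48, 60, 41, 38, 52, 45, 35]
coeff  = [0.2, 0.5, 0.1, 0.7, 0.9, 0.3, 0.6, 1.0]
print(round(spearman(recall, coeff), 2))   # -1.0: perfectly inverse ranking
```

Because only ranks enter the computation, the same result holds however the raw scores are scaled, which is why this statistic suits small clinical samples like the one in this record.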

  10. Attentional and non-attentional systems in the maintenance of verbal information in working memory: the executive and phonological loops.

    Science.gov (United States)

    Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory is the structure devoted to the maintenance of information at short term during concurrent processing activities. In this respect, the question regarding the nature of the mechanisms and systems fulfilling this maintenance function is of particular importance and has received various responses in the recent past. In the time-based resource-sharing (TBRS) model, we suggest that only two systems sustain the maintenance of information at the short term, counteracting the deleterious effect of temporal decay and interference. A non-attentional mechanism of verbal rehearsal, similar to the one described by Baddeley in the phonological loop model, uses language processes to reactivate phonological memory traces. Besides this domain-specific mechanism, an executive loop allows the reconstruction of memory traces through an attention-based mechanism of refreshing. The present paper reviews evidence of the involvement of these two independent systems in the maintenance of verbal memory items.

  11. Attentional and non-attentional systems in the maintenance of verbal information in working memory: the executive and phonological loops

    Science.gov (United States)

    Camos, Valérie; Barrouillet, Pierre

    2014-01-01

    Working memory is the structure devoted to the maintenance of information at short term during concurrent processing activities. In this respect, the question regarding the nature of the mechanisms and systems fulfilling this maintenance function is of particular importance and has received various responses in the recent past. In the time-based resource-sharing (TBRS) model, we suggest that only two systems sustain the maintenance of information at the short term, counteracting the deleterious effect of temporal decay and interference. A non-attentional mechanism of verbal rehearsal, similar to the one described by Baddeley in the phonological loop model, uses language processes to reactivate phonological memory traces. Besides this domain-specific mechanism, an executive loop allows the reconstruction of memory traces through an attention-based mechanism of refreshing. The present paper reviews evidence of the involvement of these two independent systems in the maintenance of verbal memory items. PMID:25426049

  12. Attentional and non-attentional systems in the maintenance of verbal information in working memory: the executive and phonological loops.

    Directory of Open Access Journals (Sweden)

    Valerie eCamos

    2014-11-01

Working memory is the structure devoted to the maintenance of information at short term during concurrent processing activities. In this respect, the question regarding the nature of the mechanisms and systems fulfilling this maintenance function is of particular importance and has received various responses in the recent past. In the time-based resource-sharing model, we suggest that only two systems sustain the maintenance of information at the short term, counteracting the deleterious effect of temporal decay and interference. A non-attentional mechanism of verbal rehearsal, similar to the one described by Baddeley in the phonological loop model, uses language processes to reactivate phonological memory traces. Besides this domain-specific mechanism, an executive loop allows the reconstruction of memory traces through an attention-based mechanism of refreshing. The present paper reviews evidence of the involvement of these two independent systems in the maintenance of verbal memory items.

  13. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Science.gov (United States)

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
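The isolation-point measure defined above ("the shortest time required for correct identification of a speech stimulus") can be made concrete with a small helper. This is an illustrative sketch, not code from the study; in particular, the convention that the IP is the shortest gate duration from which identification is correct and *stays* correct at every longer gate is a common gating-paradigm rule assumed here, and the function name is hypothetical.

```python
def isolation_point(gate_durations, correct):
    """Isolation point (IP): shortest gate duration from which identification
    is correct and remains correct at all longer gates; None if never reached.

    gate_durations -- presented gate lengths (e.g. in ms) for one stimulus
    correct        -- whether identification was correct at each gate
    """
    ip = None
    for duration, ok in sorted(zip(gate_durations, correct)):
        if ok:
            if ip is None:
                ip = duration  # first correct gate of the current correct run
        else:
            ip = None  # an error at a longer gate resets the candidate IP
    return ip
```

For example, a listener who is wrong at 100 ms, right at 200 ms, wrong again at 300 ms, and right from 400 ms onward gets an IP of 400 ms, since identification only becomes stable there.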

  14. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    Science.gov (United States)

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  15. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    Science.gov (United States)

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video-recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of an utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communication systems, and is inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between the verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the importance of the role of gestures in maintaining interpersonal communication.

16. Business plan for an audiovisual production company: La Central Audiovisual y Publicidad

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

This document presents the plan for the creation of the company La Central Publicidad y Audiovisual, a firm dedicated to the pre-production, production and post-production of audiovisual material. The company will be located in the city of Cali and its target market comprises the city's different types of companies, including small, medium-sized and large enterprises.

  17. Verbal learning and memory in adolescent cannabis users, alcohol users and non-users.

    Science.gov (United States)

    Solowij, Nadia; Jones, Katy A; Rozman, Megan E; Davis, Sasha M; Ciarrochi, Joseph; Heaven, Patrick C L; Lubman, Dan I; Yücel, Murat

    2011-07-01

Long-term heavy cannabis use can result in memory impairment, and adolescent users may be especially vulnerable to the adverse neurocognitive effects of cannabis. In a cross-sectional and prospective neuropsychological study of 181 adolescents aged 16-20 (mean 18.3 years), we compared performance indices from one of the most widely used measures of learning and memory--the Rey Auditory Verbal Learning Test--between cannabis users (n=52; mean 2.4 years of use, 14 days/month, median abstinence 20.3 h), alcohol users (n=67) and non-user controls (n=62) matched for age, education and premorbid intellectual ability (assessed prospectively), and alcohol consumption for cannabis and alcohol users. Cannabis users performed significantly worse than alcohol users and non-users on all performance indices. They recalled significantly fewer words overall, and poorer memory performance remained after controlling for extent of exposure to cannabis. Despite relatively brief exposure, adolescent cannabis users relative to their age-matched counterparts demonstrated memory deficits similar to those reported in adult long-term heavy users. The results indicate that cannabis adversely affects the developing brain and reinforce concerns regarding the impact of early exposure.

  18. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding … Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst …
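This is not the poster's implementation, but a minimal sketch of the kind of rectangular SOM the abstract describes (all function names, grid sizes, and learning parameters are illustrative assumptions): feature vectors are mapped onto a 2-D grid in a topology-preserving way, after which the similarity of two audiovisual percepts can be read off as the grid distance between their best-matching units.

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=6, n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular self-organizing map on the rows of `data`."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used by the neighborhood function below.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weight vector is closest to the sample.
        dists = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dists), dists.shape)
        # Learning rate and neighborhood radius decay over the training run.
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighborhood on the grid: nearby nodes move toward the sample,
        # which is what makes the final map topology-preserving.
        grid_d2 = (ys - by) ** 2 + (xs - bx) ** 2
        h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
        weights += lr * h[:, :, None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    """Grid position of the node closest to feature vector `x`."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

Under this sketch, two percepts whose best-matching units are neighbors on the grid would count as similar, mirroring the abstract's use of neighboring units for contextually related audio-visual percepts.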

  19. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  20. Effects of Classroom Bilingualism on Task Shifting, Verbal Memory, and Word Learning in Children

    Science.gov (United States)

    Kaushanskaya, Margarita; Gross, Megan; Buac, Milijana

    2014-01-01

    We examined the effects of classroom bilingual experience in children on an array of cognitive skills. Monolingual English-speaking children were compared with children who spoke English as the native language and who had been exposed to Spanish in the context of dual-immersion schooling for an average of two years. The groups were compared on a measure of non-linguistic task-shifting; measures of verbal short-term and working memory; and measures of word-learning. The two groups of children did not differ on measures of non-linguistic task-shifting and verbal short-term memory. However, the classroom-exposure bilingual group outperformed the monolingual group on the measure of verbal working memory and a measure of word-learning. Together, these findings indicate that while exposure to a second language in a classroom setting may not be sufficient to engender changes in cognitive control, it can facilitate verbal memory and verbal learning. PMID:24576079

  1. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  2. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

The authors hypothesized that an audiovisual slide presentation providing treatment information about the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ² tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group comprised 20 men and 5 women; the written informed group comprised 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and about potential trismus, and had lower self-reported anxiety scores than the control group 1 week after surgery. In conclusion, an audiovisual slide presentation could improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar.

  3. Academic e-learning experience in the enhancement of open access audiovisual and media education

    OpenAIRE

    Pacholak, Anna; Sidor, Dorota

    2015-01-01

    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance the open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM). It is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal of a comprehensive and open programme for the audiovisual (me...

  4. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with and without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that responses to all stimuli were significantly delayed for PD compared to NC. The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that this abnormal audiovisual integration might be a potential early manifestation of PD.
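The race-model analysis referred to above is commonly Miller's race-model inequality: the cumulative distribution of audiovisual response times is compared with the bound given by the sum of the two unisensory distributions, and exceeding that bound indicates integration beyond mere statistical facilitation. A minimal sketch under that assumption (the function names and time grid are illustrative, and this is not the study's analysis code):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical CDF of response times, evaluated on a grid of time points."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_audio, rt_visual, rt_av, t_grid):
    """Amount by which the audiovisual RT CDF exceeds the race-model bound
    min(G_A(t) + G_V(t), 1) at each time point (Miller's inequality).

    Positive values indicate responses faster than any race of independent
    unisensory processes could produce, i.e. evidence of integration."""
    bound = np.minimum(ecdf(rt_audio, t_grid) + ecdf(rt_visual, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound
```

In these terms, "absent audiovisual integration" corresponds to the violation curve never rising above zero at the fast end of the RT distribution.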

  5. Cortical Auditory Disorders: A Case of Non-Verbal Disturbances Assessed with Event-Related Brain Potentials

    Directory of Open Access Journals (Sweden)

    Sönke Johannes

    1998-01-01

In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  6. Cortical auditory disorders: a case of non-verbal disturbances assessed with event-related brain potentials.

    Science.gov (United States)

    Johannes, Sönke; Jöbges, Michael E.; Dengler, Reinhard; Münte, Thomas F.

    1998-01-01

In the auditory modality, there has been a considerable debate about some aspects of cortical disorders, especially about auditory forms of agnosia. Agnosia refers to an impaired comprehension of sensory information in the absence of deficits in primary sensory processes. In the non-verbal domain, sound agnosia and amusia have been reported but are frequently accompanied by language deficits, whereas pure deficits are rare. Absolute pitch and musicians' musical abilities have been associated with left hemispheric functions. We report the case of a right-handed sound engineer with absolute pitch who developed sound agnosia and amusia in the absence of verbal deficits after a right perisylvian stroke. His disabilities were assessed with the Seashore Test of Musical Functions, the tests of Wertheim and Botez (Wertheim and Botez, Brain 84, 1961, 19-30) and by event-related potentials (ERP) recorded in a modified 'oddball paradigm'. Auditory ERP revealed a dissociation between the amplitudes of the P3a and P3b subcomponents, with the P3b being reduced in amplitude while the P3a was undisturbed. This is interpreted as reflecting disturbances in target detection processes as indexed by the P3b. The findings, which contradict some aspects of current knowledge about left/right hemispheric specialization in musical processing, are discussed and related to the literature concerning cortical auditory disorders.

  7. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  8. "You can also save a life!": children's drawings as a non-verbal assessment of the impact of cardiopulmonary resuscitation training.

    Science.gov (United States)

    Petriş, Antoniu Octavian; Tatu-Chiţoiu, Gabriel; Cimpoeşu, Diana; Ionescu, Daniela Florentina; Pop, Călin; Oprea, Nadia; Ţînţ, Diana

    2017-04-01

Drawings made by children during cardiopulmonary resuscitation (CPR) training in the special education week called "School otherwise" can be used as a non-verbal means of expression and communication to assess the impact of such training. We analyzed the questionnaires and drawings completed by 327 schoolchildren at different stages of education. After a brief overview of the basic life support (BLS) steps and after watching a video presenting the dynamic performance of the BLS sequence, subjects were asked to complete a questionnaire and make a drawing expressing the main CPR messages. Questionnaires were completed fully in 97.6 % of cases and drawings were made in 90.2 %. Half of the subjects had already witnessed some kind of medical emergency, and 96.94 % knew the correct "112" emergency phone number. The drawings were mostly single images (83.81 %), less often cartoon strips (16.18 %). The main themes of the slogans were "Save a life!", "Help!", "Call 112!", and "Do not be indifferent/insensitive/apathetic!". Through the interpretation of drawings, CPR trainers can use art as a way to build a better relationship with schoolchildren, to connect with their thoughts and feelings, and to obtain the highest-quality education.

  9. Verbal communication skills in typical language development: a case series.

    Science.gov (United States)

    Abe, Camila Mayumi; Bretanha, Andreza Carolina; Bozza, Amanda; Ferraro, Gyovanna Junya Klinke; Lopes-Herrera, Simone Aparecida

    2013-01-01

    The aim of the current study was to investigate verbal communication skills in children with typical language development and ages between 6 and 8 years. Participants were 10 children of both genders in this age range without language alterations. A 30-minute video of each child's interaction with an adult (father and/or mother) was recorded, fully transcribed, and analyzed by two trained researchers in order to determine reliability. The recordings were analyzed according to a protocol that categorizes verbal communicative abilities, including dialogic, regulatory, narrative-discursive, and non-interactive skills. The frequency of use of each category of verbal communicative ability was analyzed (in percentage) for each subject. All subjects used more dialogical and regulatory skills, followed by narrative-discursive and non-interactive skills. This suggests that children in this age range are committed to continue dialog, which shows that children with typical language development have more dialogic interactions during spontaneous interactions with a familiar adult.

  10. Verbal fluency in idiopathic Parkinson's disease

    International Nuclear Information System (INIS)

    Thut, G.; Antonini, A.; Roelcke, U.; Missimer, J.; Maguire, R.P.; Leenders, K.L.; Regard, M.

    1997-01-01

    In the present study, the relationship between resting metabolism and verbal fluency, a correlate of frontal lobe cognition, was examined in 33 PD patients. We aimed to determine brain structures involved in frontal lobe cognitive impairment with special emphasis on differences between demented and non-demented PD patients. (author) 3 figs., 2 refs

  11. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  12. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    Science.gov (United States)

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  13. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously and the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  14. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

Purpose: The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide …

  15. Non-equilibrium dynamics from RPMD and CMD.

    Science.gov (United States)

    Welsch, Ralph; Song, Kai; Shi, Qiang; Althorpe, Stuart C; Miller, Thomas F

    2016-11-28

We investigate the calculation of approximate non-equilibrium quantum time correlation functions (TCFs) using two popular path-integral-based molecular dynamics methods, ring-polymer molecular dynamics (RPMD) and centroid molecular dynamics (CMD). It is shown that for the cases of a sudden vertical excitation and an initial momentum impulse, both RPMD and CMD yield non-equilibrium TCFs for linear operators that are exact for high temperatures, in the t = 0 limit, and for harmonic potentials; the subset of these conditions that are preserved for non-equilibrium TCFs of non-linear operators is also discussed. Furthermore, it is shown that for these non-equilibrium initial conditions, both methods retain the connection to Matsubara dynamics that has previously been established for equilibrium initial conditions. Comparison of non-equilibrium TCFs from RPMD and CMD to Matsubara dynamics at short times reveals the orders in time to which the methods agree. Specifically, for the position-autocorrelation function associated with sudden vertical excitation, RPMD and CMD agree with Matsubara dynamics up to O(t^4) and O(t^1), respectively; for the position-autocorrelation function associated with an initial momentum impulse, RPMD and CMD agree with Matsubara dynamics up to O(t^5) and O(t^2), respectively. Numerical tests using model potentials for a wide range of non-equilibrium initial conditions show that RPMD and CMD yield non-equilibrium TCFs with an accuracy that is comparable to that for equilibrium TCFs. RPMD is also used to investigate excited-state proton transfer in a system-bath model, and it is compared to numerically exact calculations performed using a recently developed version of the Liouville space hierarchical equation of motion approach; again, similar accuracy is observed for non-equilibrium and equilibrium initial conditions.
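As a sketch of the quantity at issue (the notation here is assumed for illustration, not taken from the paper): for a sudden vertical excitation the system is equilibrated on one potential but propagated on another, so a non-equilibrium TCF of operators $\hat{A}$ and $\hat{B}$ can be written as

```latex
C_{AB}(t) \;=\; \frac{1}{Z_0}\,
  \operatorname{Tr}\!\left[\, e^{-\beta \hat{H}_0}\, \hat{A}\,
  e^{+i\hat{H}_1 t/\hbar}\, \hat{B}\, e^{-i\hat{H}_1 t/\hbar} \right],
\qquad
Z_0 = \operatorname{Tr}\, e^{-\beta \hat{H}_0},
```

where $\hat{H}_0$ is the preparation (e.g. ground-state) Hamiltonian and $\hat{H}_1$ the propagation (e.g. excited-state) Hamiltonian; at $t = 0$, or when $\hat{H}_1 = \hat{H}_0$, this reduces to the familiar equilibrium TCF, which is consistent with the exactness conditions quoted in the abstract. RPMD and CMD approximate such traces with classical ring-polymer or centroid trajectories sampled from the path-integral distribution of $\hat{H}_0$ but evolved under $\hat{H}_1$.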

  16. Vicarious audiovisual learning in perfusion education.

    Science.gov (United States)

    Rath, Thomas E; Holt, David W

    2010-12-01

Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events, because of patient safety concerns. Although high-fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low-fidelity form of simulation instruction: vicarious audiovisual learning. Two low-fidelity modes of instruction were compared: description with text, and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%). These results suggest that vicarious audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we teach perfusion in the future, as simulation technology becomes more prevalent.

  17. Lousa Digital Interativa: avaliação da interação didática e proposta de aplicação de narrativa audiovisual / Interactive White Board – IWB: assessment in interaction didactic and audiovisual narrative proposal

    Directory of Open Access Journals (Sweden)

    Francisco García García

    2011-04-01

    Full Text Available The use of audiovisual material in the classroom does not guarantee effective learning, but for students it is an interesting and still attractive element. This work brings together two studies: the first shows the importance of didactic interaction with the interactive whiteboard (IWB, Portuguese LDI), and the second provides a list of audiovisual narrative elements that can be applied in the classroom. It proposes mastery of audiovisual narrative elements as a theoretical resource for teachers who want to produce audiovisual content for digital platforms such as the IWB. The text is divided into three parts: the first presents the theoretical concepts of the two studies, the second discusses the results of both, and the third proposes a pedagogical practice of didactic interaction using audiovisual narrative elements on the IWB.

  18. Non-ergodicity of Nosé–Hoover dynamics

    International Nuclear Information System (INIS)

    Legoll, Frédéric; Luskin, Mitchell; Moeckel, Richard

    2009-01-01

    The Nosé–Hoover dynamics is a deterministic method that is commonly used to sample the canonical Gibbs measure. This dynamics extends the physical Hamiltonian dynamics by the addition of a 'thermostat' variable, which is coupled nonlinearly with the physical variables. The accuracy of the method depends on the dynamics being ergodic. Numerical experiments have been published earlier that are consistent with non-ergodicity of the dynamics for some model problems. The authors recently proved the non-ergodicity of the Nosé–Hoover dynamics for the one-dimensional harmonic oscillator. In this paper, this result is extended to non-harmonic one-dimensional systems. We also show that, for some multidimensional systems, the averaged dynamics for the limit of infinite thermostat 'mass' has many invariants, thus giving theoretical support for either non-ergodicity or slow ergodization. Numerical experiments for a two-dimensional central force problem and the one-dimensional pendulum problem give evidence for non-ergodicity.
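The Nosé–Hoover equations for a one-dimensional oscillator are small enough to integrate directly. A minimal numpy sketch (the parameter values and the RK4 integrator are illustrative choices, not taken from the paper):

```python
import numpy as np

def nose_hoover_trajectory(q0=1.0, p0=0.0, xi0=0.0, m=1.0, kT=1.0, Q=1.0,
                           dt=1e-3, n_steps=20_000):
    """Integrate the Nose-Hoover equations for a 1D harmonic oscillator,
    V(q) = q^2/2, with a basic 4th-order Runge-Kutta step:
        dq/dt = p/m,  dp/dt = -q - xi*p,  dxi/dt = (p^2/m - kT)/Q."""
    def rhs(s):
        q, p, xi = s
        return np.array([p / m, -q - xi * p, (p**2 / m - kT) / Q])

    traj = np.empty((n_steps, 3))
    s = np.array([q0, p0, xi0], dtype=float)
    for i in range(n_steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

traj = nose_hoover_trajectory()
# For a typical initial condition the (q, p) orbit stays on a bounded,
# regular torus instead of sampling the full Gibbs measure -- the
# non-ergodicity discussed above.
print(traj[:, 0].min(), traj[:, 0].max())
```

Plotting `traj[:, 0]` against `traj[:, 1]` makes the invariant-torus structure visible, which is the qualitative signature of the non-ergodicity result.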

  19. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  20. Verbal Working Memory Is Related to the Acquisition of Cross-Linguistic Phonological Regularities.

    Science.gov (United States)

    Bosma, Evelyn; Heeringa, Wilbert; Hoekstra, Eric; Versloot, Arjen; Blom, Elma

    2017-01-01

    Closely related languages share cross-linguistic phonological regularities, such as Frisian -âld [ɔ:t] and Dutch -oud [ʌut], as in the cognate pairs kâld [kɔ:t] - koud [kʌut] 'cold' and wâld [wɔ:t] - woud [wʌut] 'forest'. Within Bybee's (1995, 2001, 2008, 2010) network model, these regularities are, just like grammatical rules within a language, generalizations that emerge from schemas of phonologically and semantically related words. Previous research has shown that verbal working memory is related to the acquisition of grammar, but not vocabulary. This suggests that verbal working memory supports the acquisition of linguistic regularities. In order to test this hypothesis we investigated whether verbal working memory is also related to the acquisition of cross-linguistic phonological regularities. For three consecutive years, 5- to 8-year-old Frisian-Dutch bilingual children (n = 120) were tested annually on verbal working memory and a Frisian receptive vocabulary task that comprised four cognate categories: (1) identical cognates, (2) non-identical cognates that either do or (3) do not exhibit a phonological regularity between Frisian and Dutch, and (4) non-cognates. The results showed that verbal working memory had a significantly stronger effect on cognate category (2) than on the other three cognate categories. This suggests that verbal working memory is related to the acquisition of cross-linguistic phonological regularities. More generally, it confirms the hypothesis that verbal working memory plays a role in the acquisition of linguistic regularities.

  1. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing the perception [...] addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications.

  2. Alfasecuencialización: la enseñanza del cine en la era del audiovisual Sequential literacy: the teaching of cinema in the age of audio-visual speech

    Directory of Open Access Journals (Sweden)

    José Antonio Palao Errando

    2007-10-01

    Full Text Available In the so-called «information society», film studies have been diluted into a pragmatic and technological treatment of audiovisual discourse, just as the enjoyment of cinema itself has been caught in the net of the DVD and hypertext. Cinema reacts to this through complex narrative structures that distance it from standard audiovisual discourse. The function of film studies, and of their university teaching, should be to reintroduce the subject rejected by informational knowledge through the interpretation of the filmic text.

  3. A economia do audiovisual no contexto contemporâneo das Cidades Criativas

    Directory of Open Access Journals (Sweden)

    Paulo Celso da Silva

    2012-12-01

    Full Text Available This paper addresses the audiovisual economy in cities with creative-city status. More than an adjective, it is within activities linked to communication (the audiovisual among them), culture, fashion, architecture, and local handicrafts that such cities have renewed their form of accumulation, reorganizing public and private spaces. The cities of Barcelona, Berlin, New York, Milan, and São Paulo are representative cases for the objective of analyzing cities in relation to the development of the audiovisual sector. Official data help support a more realistic understanding of each of them.

  4. Venezuela: Nueva Experiencia Audiovisual

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    Full Text Available In 1986 the Universidad Simón Bolívar (USB) created the Foundation for the Development of Audiovisual Art, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design, and photography of high artistic and technical quality, without neglecting the theoretical and academic aspects of these disciplines.

  5. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  6. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  8. Expert monitoring and verbal feedback as sources of performance pressure.

    Science.gov (United States)

    Buchanan, John J; Park, Inchon; Chen, Jing; Mehta, Ranjana K; McCulloch, Austin; Rhee, Joohyun; Wright, David L

    2018-05-01

    The influence of monitoring-pressure and verbal feedback on the performance of the intrinsically stable bimanual coordination patterns of in-phase and anti-phase was examined. The two bimanual patterns were produced under three conditions: 1) no-monitoring, 2) monitoring-pressure (viewed by experts), and 3) monitoring-pressure (viewed by experts) combined with verbal feedback emphasizing poor performance. The bimanual patterns were produced at self-paced movement frequencies. Anti-phase coordination was always less stable than in-phase coordination across all three conditions. When performed under conditions 2 and 3, both bimanual patterns were performed with less variability in relative phase across a wide range of self-paced movement frequencies compared to the no-monitoring condition. Thus, monitoring-pressure resulted in performance stabilization rather than degradation and the presence of verbal feedback had no impact on the influence of monitoring pressure. The current findings are inconsistent with the predictions of explicit monitoring theory; however, the findings are consistent with studies that have revealed increased stability for the system's intrinsic dynamics as a result of attentional focus and intentional control. The results are discussed within the contexts of the dynamic pattern theory of coordination, explicit monitoring theory, and action-focused theories as explanations for choking under pressure. Copyright © 2018. Published by Elsevier B.V.

  9. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  10. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients, 33 to 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between data sets (2) and (3) suggested superior postural stability [...]

  11. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a... section 337 as to certain audiovisual components and products containing the same with respect to claims 1...

  12. Discerning non-autonomous dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Clemson, Philip T.; Stefanovska, Aneta, E-mail: aneta@lancaster.ac.uk

    2014-09-30

    Structure and function go hand in hand. However, while a complex structure can be relatively safely broken down into the minutest parts, and technology is now delving into nanoscales, the function of complex systems requires a completely different approach. Here the complexity clearly arises from nonlinear interactions, which prevents us from obtaining a realistic description of a system by dissecting it into its structural component parts. At best, the result of such investigations does not substantially add to our understanding or at worst it can even be misleading. Not surprisingly, the dynamics of complex systems, facilitated by increasing computational efficiency, is now readily tackled in the case of measured time series. Moreover, time series can now be collected in practically every branch of science and in any structural scale—from protein dynamics in a living cell to data collected in astrophysics or even via social networks. In searching for deterministic patterns in such data we are limited by the fact that no complex system in the real world is autonomous. Hence, as an alternative to the stochastic approach that is predominantly applied to data from inherently non-autonomous complex systems, theory and methods specifically tailored to non-autonomous systems are needed. Indeed, in the last decade we have faced a huge advance in mathematical methods, including the introduction of pullback attractors, as well as time series methods that cope with the most important characteristic of non-autonomous systems—their time-dependent behaviour. Here we review current methods for the analysis of non-autonomous dynamics including those for extracting properties of interactions and the direction of couplings. We illustrate each method by applying it to three sets of systems typical for chaotic, stochastic and non-autonomous behaviour. For the chaotic class we select the Lorenz system, for the stochastic the noise-forced Duffing system and for the non

  14. Verbal learning changes in older adults across 18 months.

    Science.gov (United States)

    Zimprich, Daniel; Rast, Philippe

    2009-07-01

    The major aim of this study was to investigate individual changes in verbal learning across a period of 18 months. Individual differences in verbal learning have largely been neglected in recent years and, even more so, individual differences in change in verbal learning. The sample for this study comes from the Zurich Longitudinal Study on Cognitive Aging (ZULU; Zimprich et al., 2008a) and comprised 336 older adults in the age range of 65-80 years at the first measurement occasion. In order to address change in verbal learning we used a latent change model of structured latent growth curves to account for the non-linearity of the verbal learning data. The individual learning trajectories were captured by a hyperbolic function which yielded three psychologically distinct parameters: initial performance, learning rate, and asymptotic performance. We found that, on average, initial performance increased, but learning rate and asymptotic performance did not. Further, variances and covariances remained stable across both measurement occasions, indicating that the amount of individual differences in the three parameters remained stable, as did the relationships among them. Moreover, older adults differed reliably in their amount of change in initial performance and asymptotic performance. Finally, changes in asymptotic performance and learning rate were strongly negatively correlated. It thus appears as if change in verbal learning in old age is a constrained process: an increase in total learning capacity implies that it takes longer to learn. Together, these results point to the significance of individual differences in change of verbal learning in the elderly.
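The hyperbolic learning curve can be made concrete. A small numpy sketch using one plausible three-parameter form (initial performance, learning rate, asymptote); the study's exact parameterization is not given in the abstract, so this form and the parameter values are assumptions for illustration:

```python
import numpy as np

def hyperbolic(trial, initial, rate, asymptote):
    """Assumed hyperbolic learning curve: performance equals `initial` on
    trial 1 and rises toward `asymptote`; `rate` is the number of
    additional trials needed to close half of the remaining gap."""
    n = np.asarray(trial, dtype=float)
    return initial + (asymptote - initial) * (n - 1.0) / (n - 1.0 + rate)

trials = np.arange(1, 11)   # e.g. ten recall trials of a word-list task
curve = hyperbolic(trials, initial=4.0, rate=2.5, asymptote=14.0)
# The curve starts at `initial`, rises monotonically with diminishing
# gains, and stays below `asymptote` on any finite number of trials.
print(curve[0], curve[-1])
```

With this form, the three parameters map directly onto the abstract's constructs: a larger `asymptote` (total learning capacity) combined with a larger `rate` captures the reported trade-off that more capacity takes longer to learn.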

  15. Search in audiovisual broadcast archives : doctoral abstract

    NARCIS (Netherlands)

    Huurnink, B.

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage shot by overseas services for the evening news, or a documentary maker might require

  16. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    Science.gov (United States)

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations, or both faces and vocalizations had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  17. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective to improve users' social skills.

  18. Trace Dynamics and a non-commutative special relativity

    International Nuclear Information System (INIS)

    Lochan, Kinjalk; Singh, T.P.

    2011-01-01

    Trace Dynamics is a classical dynamical theory of non-commuting matrices in which cyclic permutation inside a trace is used to define the derivative with respect to an operator. We use the methods of Trace Dynamics to construct a non-commutative special relativity. We define a line-element using the Trace over space-time coordinates which are assumed to be operators. The line-element is shown to be invariant under standard Lorentz transformations, and is used to construct a non-commutative relativistic dynamics. The eventual motivation for constructing such a non-commutative relativity is to relate the statistical thermodynamics of this classical theory to quantum mechanics. -- Highlights: → Classical time is external to quantum mechanics. → This implies need for a formulation of quantum theory without classical time. → A starting point could be a non-commutative special relativity. → Such a relativity is developed here using the theory of Trace Dynamics. → A line-element is defined using the Trace over non-commuting space-time operators.

  19. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual editing language for the interactive documentary film," which defends the thesis that there are interactive features in the audio and video editing of film, the edit itself even acting as an agent of interactivity. The search for interactive audiovisual formats is present in international research, but mostly through a technological lens. The paper proposes possible formats for interactive audiovisual production in film, video, television, computers, and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  20. Verbal communication among Alzheimer's disease patients, their caregivers, and primary care physicians during primary care office visits.

    Science.gov (United States)

    Schmidt, Karen L; Lingler, Jennifer H; Schulz, Richard

    2009-11-01

    Primary care visits of patients with Alzheimer's disease (AD) often involve communication among patients, family caregivers, and primary care physicians (PCPs). The objective of this study was to understand the nature of each individual's verbal participation in these triadic interactions. To define the verbal communication dynamics of AD care triads, we compared verbal participation (percent of total visit speech) by each participant in patient/caregiver/PCP triads. Twenty-three triads were audio taped during a routine primary care visit. Rates of verbal participation were described and effects of patient cognitive status (MMSE score, verbal fluency) on verbal participation were assessed. PCP verbal participation was highest at 53% of total visit speech, followed by caregivers (31%) and patients (16%). Patient cognitive measures were related to patient and caregiver verbal participation, but not to PCP participation. Caregiver satisfaction with interpersonal treatment by PCP was positively related to caregiver's own verbal participation. Caregivers of AD patients and PCPs maintain active, coordinated verbal participation in primary care visits while patients participate less. Encouraging verbal participation by AD patients and their caregivers may increase the AD patient's active role and caregiver satisfaction with primary care visits.

  1. A comparison of processing load during non-verbal decision-making in two individuals with aphasia

    Directory of Open Access Journals (Sweden)

    Salima Suleman

    2015-05-01

    Full Text Available INTRODUCTION A growing body of evidence suggests people with aphasia (PWA) can have impairments to cognitive functions such as attention, working memory, and executive functions (1-5). Such cognitive impairments have been shown to negatively affect the decision-making (DM) abilities of adults with neurological damage (6,7). However, little is known about the DM abilities of PWA (8). Pupillometry is "the measurement of changes in pupil diameter" (9, p.1). Researchers have reported a positive relationship between processing load and phasic pupil size (i.e., as processing load increases, pupil size increases) (10). Thus pupillometry has the potential to be a useful tool for investigating processing load during DM in PWA. AIMS The primary aim of this study was to establish the feasibility of using pupillometry during a non-verbal DM task with PWA. The secondary aim was to explore non-verbal DM performance in PWA and determine the relationship between DM performance and processing load using pupillometry. METHOD DESIGN. A single-subject case-study design with two participants was used in this study. PARTICIPANTS. Two adult males with anomic aphasia participated in this study. Participants were matched for age and education. Both participants were independent, able to drive, and had legal autonomy. MEASURES. PERFORMANCE ON A DM TASK. We used a computerized risk-taking card game called the Iowa Gambling Task (IGT) as our non-verbal DM task (11). In the IGT, participants made 100 selections (via eye gaze) from four decks of cards presented on the computer screen with the goal of maximizing their overall hypothetical monetary gain. PROCESSING LOAD. The EyeLink 1000+ eye tracking system was used to collect pupil size measures while participants deliberated before each deck selection during the IGT. For this analysis, we calculated change in pupil size as a measure of processing load. RESULTS P1. P1 made increasingly advantageous decisions as the task progressed (Fig. 1).
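The "change in pupil size" measure is a standard baseline correction: mean pupil diameter in the deliberation window minus mean diameter in a pre-trial baseline. A sketch with made-up window boundaries and sampling rate (the study's actual analysis windows are not stated in the abstract):

```python
import numpy as np

def phasic_pupil_change(trace, baseline_ms=(0, 500), window_ms=(500, 3000),
                        hz=1000):
    """Baseline-corrected change in pupil diameter: the mean size in the
    deliberation window minus the mean size in the pre-trial baseline.
    Window boundaries and sampling rate are illustrative assumptions."""
    idx = lambda ms: int(ms * hz / 1000)
    baseline = np.mean(trace[idx(baseline_ms[0]):idx(baseline_ms[1])])
    window = np.mean(trace[idx(window_ms[0]):idx(window_ms[1])])
    return window - baseline

# Synthetic 3-second trace (mm): a step up in diameter during deliberation.
trace = np.concatenate([np.full(500, 3.0), np.full(2500, 3.4)])
change = phasic_pupil_change(trace)
print(change)  # positive change, read as higher processing load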

  2. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    Science.gov (United States)

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.
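The %SS measure itself is a simple ratio, which makes the attribution in the abstract easy to see: a minimal sketch, with made-up counts chosen only to mirror the reported ~18% relative increase (more stuttered syllables counted over an unchanged syllable total).

```python
# Minimal sketch of percentage syllables stuttered (%SS):
# stuttered syllables / total syllables, times 100. Counts are invented
# for illustration; they are not data from the study.

def percent_ss(stuttered, total):
    return 100.0 * stuttered / total

audio_only = percent_ss(50, 1000)    # 5.0 %SS scored from audio alone
audiovisual = percent_ss(59, 1000)   # same syllable total, more stutters counted
relative_increase = (audiovisual - audio_only) / audio_only  # ~0.18, i.e. 18%
```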

  3. Characterizing measles transmission in India: a dynamic modeling study using verbal autopsy data.

    Science.gov (United States)

    Verguet, Stéphane; Jones, Edward O; Johri, Mira; Morris, Shaun K; Suraweera, Wilson; Gauvreau, Cindy L; Jha, Prabhat; Jit, Mark

    2017-08-10

    Decreasing trends in measles mortality have been reported in recent years. However, such estimates of measles mortality have depended heavily on assumed regional measles case fatality risks (CFRs) and made little use of mortality data from low- and middle-income countries in general and India, the country with the highest measles burden globally, in particular. We constructed a dynamic model of measles transmission in India with parameters that were empirically inferred using spectral analysis from a time series of measles mortality extracted from the Million Death Study, an ongoing longitudinal study recording deaths across 2.4 million Indian households and attributing causes of death using verbal autopsy. The model was then used to estimate the measles CFR, the number of measles deaths, and the impact of vaccination in 2000-2015 among under-five children in India and in the states of Bihar and Uttar Pradesh (UP), two states with large populations and the highest numbers of measles deaths in India. We obtained the following estimated CFRs among under-five children for the year 2005: 0.63% (95% confidence interval (CI): 0.40-1.00%) for India as a whole, 0.62% (0.38-1.00%) for Bihar, and 1.19% (0.80-1.75%) for UP. During 2000-2015, we estimated that 607,000 (95% CI: 383,000-958,000) under-five deaths attributed to measles occurred in India as a whole. If no routine vaccination or supplemental immunization activities had occurred from 2000 to 2015, an additional 1.6 (1.0-2.6) million deaths for under-five children would have occurred across India. We developed a data- and model-driven estimation of the historical measles dynamics, CFR, and vaccination impact in India, extracting the periodicity of epidemics using spectral and coherence analysis, which allowed us to infer key parameters driving measles transmission dynamics and mortality.
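The periodicity-extraction step can be illustrated with a basic spectral analysis on a synthetic mortality series. This is only a sketch of the idea (the study's actual inference is far more elaborate): the dominant non-zero frequency of the detrended series gives the inter-epidemic period.

```python
# Illustrative sketch of recovering epidemic periodicity via an FFT peak.
# The time series is synthetic; function and variable names are assumptions.
import numpy as np

def dominant_period(series, dt=1.0):
    """Return the period (in units of dt) of the strongest spectral peak."""
    detrended = np.asarray(series, dtype=float) - np.mean(series)
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=dt)
    peak = np.argmax(spectrum[1:]) + 1   # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Synthetic monthly death counts with a 24-month epidemic cycle.
t = np.arange(240)
deaths = 100 + 40 * np.sin(2 * np.pi * t / 24)
period = dominant_period(deaths, dt=1.0)   # recovers ~24 months
```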

  4. Verbal Reports as Data.

    Science.gov (United States)

    Ericsson, K. Anders; Simon, Herbert A.

    1980-01-01

    Accounting for verbal reports requires explication of the mechanisms by which the reports are generated and influenced by experimental factors. We discuss different cognitive processes underlying verbalization and present a model of how subjects, when asked to think aloud, verbalize information from their short-term memory. (Author/GDC)

  5. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi eKafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies have reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing plays a potential role.

  6. Verbal Working Memory Is Related to the Acquisition of Cross-Linguistic Phonological Regularities

    Directory of Open Access Journals (Sweden)

    Evelyn Bosma

    2017-09-01

    Full Text Available Closely related languages share cross-linguistic phonological regularities, such as Frisian -âld [ɔ:t] and Dutch -oud [ɑut], as in the cognate pairs kâld [kɔ:t] – koud [kɑut] ‘cold’ and wâld [wɔ:t] – woud [wɑut] ‘forest’. Within Bybee’s (1995, 2001, 2008, 2010) network model, these regularities are, just like grammatical rules within a language, generalizations that emerge from schemas of phonologically and semantically related words. Previous research has shown that verbal working memory is related to the acquisition of grammar, but not vocabulary. This suggests that verbal working memory supports the acquisition of linguistic regularities. In order to test this hypothesis we investigated whether verbal working memory is also related to the acquisition of cross-linguistic phonological regularities. For three consecutive years, 5- to 8-year-old Frisian-Dutch bilingual children (n = 120) were tested annually on verbal working memory and a Frisian receptive vocabulary task that comprised four cognate categories: (1) identical cognates, (2) non-identical cognates that either do or (3) do not exhibit a phonological regularity between Frisian and Dutch, and (4) non-cognates. The results showed that verbal working memory had a significantly stronger effect on cognate category (2) than on the other three cognate categories. This suggests that verbal working memory is related to the acquisition of cross-linguistic phonological regularities. More generally, it confirms the hypothesis that verbal working memory plays a role in the acquisition of linguistic regularities.

  7. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

    Full Text Available The digital information society poses new challenges to researchers. As audiovisual communication has consolidated as a discipline, cultural studies offer an advantageous analytical perspective for approaching the new creative and consumption practices of audiovisual media. This article argues for the study of the audiovisual cultural products this digital society produces, since they bear witness to the social changes taking place within it. Specifically, it proposes an approach to social advertising and to objects of collective creation on the Internet as a means of understanding the circumstances of our society.

  8. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  9. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  10. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  11. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Pollock, Sean; Tse, Regina; Martin, Darren

    2015-01-01

    This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed.

  12. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012), Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012), Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira

    2008-06-01

    Full Text Available This work shows the development of audiovisual webjournalism on the Brazilian Internet. This paper, based on the analysis of UOL News on UOL TV – pioneer format on commercial web television - and of UERJ Online TV – first on-line university television in Brazil - investigates the changes in the gathering, production and dissemination processes of audiovisual news when it starts to be transmitted through the web. Reflections of authors such as Herreros (2003, Manovich (2001 and Gosciola (2003 are used to discuss the construction of audiovisual narrative on the web. To comprehend the current changes in today’s webjournalism, we draw on the concepts developed by Fidler (1997; Bolter and Grusin (1998; Machado (2000; Mattos (2002 and Palacios (2003. We may conclude that the organization of narrative elements in cyberspace makes for the efficiency of journalistic messages, while establishing the basis of a particular language for audiovisual news on the Internet.

  14. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

    Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...

  15. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols, speech and facial expression among them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough set-based feature selection is a good method for dimension reduction. So 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected when speech and video are fused together, owing to their synchronization. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition.

  16. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  17. The verbal-visual discourse in Brazilian Sign Language – Libras

    Directory of Open Access Journals (Sweden)

    Tanya Felipe

    2013-11-01

    Full Text Available This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language that employs a visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers are described, taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse, and arguing for research on this discourse also in the linguistic analysis of oral-auditory modality languages, with Translinguistics as an area of knowledge that analyzes discourse focusing upon the verbal-visual markers used by subjects in their utterance acts.

  18. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.

  19. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, facilitating the neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG); the strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement fMRI signal amplitude for evaluating multimodal integration.
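A within-class "reproducibility index" of the kind described can be sketched as the mean pairwise correlation between single-trial activation patterns of one category. The abstract does not give the exact definition, so treat this as one plausible instantiation on synthetic patterns.

```python
# Hedged sketch: reproducibility index as mean pairwise Pearson correlation
# across trial patterns (rows) of one category. Synthetic data; the paper's
# exact formula may differ.
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise correlation across rows (trials x voxels)."""
    patterns = np.asarray(patterns, dtype=float)
    corr = np.corrcoef(patterns)
    n = len(patterns)
    return float(np.mean(corr[np.triu_indices(n, k=1)]))

rng = np.random.default_rng(0)
template = rng.standard_normal(50)
# Congruent-like condition: trials are noisy copies of a shared pattern.
coherent = np.array([template + 0.3 * rng.standard_normal(50) for _ in range(8)])
# Control condition: unrelated patterns.
incoherent = rng.standard_normal((8, 50))
assert reproducibility_index(coherent) > reproducibility_index(incoherent)
```

A condition that evokes a more consistent pattern across trials, as congruent audiovisual stimuli reportedly do, yields a higher index.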

  20. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  1. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  2. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  3. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have copyright...

  4. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    Science.gov (United States)

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  6. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    NARCIS (Netherlands)

    Talsma, D.; Doty, Tracy J.; Woldorff, Marty G.

    2007-01-01

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and

  7. Non-Markovian nuclear dynamics

    International Nuclear Information System (INIS)

    Kolomietz, V.M.

    2011-01-01

    A proof of the equations of motion for the nuclear shape variables, which establish a direct connection between memory effects and the dynamic distortion of the Fermi surface, is suggested. The equations of motion for the nuclear Fermi liquid drop are derived from the collisional kinetic equation. In general, the corresponding equations are non-Markovian: the memory effects appear due to the Fermi surface distortions and depend on the relaxation time. The main purpose of the present work is to apply the non-Markovian dynamics to the description of the nuclear giant multipole resonances (GMR) and the large amplitude motion. We also take into consideration the random forces and concentrate on the formation of both the conservative and the friction forces, to make the memory effect on the nuclear dynamics clearer. In this respect, the given approach represents an extension of the traditional liquid drop model (LDM) to the case of the nuclear Fermi liquid drop. In practical application, we pay close attention to the description of the descent of the nucleus from the fission barrier to the scission point.

  8. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available Abstract We propose a novel approach to video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that characterize its content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments are conducted on a set of 242 video documents, and the results show the efficiency of our proposals.
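The TRM idea can be sketched as counting, for every pair of segments from two event tracks, which temporal relation holds between them. The paper's actual relation inventory is not given in this record, so the reduced Allen-style set below (before/overlaps/during/after) and the track names are assumptions.

```python
# Hedged sketch of a Temporal Relation Matrix over two event tracks,
# here represented as a dict of relation counts. Segment data is invented.

RELATIONS = ("before", "overlaps", "during", "after")

def relation(a, b):
    """Classify the temporal relation of interval a w.r.t. interval b."""
    if a[1] <= b[0]:
        return "before"
    if a[0] >= b[1]:
        return "after"
    if b[0] <= a[0] and a[1] <= b[1]:
        return "during"
    return "overlaps"

def temporal_relation_matrix(track_a, track_b):
    counts = {r: 0 for r in RELATIONS}
    for a in track_a:
        for b in track_b:
            counts[relation(a, b)] += 1
    return counts

# e.g. speech segments vs applause segments, in seconds
speech = [(0, 5), (10, 15)]
applause = [(4, 8), (20, 25)]
trm = temporal_relation_matrix(speech, applause)
```

Comparing such count profiles between documents gives one simple route to the similarity measure the paper builds on.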

  9. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs monetary conditioning) and multisensory paradigm (2AFC visual detection vs redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (= conditioned stimulus, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.
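The two quantities named above have standard definitions that can be sketched briefly: multisensory response facilitation as (A + V)/2 – AV on mean reaction times, and the race-model (Miller) inequality, violated whenever the audiovisual response-time distribution exceeds the sum of the unisensory ones. The reaction times below are synthetic; the study's analysis details are not reproduced here.

```python
# Sketch of redundancy gain and race-model violation under the usual
# definitions. RT values are invented for illustration.

def facilitation(mean_rt_a, mean_rt_v, mean_rt_av):
    """Redundancy gain: mean unisensory RT minus audiovisual RT (ms)."""
    return (mean_rt_a + mean_rt_v) / 2.0 - mean_rt_av

def cdf(rts, t):
    """Empirical cumulative response probability at latency t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rts_a, rts_v, rts_av, ts):
    """Max positive exceedance of P_AV(t) over P_A(t) + P_V(t)."""
    return max(cdf(rts_av, t) - min(1.0, cdf(rts_a, t) + cdf(rts_v, t))
               for t in ts)

rts_a = [300, 320, 340, 360]
rts_v = [310, 330, 350, 370]
rts_av = [250, 260, 270, 280]        # faster than the race model allows
gain = facilitation(330, 340, 265)   # 70 ms redundancy gain
violation = race_model_violation(rts_a, rts_v, rts_av, ts=range(240, 400, 10))
```

The finding reported above is that conditioning changed neither of these quantities, only overall accuracy and speed.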

  10. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  11. Special Aspects of the Media Category Realization in Chinese Microblogs

    Directory of Open Access Journals (Sweden)

    Li Feixiang

    2016-12-01

    The article investigates the phenomenon of microblogging, an essential part of Chinese media culture. The microblog is considered both an innovative genre of Internet journalism and a kind of media text, since in many cases it is also an act of mass communication and a multimedia product that combines different ways of transmitting information: verbal, visual, and audiovisual. Specific examples demonstrate how media categories are realized in Chinese microblog texts, and the combination of their diverse components (the verbal proper, the static visual, and the dynamic visual) is analyzed. A classification of the multimedia used in microblogs is given, distinguishing the following types: monocomponent polycoded microblogs, two-component polycoded microblogs with a static iconic element, two-component polycoded microblogs with a dynamic visual component, and multi-component monocoded and polycoded microblogs, including those with a hyperlink. The roles of hyperlinks and of the verbal component proper are also considered. A hyperlink expands the boundaries of the text and helps overcome technical limits on the amount of information that can be transmitted. The verbal text proper conveys the author's emotions and also acts as a metatext in relation to the static or dynamic images.

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 components. An audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
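
    The sub-additivity criterion in this abstract rests on the additive model: if audition and vision were processed independently, the audiovisual ERP should equal the sum of the unimodal responses, so any reliable deviation AV − (A + V) at a given latency (e.g., P50, N1) marks a multisensory interaction. A toy illustration on a coarse 0-300 ms grid (the amplitudes are invented values in microvolts, not the study's data):

```python
import numpy as np

# Invented ERP amplitudes at 7 time points spanning 0-300 ms.
erp_a = np.array([0, -1, -4, -2, 1, 2, 1], float)   # auditory-only
erp_v = np.array([0, 0, -1, -1, 0, 1, 0], float)    # visual-only
erp_av = np.array([0, -1, -3, -2, 1, 3, 1], float)  # audiovisual

# Additive-model test: nonzero entries indicate multisensory interaction;
# a positive value at the (negative-going) N1 is a sub-additive reduction.
interaction = erp_av - (erp_a + erp_v)
print(interaction)  # → [0. 0. 2. 1. 0. 0. 0.]
```

    In practice the A-alone condition also contains anticipatory and motor activity common to all conditions, which is why the abstract's contrast is phrased as AV − V versus A rather than a raw sum.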

  13. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    Science.gov (United States)

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was then shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in performance in both groups over the three MCQs. The intervention group significantly outperformed their control group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: this study appraises an intervention in medical education, Kirkpatrick Level 2b (modification of knowledge). Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Nonlinear dynamics non-integrable systems and chaotic dynamics

    CERN Document Server

    Borisov, Alexander

    2017-01-01

    This monograph reviews advanced topics in nonlinear dynamics. Starting with the theory of integrable systems, including methods to find and verify integrability, the remainder of the book is devoted to non-integrable systems, with an emphasis on dynamical chaos. Topics include structural stability, mechanisms of the emergence of irreversible behaviour in deterministic systems, and chaotisation occurring in dissipative systems.

  15. On-line repository of audiovisual material feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    This paper presents a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  16. Análise da comunicação verbal e não-verbal de crianças com deficiencia visual durante interação com a mãe Analysis of the verbal and non-verbal communication of children with visual impairment during interaction with their mothers

    Directory of Open Access Journals (Sweden)

    Jáima Pinheiro de Oliveira

    2005-12-01

    This study aimed to analyze the verbal and non-verbal communication of blind children, children with low vision, and children with normal vision, and to examine the particularities of maternal communication during interaction in free and planned contexts. Six children participated in the study: two blind, two with low vision, and two with normal vision, selected according to specific criteria. Two recordings of each child were carried out in the family environment: a free situation and a planned situation. The analysis consisted of a functional characterization of the children's verbal and non-verbal communication with their mothers. The data showed that verbal communicative resources predominated in both free and planned situations. Overall, the results indicate that, despite particularities in its use, the language of children with visual impairment shows no deficit relative to that of children with normal vision. Moreover, during interaction the mothers of the blind and low-vision children used strategies that favored the children's performance, such as describing the environment and indicating and locating objects.

  17. Net neutrality and audiovisual services

    OpenAIRE

    van Eijk, N.; Nikoltchev, S.

    2011-01-01

    Net neutrality is high on the European agenda. New regulations for the communication sector provide a legal framework for net neutrality and need to be implemented on both a European and a national level. The key element is not just about blocking or slowing down traffic across communication networks: the control over the distribution of audiovisual services constitutes a vital part of the problem. In this contribution, the phenomenon of net neutrality is described first. Next, the European a...

  18. Audiovisual integration of speech in a patient with Broca's Aphasia

    Science.gov (United States)

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  19. Severity and Co-occurrence of Oral and Verbal Apraxias in Left Brain Damaged Adults

    Directory of Open Access Journals (Sweden)

    Fariba Yadegari

    2012-04-01

    Objective: Oral and verbal apraxias represent motor programming deficits of nonverbal and verbal movements, respectively. Studying their properties may shed light on speech motor control processes. This study focused on identifying cases with oral or verbal apraxia, their co-occurrence, and their severities. Materials & Methods: In this non-experimental study, 55 adults with left brain lesions (22 women and 33 men, aged 23 to 84 years) were examined and videotaped while performing oral apraxia and verbal apraxia tasks. Three speech and language pathologists independently scored apraxia severities. Data were analyzed with the independent t test and Pearson, Phi, and contingency coefficients using SPSS 12. Results: Mean oral and verbal apraxia scores differed significantly between patients with and without the respective apraxia (P<0.001). Forty-two patients had simultaneous oral and verbal apraxias, with a significant correlation between their oral and verbal apraxia scores (r=0.75, P<0.001). Six patients showed no oral or verbal apraxia, and 7 had just one type of apraxia. The co-occurrence of the two disorders (Phi=0.59) and the association between oral and verbal severities (C=0.68) were relatively high (P<0.001). Conclusion: The present research revealed co-occurrence of oral and verbal apraxias to a great extent. It appears that speech motor control is influenced by a more general verbal and nonverbal motor control.

  20. School effects on non-verbal intelligence and nutritional status in rural Zambia.

    Science.gov (United States)

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E; Grigorenko, Elena L

    2016-02-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and the teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index, BMI) of 4204 3rd- to 7th-graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and 75.9% of the between-school differences in NI and BMI, respectively. Grade-specific HLM yielded higher between-school variation of NI (41%) and BMI (14.6%) for students in grade 3 compared with grades 4 to 7. School factors showed a differential pattern of associations with NI and BMI across grades. The distance to a health post and teachers' teaching experience were the strongest predictors of NI (particularly in grades 4, 6, and 7); the presence of a preschool was linked to lower BMI in grades 4 to 6. Implications for improving access to and quality of education in rural Zambia are discussed.
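
    The "23.5% of NI variance conditioned by differences between schools" is an intraclass correlation from the HLM. The same quantity can be illustrated with the simpler one-way ANOVA estimator ICC(1) on balanced toy data; the real study fitted a hierarchical model to unbalanced schools, and the scores below are invented:

```python
import numpy as np

def icc1(groups):
    """ICC(1) for k balanced groups of size n: (MSB - MSW) / (MSB + (n-1)*MSW)."""
    k = len(groups)
    n = len(groups[0])
    grand = np.mean([x for g in groups for x in g])
    # Between-group and within-group mean squares.
    msb = n * sum((np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - np.mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three hypothetical schools, four pupils each (nonverbal-IQ-like scores).
schools = [[100, 102, 98, 100], [110, 112, 108, 110], [90, 92, 88, 90]]
print(round(icc1(schools), 3))  # → 0.974
```

    Here almost all variance lies between schools; the study's 23.5% for NI corresponds to a much smaller, but still substantial, school-level share.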