WorldWideScience

Sample records for vicarious audiovisual learning

  1. Vicarious audiovisual learning in perfusion education.

    Science.gov (United States)

    Rath, Thomas E; Holt, David W

    2010-12-01

    Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high-fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low-fidelity form of simulation instruction: vicarious audiovisual learning. Two low-fidelity modes of instruction, description with text and a vicarious, first-person audiovisual production depicting the same content, were compared. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%) (p …). … audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we teach perfusion in the future, as simulation technology becomes more prevalent.
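The group comparison reported above (88.89% vs. 74.74%, n = 18 vs. n = 19) is the kind of two-sample mean difference typically evaluated with an independent-samples t test. As a hedged illustration only, using invented score lists rather than the study's data, Welch's t statistic can be computed with the standard library:

```python
# Sketch of an independent-samples comparison of mean test scores.
# The two score lists are hypothetical, invented for illustration;
# they are NOT the data from the Rath & Holt study.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se = (va / na + vb / nb) ** 0.5     # standard error of the mean difference
    return (mean(a) - mean(b)) / se

video = [90, 80, 100, 90, 80, 90, 100, 90, 80, 90]  # hypothetical % scores
text  = [70, 80, 70, 60, 80, 70, 80, 70, 60, 80]

t = welch_t(video, text)
print(round(t, 2))  # compare |t| against the critical value for the Welch df
```

The statistic is then compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain the p value; a library routine such as SciPy's `ttest_ind(..., equal_var=False)` wraps both steps.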

  2. A comparison of positive vicarious learning and verbal information for reducing vicariously learned fear.

    Science.gov (United States)

    Reynolds, Gemma; Wasely, David; Dunne, Güler; Askew, Chris

    2017-10-19

    Research with children has demonstrated that both positive vicarious learning (modelling) and positive verbal information can reduce children's acquired fear responses for a particular stimulus. However, this fear reduction appears to be more effective when the intervention pathway matches the initial fear learning pathway. That is, positive verbal information is a more effective intervention than positive modelling when fear is originally acquired via negative verbal information. Research has yet to explore whether fear reduction pathways are also important for fears acquired via vicarious learning. To test this, an experiment compared the effectiveness of positive verbal information and positive vicarious learning interventions for reducing vicariously acquired fears in children (7-9 years). Both vicarious and informational fear reduction interventions were found to be equally effective at reducing vicariously acquired fears, suggesting that acquisition and intervention pathways do not need to match for successful fear reduction. This has significant implications for parents and those working with children because it suggests that providing children with positive information or positive vicarious learning immediately after a negative modelling event may prevent more serious fears developing.

  3. Still to Learn from Vicarious Learning

    Science.gov (United States)

    Mayes, J. T.

    2015-01-01

    The term "vicarious learning" was introduced in the 1960s by Bandura, who demonstrated how learning can occur through observing the behaviour of others. Such social learning is effective without the need for the observer to experience feedback directly. More than twenty years later a series of studies on vicarious learning was undertaken…

  4. Computer Support for Vicarious Learning.

    Science.gov (United States)

    Monthienvichienchai, Rachada; Sasse, M. Angela

    This paper investigates how computer support for vicarious learning can be implemented by taking a principled approach to selecting and combining different media to capture educational dialogues. The main goal is to create vicarious learning materials of appropriate pedagogic content and production quality, and at the same time minimize the…

  5. Vicarious learning: a review of the literature.

    Science.gov (United States)

    Roberts, Debbie

    2010-01-01

    Experiential learning theory stresses the primacy of personal experience, and the literature suggests that direct clinical experience is required in order for learning to take place. However, raw or first-hand experience may not be the only mechanism by which students engage in experiential learning. There is a growing body of literature within higher education which suggests that students are able to use another's experience to learn: vicarious learning. This literature review aims to outline vicarious learning within a nursing context. Many of the studies regarding vicarious learning are situated within higher education in general; however, within the United States these relate more specifically to nursing students. The literature indicates the increasing global interest in this area. This paper reveals that whilst the literature offers a number of examples illustrating how vicarious learning takes place, opinion on the role of the lecturer is divided and requires further exploration and clarification. The implications for nurse education are discussed.

  6. Learning to fear a second-order stimulus following vicarious learning.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2017-04-01

    Vicarious fear learning refers to the acquisition of fear via observation of the fearful responses of others. The present study aims to extend current knowledge by exploring whether second-order vicarious fear learning can be demonstrated in children. That is, whether vicariously learnt fear responses for one stimulus can be elicited in a second stimulus associated with that initial stimulus. Results demonstrated that children's (5-11 years) fear responses for marsupials and caterpillars increased when they were seen with fearful faces compared to no faces. Additionally, the results indicated a second-order effect in which fear-related learning occurred for other animals seen together with the fear-paired animal, even though the animals were never observed with fearful faces themselves. Overall, the findings indicate that for children in this age group vicariously learnt fear-related responses for one stimulus can subsequently be observed for a second stimulus without it being experienced in a fear-related vicarious learning event. These findings may help to explain why some individuals do not recall involvement of a traumatic learning episode in the development of their fear of a specific stimulus.

  7. Learning to fear a second-order stimulus following vicarious learning

    OpenAIRE

    Reynolds, G; Field, AP; Askew, C

    2015-01-01

    Vicarious fear learning refers to the acquisition of fear via observation of the fearful responses of others. The present study aims to extend current knowledge by exploring whether second-order vicarious fear learning can be demonstrated in children. That is, whether vicariously learnt fear responses for one stimulus can be elicited in a second stimulus associated with that initial stimulus. Results demonstrated that children’s (5–11 years) fear responses for marsupials and caterpillars incr...

  8. Vicarious learning from human models in monkeys.

    Science.gov (United States)

    Falcone, Rossella; Brunamonti, Emiliano; Genovesio, Aldo

    2012-01-01

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

  9. Vicarious Acquisition Of Learned Helplessness

    Science.gov (United States)

    DeVellis, Robert F.; And Others

    1978-01-01

    Reports a study conducted to determine whether individuals who observed others experiencing noncontingency would develop learned helplessness vicariously. Subjects were 75 college female undergraduates. (MP)

  10. Vicarious learning from human models in monkeys.

    Directory of Open Access Journals (Sweden)

    Rossella Falcone

    Full Text Available We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

  11. Vicarious learning revisited: a contemporary behavior analytic interpretation.

    Science.gov (United States)

    Masia, C L; Chase, P N

    1997-03-01

    Beginning in the 1960s, social learning theorists argued that behavioral learning principles could not account for behavior acquired through observation. Such a viewpoint is still widely held today. This rejection of behavioral principles in explaining vicarious learning was based on three phenomena: (1) imitation that occurred without direct reinforcement of the observer's behavior; (2) imitation that occurred after a long delay following modeling; and (3) a greater probability of imitation of the model's reinforced behavior than of the model's nonreinforced or punished behavior. These observations convinced social learning theorists that cognitive variables were required to explain behavior. Such a viewpoint has progressed aggressively, as evidenced by the change in name from social learning theory to social cognitive theory, and has been accompanied by the inclusion of information-processing theory. Many criticisms of operant theory, however, have ignored the full range of behavioral concepts and principles that have been derived to account for complex behavior. This paper will discuss some problems with the social learning theory explanation of vicarious learning and provide an interpretation of vicarious learning from a contemporary behavior analytic viewpoint.

  12. Effects of Competition on Students' Self-Efficacy in Vicarious Learning

    Science.gov (United States)

    Chan, Joanne C. Y.; Lam, Shui-fong

    2008-01-01

    Background: Vicarious learning is one of the fundamental sources of self-efficacy that is frequently employed in educational settings. However, little research has investigated the effects of competition on students' writing self-efficacy when they engage in vicarious learning. Aim: This study compared the effects of competitive and…

  13. Inhibition of vicariously learned fear in children using positive modeling and prior exposure.

    Science.gov (United States)

    Askew, Chris; Reynolds, Gemma; Fielding-Smith, Sarah; Field, Andy P

    2016-02-01

    One of the challenges to conditioning models of fear acquisition is to explain how different individuals can experience similar learning events and only some of them subsequently develop fear. Understanding factors moderating the impact of learning events on fear acquisition is key to understanding the etiology and prevention of fear in childhood. This study investigates these moderators in the context of vicarious (observational) learning. Two experiments tested predictions that the acquisition or inhibition of fear via vicarious learning is driven by associative learning mechanisms similar to direct conditioning. In Experiment 1, 3 groups of children aged 7 to 9 years received 1 of 3 inhibitive information interventions - psychoeducation, factual information, or no information (control) - prior to taking part in a vicarious fear learning procedure. In Experiment 2, 3 groups of children aged 7 to 10 years received 1 of 3 observational learning interventions - positive modeling (immunization), observational familiarity (latent inhibition), or no prevention (control) - before vicarious fear learning. Results indicated that observationally delivered manipulations inhibited vicarious fear learning, while preventions presented via written information did not. These findings confirm that vicarious learning shares some of the characteristics of direct conditioning and can explain why not all individuals will develop fear following a vicarious learning event. They also suggest that the modality of inhibitive learning is important and should match the fear learning pathway for increased chances of inhibition. Finally, the results demonstrate that positive modeling is likely to be a particularly effective method for preventing fear-related observational learning in children. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Types of vicarious learning experienced by pre-dialysis patients

    Directory of Open Access Journals (Sweden)

    Kate McCarthy

    2015-04-01

    Full Text Available Objective: Haemodialysis and peritoneal dialysis renal replacement treatment options are in clinical equipoise, although the cost of haemodialysis to the National Health Service is £16,411/patient/year greater than peritoneal dialysis. Treatment decision-making takes place during the pre-dialysis year when estimated glomerular filtration rate drops to between 15 and 30 mL/min/1.73 m2. Renal disease can be familial, and the majority of patients have considerable health service experience when they approach these treatment decisions. Factors affecting patient treatment decisions are currently unknown. The objective of this article is to explore data from a wider study in specific relation to the types of vicarious learning experiences reported by pre-dialysis patients. Methods: A qualitative study utilised unstructured interviews and grounded theory analysis during the participant’s pre-dialysis year. The interview cohort comprised 20 pre-dialysis participants between 24 and 80 years of age. Grounded theory design entailed thematic sampling and analysis, scrutinised by secondary coding and checked with participants. Participants were recruited from routine renal clinics at two local hospitals when their estimated glomerular filtration rate was between 15 and 30 mL/min/1.73 m2. Results: Vicarious learning that contributed to treatment decision-making fell into three main categories: planned vicarious learning, unplanned vicarious learning and historical vicarious experiences. Conclusion: Exploration and acknowledgement of service users’ prior vicarious learning, by healthcare professionals, is important in understanding its potential influences on individuals’ treatment decision-making. This will enable healthcare professionals to challenge heuristic decisions based on limited information and to encourage analytic thought processes.

  15. Types of vicarious learning experienced by pre-dialysis patients.

    Science.gov (United States)

    McCarthy, Kate; Sturt, Jackie; Adams, Ann

    2015-01-01

    Haemodialysis and peritoneal dialysis renal replacement treatment options are in clinical equipoise, although the cost of haemodialysis to the National Health Service is £16,411/patient/year greater than peritoneal dialysis. Treatment decision-making takes place during the pre-dialysis year when estimated glomerular filtration rate drops to between 15 and 30 mL/min/1.73 m2. Renal disease can be familial, and the majority of patients have considerable health service experience when they approach these treatment decisions. Factors affecting patient treatment decisions are currently unknown. The objective of this article is to explore data from a wider study in specific relation to the types of vicarious learning experiences reported by pre-dialysis patients. A qualitative study utilised unstructured interviews and grounded theory analysis during the participant's pre-dialysis year. The interview cohort comprised 20 pre-dialysis participants between 24 and 80 years of age. Grounded theory design entailed thematic sampling and analysis, scrutinised by secondary coding and checked with participants. Participants were recruited from routine renal clinics at two local hospitals when their estimated glomerular filtration rate was between 15 and 30 mL/min/1.73 m2. Vicarious learning that contributed to treatment decision-making fell into three main categories: planned vicarious learning, unplanned vicarious learning and historical vicarious experiences. Exploration and acknowledgement of service users' prior vicarious learning, by healthcare professionals, is important in understanding its potential influences on individuals' treatment decision-making. This will enable healthcare professionals to challenge heuristic decisions based on limited information and to encourage analytic thought processes.

  16. A comparison of positive vicarious learning and verbal information for reducing vicariously learned fear

    OpenAIRE

    Reynolds, Gemma; Wasely, David; Dunne, Guler; Askew, Chris

    2017-01-01

    Research with children has demonstrated that both positive vicarious learning (modelling) and positive verbal information can reduce children’s acquired fear responses for a particular stimulus. However, this fear reduction appears to be more effective when the intervention pathway matches the initial fear learning pathway. That is, positive verbal information is a more effective intervention than positive modelling when fear is originally acquired via negative verbal information. Research ha...

  17. Vicarious learning during simulations: is it more effective than hands-on training?

    Science.gov (United States)

    Stegmann, Karsten; Pilz, Florian; Siebeck, Matthias; Fischer, Frank

    2012-10-01

    Doctor-patient communication skills are often fostered by using simulations with standardised patients (SPs). The efficiency of such experiences is greater if student observers learn at least as much from the simulation as do students who actually interact with the patient. This study aimed to investigate whether the type of simulation-based learning (learning by doing versus vicarious learning) and the order in which these activities are carried out (learning by doing → vicarious learning versus vicarious learning → learning by doing) have any effect on the acquisition of knowledge of effective doctor-patient communication strategies. In addition, we wished to examine the extent to which an observation script and a feedback formulation script affect knowledge acquisition in this domain. The sample consisted of 200 undergraduate medical students (126 female, 74 male). They participated in two separate simulation sessions, each of which was 30 minutes long and was followed by a collaborative peer feedback phase. Half of the students first performed (learning by doing) and then observed (vicarious learning) the simulation, and the other half participated in the reverse order. Knowledge of doctor-patient communication was measured before, between and after the simulations. Vicarious learning led to higher doctor-patient communication knowledge scores than learning by doing. The order in which vicarious learning was experienced had no influence. The inclusion of an observation script also enabled significantly greater learning in students to whom this script was given compared with students who were not supported in this way, but the presence of a feedback script had no effect. Students appear to learn at least as much, if not more, about doctor-patient communication by observing their peers interact with SPs as they do from interacting with SPs themselves. Instructional support for observing simulations in the form of observation scripts facilitates both…

  18. Vicarious extinction learning during reconsolidation neutralizes fear memory.

    Science.gov (United States)

    Golkar, Armita; Tjaden, Cathelijn; Kindt, Merel

    2017-05-01

    Previous studies have suggested that fear memories can be updated when recalled, a process referred to as reconsolidation. Given the beneficial effects of model-based safety learning (i.e. vicarious extinction) in preventing the recovery of short-term fear memory, we examined whether consolidated long-term fear memories could be updated with safety learning accomplished through vicarious extinction learning initiated within the reconsolidation time-window. We assessed this in a final sample of 19 participants that underwent a three-day within-subject fear-conditioning design, using fear-potentiated startle as our primary index of fear learning. On day 1, two fear-relevant stimuli (reinforced CS+s) were paired with shock (US) and a third stimulus served as a control (CS-). On day 2, one of the two previously reinforced stimuli (the reminded CS+) was presented once in order to reactivate the fear memory 10 min before vicarious extinction training was initiated for all CSs. The recovery of the fear memory was tested 24 h later. Vicarious extinction training conducted within the reconsolidation time window specifically prevented the recovery of the reactivated fear memory (p = 0.03), while leaving fear-potentiated startle responses to the non-reactivated cue intact (p = 0.62). These findings are relevant to both basic and clinical research, suggesting that a safe, non-invasive model-based exposure technique has the potential to enhance the efficiency and durability of anxiolytic therapies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Promoting Constructive Activities that Support Vicarious Learning during Computer-Based Instruction

    Science.gov (United States)

    Gholson, Barry; Craig, Scotty D.

    2006-01-01

    This article explores several ways computer-based instruction can be designed to support constructive activities and promote deep-level comprehension during vicarious learning. Vicarious learning, discussed in the first section, refers to knowledge acquisition under conditions in which the learner is not the addressee and does not physically…

  20. Stimulus fear relevance and the speed, magnitude, and robustness of vicariously learned fear.

    Science.gov (United States)

    Dunne, Güler; Reynolds, Gemma; Askew, Chris

    2017-08-01

    Superior learning for fear-relevant stimuli is typically indicated in the laboratory by faster acquisition of fear responses, greater learned fear, and enhanced resistance to extinction. Three experiments investigated the speed, magnitude, and robustness of UK children's (6-10 years; N = 290; 122 boys, 168 girls) vicariously learned fear responses for three types of stimuli. In two experiments, children were presented with pictures of novel animals (Australian marsupials) and flowers (fear-irrelevant stimuli) alone (control) or together with faces expressing fear or happiness. To determine learning speed the number of stimulus-face pairings seen by children was varied (1, 10, or 30 trials). Robustness of learning was examined via repeated extinction procedures over 3 weeks. A third experiment compared the magnitude and robustness of vicarious fear learning for snakes and marsupials. Significant increases in fear responses were found for snakes, marsupials and flowers. There was no indication that vicarious learning for marsupials was faster than for flowers. Moreover, vicariously learned fear was neither greater nor more robust for snakes compared to marsupials, or for marsupials compared to flowers. These findings suggest that for this age group stimulus fear relevance may have little influence on vicarious fear learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Vicarious Fear Learning Depends on Empathic Appraisals and Trait Empathy.

    Science.gov (United States)

    Olsson, Andreas; McMahon, Kibby; Papenberg, Goran; Zaki, Jamil; Bolger, Niall; Ochsner, Kevin N

    2016-01-01

    Empathy and vicarious learning of fear are increasingly understood as separate phenomena, but the interaction between the two remains poorly understood. We investigated how social (vicarious) fear learning is affected by empathic appraisals by asking participants to either enhance or decrease their empathic responses to another individual (the demonstrator), who received electric shocks paired with a predictive conditioned stimulus. A third group of participants received no appraisal instructions and responded naturally to the demonstrator. During a later test, participants who had enhanced their empathy evinced the strongest vicarious fear learning as measured by skin conductance responses to the conditioned stimulus in the absence of the demonstrator. Moreover, this effect was augmented in observers high in trait empathy. Our results suggest that a demonstrator's expression can serve as a "social" unconditioned stimulus (US), similar to a personally experienced US in Pavlovian fear conditioning, and that learning from a social US depends on both empathic appraisals and the observers' stable traits. © The Author(s) 2015.

  2. Vicarious Learning and Reduction of Fear in Children via Adult and Child Models.

    Science.gov (United States)

    Dunne, Güler; Askew, Chris

    2017-06-01

    Children can learn to fear stimuli vicariously, by observing adults' or peers' responses to them. Given that much of school-age children's time is typically spent with their peers, it is important to establish whether fear learning from peers is as effective or robust as learning from adults, and also whether peers can be successful positive models for reducing fear. During a vicarious fear learning procedure, children (6 to 10 years; N = 60) were shown images of novel animals together with images of adult or peer faces expressing fear. Later they saw their fear-paired animal again together with positive emotional adult or peer faces. Children's fear beliefs and avoidance for the animals increased following vicarious fear learning and decreased following positive vicarious counterconditioning. There was little evidence of differences in learning from adults and peers, demonstrating that for this age group peer models are effective models for both fear acquisition and reduction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Mehrdad Seirafi; Peter De Weerd; Alan J. Pegna; Beatrice de Gelder

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  4. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning problem…
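The alternating scheme this abstract sketches (a sparse coding step followed by a kernel update step) can be illustrated with a deliberately simplified, single-modality toy. Everything below, from the 1-D signal to the matching-pursuit coder and the gradient-style kernel update, is an invented sketch of the general idea, not the paper's actual bimodal audiovisual algorithm:

```python
# Toy sketch of alternating sparse coding and kernel learning for a
# shift-invariant dictionary. Single 1-D "signal", not audio + video;
# all sizes and shapes here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def mp_code(signal, kernels, n_atoms):
    """Greedy matching pursuit: repeatedly pick the best (kernel, shift) pair."""
    residual = signal.copy()
    K = kernels.shape[1]
    atoms = []
    for _ in range(n_atoms):
        best = None
        for k, kern in enumerate(kernels):
            # correlate the unit-norm kernel with the residual at every shift
            corr = np.correlate(residual, kern, mode="valid")
            t = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[t]) > abs(best[2]):
                best = (k, t, corr[t])
        k, t, a = best
        residual[t:t + K] -= a * kernels[k]  # subtract the chosen atom
        atoms.append((k, t, a))
    return atoms, residual

def learn_step(kernels, atoms, residual, lr=0.1):
    """Nudge each used kernel toward the residual it failed to explain."""
    K = kernels.shape[1]
    for k, t, a in atoms:
        kernels[k] += lr * a * residual[t:t + K]
        kernels[k] /= np.linalg.norm(kernels[k])  # keep unit norm
    return kernels

# toy signal built from two occurrences of one hidden unit-norm bump
true_kern = np.hanning(16)
true_kern /= np.linalg.norm(true_kern)
signal = np.zeros(128)
signal[10:26] += 2.0 * true_kern
signal[70:86] += 1.5 * true_kern

kernels = rng.standard_normal((3, 16))
kernels /= np.linalg.norm(kernels, axis=1, keepdims=True)

for _ in range(30):  # alternate the coding and learning steps
    atoms, residual = mp_code(signal, kernels, n_atoms=2)
    kernels = learn_step(kernels, atoms, residual)

atoms, residual = mp_code(signal, kernels, n_atoms=2)
print(round(float(np.linalg.norm(residual)), 3))  # reconstruction error
```

The paper's kernels are bimodal (joint audio and video components placed independently in space and time), so its coding step searches over both modalities at once; the toy above only conveys the alternation between sparse approximation and dictionary refinement.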

  5. Vicarious learning through capturing task-directed discussions

    Directory of Open Access Journals (Sweden)

    F. Dineen

    1999-12-01

    Full Text Available The research programme on vicarious learning, part of which we report in this paper, has been aimed at exploring the idea that learning can be facilitated by providing learners with access to the experiences of other learners. We use Bandura's term vicarious learning to describe this (Bandura, 1986), and we believe it to be a paradigm that offers particular promise when seen as an innovative way of exploiting recent technical advances in multimedia and distance learning technologies. It offers the prospect of a real alternative to the building of intelligent tutors (which directly address the problem of allowing learners access to dialogue, but which have proved largely intractable in practice) or to the direct support of live dialogues (which do not offer a solution to the problem of providing 'live' tutors, unless they are between peer learners). In the research reported here our main objectives were to develop techniques to facilitate learners' access to, especially, dialogues and discussions which have arisen when other learners were faced with similar issues or problems in understanding the material. This required us to investigate means of indexing and retrieving appropriate dialogues and build on these to create an advanced prototype system for use in educational settings.

  6. Vicarious Learning from Human Models in Monkeys

    OpenAIRE

    Falcone, Rossella; Brunamonti, Emiliano; Genovesio, Aldo

    2012-01-01

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was app...

  7. Effect of vicarious fear learning on children's heart rate responses and attentional bias for novel animals.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2014-10-01

    Research with children has shown that vicarious learning can result in changes to 2 of Lang's (1968) 3 anxiety response systems: subjective report and behavioral avoidance. The current study extended this research by exploring the effect of vicarious learning on physiological responses (Lang's final response system) and attentional bias. The study used Askew and Field's (2007) vicarious learning procedure and demonstrated fear-related increases in children's cognitive, behavioral, and physiological responses. Cognitive and behavioral changes were retested 1 week and 1 month later, and remained elevated. In addition, a visual search task demonstrated that fear-related vicarious learning creates an attentional bias for novel animals, which is moderated by increases in fear beliefs during learning. The findings demonstrate that vicarious learning leads to lasting changes in all 3 of Lang's anxiety response systems and is sufficient to create attentional bias to threat in children. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  8. The vicarious learning pathway to fear 40 years on.

    Science.gov (United States)

    Askew, Chris; Field, Andy P

    2008-10-01

    Forty years on from the initial idea that fears could be learnt vicariously through observing other people's responses to a situation or stimulus, this review looks at the evidence for this theory as an explanatory model of clinical fear. First, we review early experimental evidence that fears can be learnt vicariously before turning to the evidence from both primate and human research that clinical fears can be acquired in this way. Finally, we review recent evidence from research on non-anxious children. Throughout the review we highlight problems and areas for future research. We conclude by exploring the likely underlying mechanisms in the vicarious learning of fear and the resulting clinical implications.

  9. Comparing Learning from Productive Failure and Vicarious Failure

    Science.gov (United States)

    Kapur, Manu

    2014-01-01

    A total of 136 eighth-grade math students from 2 Singapore schools learned from either productive failure (PF) or vicarious failure (VF). PF students "generated" solutions to a complex problem targeting the concept of variance that they had not learned yet before receiving instruction on the targeted concept. VF students…

  10. Scanning and vicarious learning from adverse events in health care

    Directory of Open Access Journals (Sweden)

    2001-01-01

Studies have shown that serious adverse clinical events occur in approximately 3%-10% of acute care hospital admissions, and one third of these adverse events result in permanent disability or death. These findings have led to calls for national medical error reporting systems and for greater organizational learning by hospitals. But do hospitals and hospital personnel pay enough attention to such risk information that they might learn from each other's failures or adverse events? This paper gives an overview of the importance of scanning and vicarious learning from adverse events. In it I propose that health care organizations' attention and information focus, organizational affinity, and absorptive capacity may each influence scanning and vicarious learning outcomes. Implications for future research are discussed.

  11. Effect of vicarious fear learning on children's heart rate responses and attentional bias for novel animals

    OpenAIRE

    Reynolds, G; Field, AP; Askew, C

    2014-01-01

    Research with children has shown that vicarious learning can result in changes to 2 of Lang's (1968) 3 anxiety response systems: subjective report and behavioral avoidance. The current study extended this research by exploring the effect of vicarious learning on physiological responses (Lang's final response system) and attentional bias. The study used Askew and Field's (2007) vicarious learning procedure and demonstrated fear-related increases in children's cognitive, behavioral, and physiol...

  12. Effect of Vicarious Fear Learning on Children’s Heart Rate Responses and Attentional Bias for Novel Animals

    Science.gov (United States)

    2014-01-01

    Research with children has shown that vicarious learning can result in changes to 2 of Lang’s (1968) 3 anxiety response systems: subjective report and behavioral avoidance. The current study extended this research by exploring the effect of vicarious learning on physiological responses (Lang’s final response system) and attentional bias. The study used Askew and Field’s (2007) vicarious learning procedure and demonstrated fear-related increases in children’s cognitive, behavioral, and physiological responses. Cognitive and behavioral changes were retested 1 week and 1 month later, and remained elevated. In addition, a visual search task demonstrated that fear-related vicarious learning creates an attentional bias for novel animals, which is moderated by increases in fear beliefs during learning. The findings demonstrate that vicarious learning leads to lasting changes in all 3 of Lang’s anxiety response systems and is sufficient to create attentional bias to threat in children. PMID:25151521

  13. Enabling the Development of Student Teacher Professional Identity through Vicarious Learning during an Educational Excursion

    Science.gov (United States)

    Steenekamp, Karen; van der Merwe, Martyn; Mehmedova, Aygul Salieva

    2018-01-01

    This paper explores the views of student teachers who were provided vicarious learning opportunities during an educational excursion, and how the learning enabled them to develop their teacher professional identity. This qualitative research study, using a social-constructivist lens highlights how vicarious learning influenced student teachers'…

  14. Other people as means to a safe end: vicarious extinction blocks the return of learned fear.

    Science.gov (United States)

    Golkar, Armita; Selbing, Ida; Flygare, Oskar; Ohman, Arne; Olsson, Andreas

    2013-11-01

    Information about what is dangerous and safe in the environment is often transferred from other individuals through social forms of learning, such as observation. Past research has focused on the observational, or vicarious, acquisition of fears, but little is known about how social information can promote safety learning. To address this issue, we studied the effects of vicarious-extinction learning on the recovery of conditioned fear. Compared with a standard extinction procedure, vicarious extinction promoted better extinction and effectively blocked the return of previously learned fear. We confirmed that these effects could not be attributed to the presence of a learning model per se but were specifically driven by the model's experience of safety. Our results confirm that vicarious and direct emotional learning share important characteristics but that social-safety information promotes superior down-regulation of learned fear. These findings have implications for emotional learning, social-affective processes, and clinical practice.

  15. FACTORS INFLUENCING VICARIOUS LEARNING MECHANISM EFFECTIVENESS WITHIN ORGANIZATIONS

    OpenAIRE

    JOHN R. VOIT; COLIN G. DRURY

    2013-01-01

As organizations become larger, it becomes increasingly difficult to share lessons learned across their disconnected units, allowing individuals to learn vicariously from each other's experiences. This lessons-learned information is often unsolicited by the recipient group or individual and requires an individual or group to react to the information to yield benefits for the organization. Data were collected using 39 interviews and 582 survey responses that proved the effects of information usefu...

  16. Teaching parents about responsive feeding through a vicarious learning video: A pilot randomized controlled trial

    Science.gov (United States)

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using com...

  17. Hybrid E-Learning Tool TransLearning: Video Storytelling to Foster Vicarious Learning within Multi-Stakeholder Collaboration Networks

    Science.gov (United States)

    van der Meij, Marjoleine G.; Kupper, Frank; Beers, Pieter J.; Broerse, Jacqueline E. W.

    2016-01-01

    E-learning and storytelling approaches can support informal vicarious learning within geographically widely distributed multi-stakeholder collaboration networks. This case study evaluates hybrid e-learning and video-storytelling approach "TransLearning" by investigation into how its storytelling e-tool supported informal vicarious…

  18. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  19. Vicarious extinction learning during reconsolidation neutralizes fear memory

    NARCIS (Netherlands)

    Golkar, A.; Tjaden, C.; Kindt, M.

    Background: Previous studies have suggested that fear memories can be updated when recalled, a process referred to as reconsolidation. Given the beneficial effects of model-based safety learning (i.e. vicarious extinction) in preventing the recovery of short-term fear memory, we examined whether

  20. Teaching Parents about Responsive Feeding through a Vicarious Learning Video: A Pilot Randomized Controlled Trial

    Science.gov (United States)

    Ledoux, Tracey; Robinson, Jessica; Baranowski, Tom; O'Connor, Daniel P.

    2018-01-01

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using community-based participatory research methods.…

  1. Vicarious reinforcement learning signals when instructing others.

    Science.gov (United States)

    Apps, Matthew A J; Lesage, Elise; Ramnani, Narender

    2015-02-18

Reinforcement learning (RL) theory posits that learning is driven by discrepancies between the predicted and actual outcomes of actions (prediction errors [PEs]). In social environments, learning is often guided by similar RL mechanisms. For example, teachers monitor the actions of students and provide feedback to them. This feedback evokes PEs in students that guide their learning. We report the first study that investigates the neural mechanisms that underpin RL signals in the brain of a teacher. Neurons in the anterior cingulate cortex (ACC) signal PEs when learning from the outcomes of one's own actions but also signal information when outcomes are received by others. Does a teacher's ACC signal PEs when monitoring a student's learning? Using fMRI, we studied brain activity in human subjects (teachers) as they taught a confederate (student) action-outcome associations by providing positive or negative feedback. We examined activity time-locked to the students' responses, when teachers infer student predictions and know actual outcomes. We fitted a RL-based computational model to the behavior of the student to characterize their learning, and examined whether a teacher's ACC signals when a student's predictions are wrong. In line with our hypothesis, activity in the teacher's ACC covaried with the PE values in the model. Additionally, activity in the teacher's insula and ventromedial prefrontal cortex covaried with the predicted value according to the student. Our findings highlight that the ACC signals PEs vicariously for others' erroneous predictions, when monitoring and instructing their learning. These results suggest that RL mechanisms, processed vicariously, may underpin and facilitate teaching behaviors.
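    The prediction-error mechanism this abstract describes can be sketched with a minimal Rescorla-Wagner-style value update. This is an illustrative assumption only: the function name, learning rate, and feedback sequence below are hypothetical and do not reproduce the authors' actual fitted model.

    ```python
    # Minimal sketch of a prediction-error (PE) learning rule:
    # PE = actual outcome - predicted value, then a learning-rate-weighted update.

    def rw_update(value, outcome, alpha=0.5):
        """Return (prediction_error, updated_value) for one trial."""
        pe = outcome - value           # discrepancy between outcome and prediction
        return pe, value + alpha * pe  # move the prediction toward the outcome

    value = 0.0
    outcomes = [1, 1, 0, 1]  # hypothetical feedback sequence (1 = positive feedback)
    for outcome in outcomes:
        pe, value = rw_update(value, outcome)
    # after these trials: value = 0.6875, last pe = 0.625
    ```

    In the study's framing, a teacher's ACC activity covaried with the `pe` term computed from the student's behavior, not from the teacher's own outcomes.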

  2. Neural signals of vicarious extinction learning.

    Science.gov (United States)

    Golkar, Armita; Haaker, Jan; Selbing, Ida; Olsson, Andreas

    2016-10-01

Social transmission of both threat and safety is ubiquitous, but little is known about the neural circuitry underlying vicarious safety learning. This is surprising given that these processes are critical to flexibly adapt to a changeable environment. To address how the expression of previously learned fears can be modified by the transmission of social information, two conditioned stimuli (CS+s) were paired with shock and a third was not. During extinction, we held constant the amount of direct, non-reinforced exposure to the CSs (i.e. direct extinction), and critically varied whether another individual, acting as a demonstrator, experienced safety (CS+vic safety) or aversive reinforcement (CS+vic reinf). During extinction, ventromedial prefrontal cortex (vmPFC) responses to the CS+vic reinf increased but decreased to the CS+vic safety. This pattern of vmPFC activity was reversed during a subsequent fear reinstatement test, suggesting a temporal shift in the involvement of the vmPFC. Moreover, only the CS+vic reinf association recovered. Our data suggest that vicarious extinction prevents the return of conditioned fear responses, and that this efficacy is reflected by diminished vmPFC involvement during extinction learning. The present findings may have important implications for understanding how social information influences the persistence of fear memories in individuals suffering from emotional disorders.

  3. Examining the Effect of Small Group Discussions and Question Prompts on Vicarious Learning Outcomes

    Science.gov (United States)

    Lee, Yekyung; Ertmer, Peggy A.

    2006-01-01

    This study investigated the effect of group discussions and question prompts on students' vicarious learning experiences. Vicarious experiences were delivered to 65 preservice teachers via VisionQuest, a Web site that provided examples of successful technology integration. A 2x2 factorial research design employed group discussions and question…

  4. Vicarious learning and the development of fears in childhood.

    Science.gov (United States)

    Askew, Chris; Field, Andy P

    2007-11-01

    Vicarious learning has long been assumed to be an indirect pathway to fear; however, there is only retrospective evidence that children acquire fears in this way. In two experiments, children (aged 7-9 years) were exposed to pictures of novel animals paired with pictures of either scared, happy or no facial expressions to see the impact on their fear cognitions and avoidance behavior about the animals. In Experiment 1, directly (self-report) and indirectly measured (affective priming) fear attitudes towards the animals changed congruent with the facial expressions with which these were paired. The indirectly measured fear beliefs persisted up to 3 months. Experiment 2 showed that children took significantly longer to approach a box they believed to contain an animal they had previously seen paired with scared faces. These results support theories of fear acquisition that suppose that vicarious learning affects cognitive and behavioral fear emotion, and suggest possibilities for interventions to weaken fear acquired in this way.

  5. The Deep-Level-Reasoning-Question Effect: The Role of Dialogue and Deep-Level-Reasoning Questions during Vicarious Learning

    Science.gov (United States)

    Craig, Scotty D.; Sullins, Jeremiah; Witherspoon, Amy; Gholson, Barry

    2006-01-01

    We investigated the impact of dialogue and deep-level-reasoning questions on vicarious learning in 2 studies with undergraduates. In Experiment 1, participants learned material by interacting with AutoTutor or by viewing 1 of 4 vicarious learning conditions: a noninteractive recorded version of the AutoTutor dialogues, a dialogue with a…

  6. Vicarious learning and unlearning of fear in childhood via mother and stranger models.

    Science.gov (United States)

    Dunne, Güler; Askew, Chris

    2013-10-01

    Evidence shows that anxiety runs in families. One reason may be that children are particularly susceptible to learning fear from their parents. The current study compared children's fear beliefs and avoidance preferences for animals following positive or fearful modeling by mothers and strangers in vicarious learning and unlearning procedures. Children aged 6 to 10 years (N = 60) were exposed to pictures of novel animals either alone (control) or together with pictures of their mother or a stranger expressing fear or happiness. During unlearning (counterconditioning), children saw each animal again with their mother or a stranger expressing the opposite facial expression. Following vicarious learning, children's fear beliefs increased for animals seen with scared faces and this effect was the same whether fear was modeled by mothers or strangers. Fear beliefs and avoidance preferences decreased following positive counterconditioning and increased following fear counterconditioning. Again, learning was the same whether the model was the child's mother or a stranger. These findings indicate that children in this age group can vicariously learn and unlearn fear-related cognitions from both strangers and mothers. This has implications for our understanding of fear acquisition and the development of early interventions to prevent and reverse childhood fears and phobias.

  7. Concern for Others Leads to Vicarious Optimism.

    Science.gov (United States)

    Kappes, Andreas; Faber, Nadira S; Kahane, Guy; Savulescu, Julian; Crockett, Molly J

    2018-03-01

    An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people's futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.

  8. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  9. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  10. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions.

  11. Promoting Vicarious Learning of Physics Using Deep Questions with Explanations

    Science.gov (United States)

    Craig, Scotty D.; Gholson, Barry; Brittingham, Joshua K.; Williams, Joah L.; Shubeck, Keith T.

    2012-01-01

    Two experiments explored the role of vicarious "self" explanations in facilitating student learning gains during computer-presented instruction. In Exp. 1, college students with low or high knowledge on Newton's laws were tested in four conditions: (a) monologue (M), (b) questions (Q), (c) explanation (E), and (d) question + explanation (Q + E).…

  12. Stimulus fear-relevance and the vicarious learning pathway to childhood fears

    OpenAIRE

    Askew, C.; Dunne, G.; Ozdil, A.; Reynolds, G.; Field, A.P.

    2013-01-01

    Enhanced fear learning for fear-relevant stimuli has been demonstrated in procedures with adults in the laboratory. Three experiments investigated the effect of stimulus fear-relevance on vicarious fear learning in children (aged 6-11 years). Pictures of stimuli with different levels of fear-relevance (flowers, caterpillars, snakes, worms, and Australian marsupials) were presented alone or together with scared faces. In line with previous studies, children's fear beliefs and avoidance prefere...

  13. Differential influence of social versus isolate housing on vicarious fear learning in adolescent mice.

    Science.gov (United States)

    Panksepp, Jules B; Lahvis, Garet P

    2016-04-01

Laboratory rodents can adopt the pain or fear of nearby conspecifics. This phenotype conceptually lies within the domain of empathy, a bio-psycho-social process through which individuals come to share each other's emotion. Using a model of cue-conditioned fear, we show here that the expression of vicarious fear varies with respect to whether mice are raised socially or in solitude during adolescence. The impact of the adolescent housing environment was selective: (a) vicarious fear was more influenced than directly acquired fear, (b) "long-term" (24-h postconditioning) vicarious fear memories were stronger than "short-term" (15-min postconditioning) memories in socially reared mice whereas the opposite was true for isolate mice, and (c) females were more fearful than males. Housing differences during adolescence did not alter the general mobility of mice or their vocal response to receiving the unconditioned stimulus. Previous work with this mouse model underscored a genetic influence on vicarious fear learning, and the present study complements these findings by elucidating an interaction between the adolescent social environment and vicarious experience. Collectively, these findings are relevant to developing models of empathy amenable to mechanistic exploitation in the laboratory.

  14. Vicariously learned helplessness: the role of perceived dominance and prestige of a model.

    Science.gov (United States)

    Chambers, Sheridan; Hammonds, Frank

    2014-01-01

    Prior research has examined the relationship between various model characteristics (e.g., age, competence, similarity) and the likelihood that the observers will experience vicariously learned helplessness. However, no research in this area has investigated dominance as a relevant model characteristic. This study explored whether the vicarious acquisition of learned helplessness could be mediated by the perceived dominance of a model. Participants observed a model attempting to solve anagrams. Across participant groups, the model displayed either dominant or nondominant characteristics and was either successful or unsuccessful at solving the anagrams. The characteristics displayed by the model significantly affected observers' ratings of his dominance and prestige. After viewing the model, participants attempted to solve 40 anagrams. When the dominant model was successful, observers solved significantly more anagrams than when he was unsuccessful. This effect was not found when the model was nondominant.

  15. Text-to-audiovisual speech synthesizer for children with learning disabilities.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun

    2013-01-01

    Learning disabilities affect the ability of children to learn, despite their having normal intelligence. Assistive tools can highly increase functional capabilities of children with learning disorders such as writing, reading, or listening. In this article, we describe a text-to-audiovisual synthesizer that can serve as an assistive tool for such children. The system automatically converts an input text to audiovisual speech, providing synchronization of the head, eye, and lip movements of the three-dimensional face model with appropriate facial expressions and word flow of the text. The proposed system can enhance speech perception and help children having learning deficits to improve their chances of success.

  16. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  17. Learning from the Pros: Influence of Web-Based Expert Commentary on Vicarious Learning about Financial Markets

    Science.gov (United States)

    Ford, Matthew W.; Kent, Daniel W.; Devoto, Steven

    2007-01-01

    Web-based financial commentary, in which experts routinely express market-related thought processes, is proposed as a means for college students to learn vicariously about financial markets. Undergraduate business school students from a regional university were exposed to expert market commentary from a single financial Web site for a 6-week…

  18. Stimulus fear-relevance and the vicarious learning pathway to childhood fears.

    Science.gov (United States)

    Askew, Chris; Dunne, Güler; Özdil, Zehra; Reynolds, Gemma; Field, Andy P

    2013-10-01

    Enhanced fear learning for fear-relevant stimuli has been demonstrated in procedures with adults in the laboratory. Three experiments investigated the effect of stimulus fear-relevance on vicarious fear learning in children (aged 6-11 years). Pictures of stimuli with different levels of fear-relevance (flowers, caterpillars, snakes, worms, and Australian marsupials) were presented alone or together with scared faces. In line with previous studies, children's fear beliefs and avoidance preferences increased for stimuli they had seen with scared faces. However, in contrast to evidence with adults, learning was mostly similar for all stimulus types irrespective of fear-relevance. The results support a proposal that stimulus preparedness is bypassed when children observationally learn threat-related information from adults.

  19. Vicarious Reinforcement in Rhesus Macaques (Macaca mulatta)

    Directory of Open Access Journals (Sweden)

    Steve W. C. Chang

    2011-03-01

What happens to others profoundly influences our own behavior. Such other-regarding outcomes can drive observational learning, as well as motivate cooperation, charity, empathy, and even spite. Vicarious reinforcement may serve as one of the critical mechanisms mediating the influence of other-regarding outcomes on behavior and decision-making in groups. Here we show that rhesus macaques spontaneously derive vicarious reinforcement from observing rewards given to another monkey, and that this reinforcement can motivate them to subsequently deliver or withhold rewards from the other animal. We exploited Pavlovian and instrumental conditioning to associate rewards to self (M1) and/or rewards to another monkey (M2) with visual cues. M1s made more errors in the instrumental trials when cues predicted reward to M2 compared to when cues predicted reward to M1, but made even more errors when cues predicted reward to no one. In subsequent preference tests between pairs of conditioned cues, M1s preferred cues paired with reward to M2 over cues paired with reward to no one. By contrast, M1s preferred cues paired with reward to self over cues paired with reward to both monkeys simultaneously. Rates of attention to M2 strongly predicted the strength and valence of vicarious reinforcement. These patterns of behavior, which were absent in nonsocial control trials, are consistent with vicarious reinforcement based upon sensitivity to observed, or counterfactual, outcomes with respect to another individual. Vicarious reward may play a critical role in shaping cooperation and competition, as well as motivating observational learning and group coordination in rhesus macaques, much as it does in humans. We propose that vicarious reinforcement signals mediate these behaviors via homologous neural circuits involved in reinforcement learning and decision-making.

  20. Vicarious reinforcement in rhesus macaques (Macaca mulatta).

    Science.gov (United States)

    Chang, Steve W C; Winecoff, Amy A; Platt, Michael L

    2011-01-01

    What happens to others profoundly influences our own behavior. Such other-regarding outcomes can drive observational learning, as well as motivate cooperation, charity, empathy, and even spite. Vicarious reinforcement may serve as one of the critical mechanisms mediating the influence of other-regarding outcomes on behavior and decision-making in groups. Here we show that rhesus macaques spontaneously derive vicarious reinforcement from observing rewards given to another monkey, and that this reinforcement can motivate them to subsequently deliver or withhold rewards from the other animal. We exploited Pavlovian and instrumental conditioning to associate rewards to self (M1) and/or rewards to another monkey (M2) with visual cues. M1s made more errors in the instrumental trials when cues predicted reward to M2 compared to when cues predicted reward to M1, but made even more errors when cues predicted reward to no one. In subsequent preference tests between pairs of conditioned cues, M1s preferred cues paired with reward to M2 over cues paired with reward to no one. By contrast, M1s preferred cues paired with reward to self over cues paired with reward to both monkeys simultaneously. Rates of attention to M2 strongly predicted the strength and valence of vicarious reinforcement. These patterns of behavior, which were absent in non-social control trials, are consistent with vicarious reinforcement based upon sensitivity to observed, or counterfactual, outcomes with respect to another individual. Vicarious reward may play a critical role in shaping cooperation and competition, as well as motivating observational learning and group coordination in rhesus macaques, much as it does in humans. We propose that vicarious reinforcement signals mediate these behaviors via homologous neural circuits involved in reinforcement learning and decision-making.
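
    The vicarious-reinforcement account in these two records lends itself to a small worked illustration. The sketch below is a toy model, not the authors' analysis: the delta-rule form, the learning rate `alpha`, and the vicarious weight `w_other` are all illustrative assumptions. An observer updates each cue's value from its own reward plus a weighted vicarious reward delivered to the partner, which reproduces the reported preference ordering (reward to self > reward to other > reward to no one).

```python
# Toy sketch of vicarious reinforcement as cue-value learning.
# Hypothetical parameters: alpha (learning rate) and w_other, the weight
# an observer (M1) places on rewards delivered to a partner (M2).

def update_value(value, r_self, r_other, alpha=0.2, w_other=0.5):
    """One delta-rule update; vicarious reward enters as w_other * r_other."""
    outcome = r_self + w_other * r_other
    return value + alpha * (outcome - value)

# Three cue types from the study: reward to self, reward to other, reward to no one.
cues = {"self": (1.0, 0.0), "other": (0.0, 1.0), "neither": (0.0, 0.0)}
values = {name: 0.0 for name in cues}

for _ in range(50):  # repeated conditioning trials
    for name, (r_self, r_other) in cues.items():
        values[name] = update_value(values[name], r_self, r_other)

# With any w_other > 0, the learned ordering matches the reported preferences.
assert values["self"] > values["other"] > values["neither"]
```

    Setting `w_other` to zero (no vicarious reinforcement) would collapse the "other" and "neither" cues to the same value, so the behavioral preference between them is what the weight is meant to capture.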

  1. Macaque monkeys can learn token values from human models through vicarious reward.

    Science.gov (United States)

    Bevacqua, Sara; Cerasti, Erika; Falcone, Rossella; Cervelloni, Milena; Brunamonti, Emiliano; Ferraina, Stefano; Genovesio, Aldo

    2013-01-01

    Monkeys can learn the symbolic meaning of tokens, and exchange them to get a reward. Monkeys can also learn the symbolic value of a token by observing conspecifics but it is not clear if they can learn passively by observing other actors, e.g., humans. To answer this question, we tested two monkeys in a token exchange paradigm in three experiments. Monkeys learned token values through observation of human models exchanging them. We used, after a phase of object familiarization, different sets of tokens. One token of each set was rewarded with a bit of apple. Other tokens had zero value (neutral tokens). Each token was presented only in one set. During the observation phase, monkeys watched the human model exchange tokens and watched them consume rewards (vicarious rewards). In the test phase, the monkeys were asked to exchange one of the tokens for food reward. Sets of three tokens were used in the first experiment and sets of two tokens were used in the second and third experiments. The valuable token was presented with different probabilities in the observation phase during the first and second experiments in which the monkeys exchanged the valuable token more frequently than any of the neutral tokens. The third experiments examined the effect of unequal probabilities. Our results support the view that monkeys can learn from non-conspecific actors through vicarious reward, even a symbolic task like the token-exchange task.

  2. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....

  3. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....
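
    The truncated abstracts above concern integrating correlated audio and visual stimuli for target tracking. As a hedged illustration of the general idea (not the paper's actual neural circuit), the sketch below fuses two noisy bearing estimates by inverse-variance weighting, one standard account of how multisensory integration can improve both accuracy and precision; the function name, parameters, and numbers are all hypothetical.

```python
# Minimal sketch of reliability-weighted audio-visual cue fusion.
# Each modality supplies a noisy bearing estimate (degrees) with an
# associated variance; the fused estimate weights each cue by its
# inverse variance, so the more reliable cue dominates.

def fuse(bearing_audio, var_audio, bearing_visual, var_visual):
    w_a = 1.0 / var_audio
    w_v = 1.0 / var_visual
    fused_bearing = (w_a * bearing_audio + w_v * bearing_visual) / (w_a + w_v)
    fused_variance = 1.0 / (w_a + w_v)  # always below either input variance
    return fused_bearing, fused_variance

# Vision is typically the more precise spatial cue, so it pulls the estimate:
bearing, variance = fuse(bearing_audio=10.0, var_audio=4.0,
                         bearing_visual=2.0, var_visual=1.0)
# The fused variance (0.8) is lower than either unimodal variance,
# mirroring the reported gain in precision from combining modalities.
```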

  4. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

    2010-06-01

    Full Text Available An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order

  5. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Science.gov (United States)

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be

  6. Vicarious shame.

    Science.gov (United States)

    Welten, Stephanie C M; Zeelenberg, Marcel; Breugelmans, Seger M

    2012-01-01

    We examined an account of vicarious shame that explains how people can experience a self-conscious emotion for the behaviour of another person. Two divergent processes have been put forward to explain how another's behaviour links to the self. The group-based emotion account explains vicarious shame in terms of an in-group member threatening one's social identity by behaving shamefully. The empathy account explains vicarious shame in terms of empathic perspective taking; people imagine themselves in another's shameful behaviour. In three studies using autobiographical recall and experimental inductions, we revealed that both processes can explain why vicarious shame arises in different situations, what variation can be observed in the experience of vicarious shame, and how all vicarious shame can be related to a threat to the self. Results are integrated in a functional account of shame.

  7. Reductions in Children's Vicariously Learnt Avoidance and Heart Rate Responses Using Positive Modeling.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2016-03-23

    Recent research has indicated that vicarious learning can lead to increases in children's fear beliefs and avoidance preferences for stimuli and that these fear responses can subsequently be reversed using positive modeling (counterconditioning). The current study investigated children's vicariously acquired avoidance behavior, physiological responses (heart rate), and attentional bias for stimuli and whether these could also be reduced via counterconditioning. Ninety-six (49 boys, 47 girls) 7- to 11-year-olds received vicarious fear learning for novel stimuli and were then randomly assigned to a counterconditioning, extinction, or control group. Fear beliefs and avoidance preferences were measured pre- and post-learning, whereas avoidance behavior, heart rate, and attentional bias were all measured post-learning. Control group children showed increases in fear beliefs and avoidance preferences for animals seen in vicarious fear learning trials. In addition, significantly greater avoidance behavior, heart rate responding, and attentional bias were observed for these animals compared to a control animal. In contrast, vicariously acquired avoidance preferences of children in the counterconditioning group were significantly reduced post-positive modeling, and these children also did not show the heightened heart rate responding to fear-paired animals. Children in the extinction group demonstrated comparable responses to the control group; thus the extinction procedure showed no effect on any fear measures. The findings suggest that counterconditioning with positive modelling can be used as an effective early intervention to reduce the behavioral and physiological effects of vicarious fear learning in childhood.

  8. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  9. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  10. Effects of MK-801 on vicarious trial-and-error and reversal of olfactory discrimination learning in weanling rats.

    Science.gov (United States)

    Griesbach, G S; Hu, D; Amsel, A

    1998-12-01

    The effects of dizocilpine maleate (MK-801) on vicarious trial-and-error (VTE), and on simultaneous olfactory discrimination learning and its reversal, were observed in weanling rats. The term VTE was used by Tolman (The determiners of behavior at a choice point. Psychol. Rev. 1938;46:318-336), who described it as conflict-like behavior at a choice-point in simultaneous discrimination learning. It takes the form of head movements from one stimulus to the other, and has recently been proposed by Amsel (Hippocampal function in the rat: cognitive mapping or vicarious trial-and-error? Hippocampus, 1993;3:251-256) as related to hippocampal, nonspatial function during this learning. Weanling male rats received systemic MK-801 either 30 min before the onset of olfactory discrimination training and its reversal, or only before its reversal. The MK-801-treated animals needed significantly more sessions to acquire the discrimination and showed significantly fewer VTEs in the acquisition phase of learning. Impaired reversal learning was shown only when MK-801 was administered during the reversal-learning phase, itself, and not when it was administered throughout both phases.

  11. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    Science.gov (United States)

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high fidelity simulation has become increasingly popular in nursing education to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t = 2.38) were found in relation to the transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge, which is vital to quality educational outcomes. Copyright © 2017.

  12. Academic e-learning experience in the enhancement of open access audiovisual and media education

    OpenAIRE

    Pacholak, Anna; Sidor, Dorota

    2015-01-01

    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM) and is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal for a comprehensive and open programme of audiovisual (media) education.

  13. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    Science.gov (United States)

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  14. Career Coaches as a Source of Vicarious Learning for Racial and Ethnic Minority PhD Students in the Biomedical Sciences: A Qualitative Study.

    Science.gov (United States)

    Williams, Simon N; Thakore, Bhoomi K; McGee, Richard

    2016-01-01

    Many recent mentoring initiatives have sought to help improve the proportion of underrepresented racial and ethnic minorities (URMs) in academic positions across the biomedical sciences. However, the intractable nature of the problem of underrepresentation suggests that many young scientists may require supplemental career development beyond what many mentors are able to offer. As an adjunct to traditional scientific mentoring, we created a novel academic career "coaching" intervention for PhD students in the biomedical sciences. To determine whether and how academic career coaches can provide effective career-development-related learning experiences for URM PhD students in the biomedical sciences. We focus specifically on vicarious learning experiences, where individuals learn indirectly through the experiences of others. The intervention is being tested as part of a longitudinal randomized control trial (RCT). Here, we describe a nested qualitative study, using a framework approach to analyze data from a total of 48 semi-structured interviews from 24 URM PhD students (2 interviews per participant, 1 at baseline, 1 at 12-month follow-up) (16 female, 8 male; 11 Black, 12 Hispanic, 1 Native-American). We explored the role of the coach as a source of vicarious learning, in relation to the students' goal of being future biomedical science faculty. Coaches were resources through which most students in the study were able to learn vicariously about how to pursue, and succeed within, an academic career. Coaches were particularly useful in instances where students' research mentors are unable to provide such vicarious learning opportunities, for example because the mentor is too busy to have career-related discussions with a student, or because they have, or value, a different type of academic career to the type the student hopes to achieve. 
Coaching can be an important way to address the lack of structured career development that students receive in their home training

  15. Spontaneous eye movements and trait empathy predict vicarious learning of fear.

    Science.gov (United States)

    Kleberg, Johan L; Selbing, Ida; Lundqvist, Daniel; Hofvander, Björn; Olsson, Andreas

    2015-12-01

    Learning to predict dangerous outcomes is important to survival. In humans, this kind of learning is often transmitted through the observation of others' emotional responses. We analyzed eye movements during an observational/vicarious fear learning procedure, in which healthy participants (N=33) watched another individual ('learning model') receiving aversive treatment (shocks) paired with a predictive conditioned stimulus (CS+), but not a control stimulus (CS-). Participants' gaze pattern towards the model differentiated as a function of whether the CS was predictive or not of a shock to the model. Consistent with our hypothesis that the face of a conspecific in distress can act as an unconditioned stimulus (US), we found that the total fixation time at a learning model's face increased when the CS+ was shown. Furthermore, we found that the total fixation time at the CS+ during learning predicted participants' conditioned responses (CRs) at a later test in the absence of the model. We also demonstrated that trait empathy was associated with stronger CRs, and that autistic traits were positively related to autonomic reactions to watching the model receiving the aversive treatment. Our results have implications for both healthy and dysfunctional socio-emotional learning. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Information about the model's unconditioned stimulus and response in vicarious classical conditioning.

    Science.gov (United States)

    Hygge, S

    1976-06-01

    Four groups with 16 observers each participated in a differential, vicarious conditioning experiment with skin conductance responses as the dependent variable. The information available to the observer about the model's unconditioned stimulus and response was varied in a 2 X 2 factorial design. Results clearly showed that information about the model's unconditioned stimulus (a high or low dB level) was not necessary for vicarious instigation, but that information about the unconditioned response (a high or low emotional aversiveness) was necessary. Data for conditioning of responses showed almost identical patterns to those for vicarious instigation. To explain the results, a distinction between factors necessary for the development and elicitation of vicariously instigated responses was introduced, and the effectiveness of information about the model's response on the elicitation of vicariously instigated responses was considered in terms of an expansion of Bandura's social learning theory.

  17. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  18. Impact of Vicarious Learning Experiences and Goal Setting on Preservice Teachers' Self-Efficacy for Technology Integration: A Pilot Study.

    Science.gov (United States)

    Wang, Ling; Ertmer, Peggy A.

    This pilot study was designed to explore how vicarious learning experiences and goal setting influence preservice teachers' self-efficacy for integrating technology into the classroom. Twenty undergraduate students who were enrolled in an introductory educational technology course at a large midwestern university participated and were assigned…

  19. Vicarious trial-and-error behavior and hippocampal cytochrome oxidase activity during Y-maze discrimination learning in the rat.

    Science.gov (United States)

    Hu, Dan; Xu, Xiaojuan; Gonzalez-Lima, Francisco

    2006-03-01

    The present study investigated whether more vicarious trial-and-error (VTE) behavior, defined by head movement from one stimulus to another at a choice point during simultaneous discriminations, led to better visual discrimination learning in a Y-maze, and whether VTE behavior was a function of the hippocampus by measuring regional brain cytochrome oxidase (C.O.) activity, an index of neuronal metabolic activity. The results showed that the more VTEs a rat made, the better the rat learned the visual discrimination. Furthermore, both learning and VTE behavior during learning were correlated to C.O. activity in the hippocampus, suggesting that the hippocampus plays a role in VTE behavior during discrimination learning.

  20. Using Audiovisual TV Interviews to Create Visible Authors that Reduce the Learning Gap between Native and Non-Native Language Speakers

    Science.gov (United States)

    Inglese, Terry; Mayer, Richard E.; Rigotti, Francesca

    2007-01-01

    Can archives of audiovisual TV interviews be used to make authors more visible to students, and thereby reduce the learning gap between native and non-native language speakers in college classes? We examined students in a college course who learned about one scholar's ideas through watching an audiovisual TV interview (i.e., visible author format)…

  1. The role of empathy in experiencing vicarious anxiety.

    Science.gov (United States)

    Shu, Jocelyn; Hassell, Samuel; Weber, Jochen; Ochsner, Kevin N; Mobbs, Dean

    2017-08-01

    With depictions of others facing threats common in the media, the experience of vicarious anxiety may be prevalent in the general population. However, the phenomenon of vicarious anxiety-the experience of anxiety in response to observing others expressing anxiety-and the interpersonal mechanisms underlying it have not been fully investigated in prior research. In 4 studies, we investigate the role of empathy in experiencing vicarious anxiety, using film clips depicting target victims facing threats. In Studies 1 and 2, trait emotional empathy was associated with greater self-reported anxiety when observing target victims, and with perceiving greater anxiety to be experienced by the targets. Study 3 extended these findings by demonstrating that trait empathic concern-the tendency to feel concern and compassion for others-was associated with experiencing vicarious anxiety, whereas trait personal distress-the tendency to experience distress in stressful situations-was not. Study 4 manipulated state empathy to establish a causal relationship between empathy and experience of vicarious anxiety. Participants who took an empathic perspective when observing target victims, as compared to those who took an objective perspective using reappraisal-based strategies, reported experiencing greater anxiety, risk-aversion, and sleep disruption the following night. These results highlight the impact of one's social environment on experiencing anxiety, particularly for those who are highly empathic. In addition, these findings have implications for extending basic models of anxiety to incorporate interpersonal processes, understanding the role of empathy in social learning, and potential applications for therapeutic contexts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Concurrent Unimodal Learning Enhances Multisensory Responses of Bi-Directional Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied as a non-holonomic robotic agent that must orient itself towards a moving audio-visual target. The circuit continuously learns the best...

  3. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. © 2013 American Association of Anatomists.

  4. Independent Interactive Inquiry-Based Learning Modules Using Audio-Visual Instruction In Statistics

    OpenAIRE

    McDaniel, Scott N.; Green, Lisa

    2012-01-01

    Simulations can make complex ideas easier for students to visualize and understand. It has been shown that guidance in the use of these simulations enhances students' learning. This paper describes the implementation and evaluation of the Independent Interactive Inquiry-based (I3) Learning Modules, which use existing open-source Java applets combined with audio-visual instruction. Students are guided to discover and visualize important concepts in post-calculus and algebra-based courses in probability and statistics.

  5. Learning cardiopulmonary resuscitation theory with face-to-face versus audiovisual instruction for secondary school students: a randomized controlled trial.

    Science.gov (United States)

    Cerezo Espinosa, Cristina; Nieto Caballero, Sergio; Juguera Rodríguez, Laura; Castejón-Mochón, José Francisco; Segura Melgarejo, Francisca; Sánchez Martínez, Carmen María; López López, Carmen Amalia; Pardo Ríos, Manuel

    2018-02-01

    To compare secondary students' learning of basic life support (BLS) theory and the use of an automated external defibrillator (AED) through face-to-face classroom instruction versus educational video instruction. A total of 2225 secondary students from 15 schools were randomly assigned to one of the following 5 instructional groups: 1) face-to-face instruction with no audiovisual support, 2) face-to-face instruction with audiovisual support, 3) audiovisual instruction without face-to-face instruction, 4) audiovisual instruction with face-to-face instruction, and 5) a control group that received no instruction. The students took a test of BLS and AED theory before instruction, immediately after instruction, and 2 months later. The median (interquartile range) scores overall were 2.33 (2.17) at baseline and 5.33 (4.66) immediately after instruction. No differences between face-to-face and audiovisual instruction for learning BLS and AED theory were found in secondary school students either immediately after instruction or 2 months later.

  6. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    Science.gov (United States)

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  7. Dissociable brain systems mediate vicarious learning of stimulus-response and action-outcome contingencies.

    Science.gov (United States)

    Liljeholm, Mimi; Molloy, Ciara J; O'Doherty, John P

    2012-07-18

    Two distinct strategies have been suggested to support action selection in humans and other animals on the basis of experiential learning: a goal-directed strategy that generates decisions based on the value and causal antecedents of action outcomes, and a habitual strategy that relies on the automatic elicitation of actions by environmental stimuli. In the present study, we investigated whether a similar dichotomy exists for actions that are acquired vicariously, through observation of other individuals rather than through direct experience, and assessed whether these strategies are mediated by distinct brain regions. We scanned participants with functional magnetic resonance imaging while they performed an observational learning task designed to encourage either goal-directed encoding of the consequences of observed actions, or a mapping of observed actions to conditional discriminative cues. Activity in different parts of the action observation network discriminated between the two conditions during observational learning and correlated with the degree of insensitivity to outcome devaluation in subsequent performance. Our findings suggest that, in striking parallel to experiential learning, neural systems mediating the observational acquisition of actions may be dissociated into distinct components: a goal-directed, outcome-sensitive component and a less flexible stimulus-response component.

  8. Online Dissection Audio-Visual Resources for Human Anatomy: Undergraduate Medical Students' Usage and Learning Outcomes

    Science.gov (United States)

    Choi-Lundberg, Derek L.; Cuellar, William A.; Williams, Anne-Marie M.

    2016-01-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection…

  9. Burnout, vicarious traumatization and its prevention.

    Science.gov (United States)

    Pross, Christian

    2006-01-01

    Previous studies on burnout and vicarious traumatization are reviewed and summarized with a list of signs and symptoms. From the author's own observations, two histories of caregivers working with torture survivors are described which exemplify the risk, implications, and consequences of secondary trauma. Contributing factors in the social and political framework in which caregivers operate are analyzed and possible means of prevention suggested, particularly focusing on the conflict of roles when providing evaluations on trauma victims for health and immigration authorities. Caregivers working with victims of violence carry a high risk of suffering from burnout and vicarious traumatization unless preventive factors are considered, such as: self-care, solid professional training in psychotherapy, therapeutic self-awareness, regular self-examination through collegial and external supervision, limiting caseload, continuing professional education and learning about new concepts in trauma, occasional research sabbaticals, keeping a balance between empathy and a proper professional distance to clients, and protecting oneself against being misled by clients with fictitious PTSD. An institutional setting should be provided in which the roles of therapists and evaluators are separated. Important factors in burnout and vicarious traumatization are the lack of social recognition for caregivers and the financial and legal outsider status of many centers. Therefore, politicians and social insurance carriers should be urged to integrate facilities for traumatized refugees into the general health care system, and centers should work on more alliances with the medical mainstream and academic medicine.

  10. Development of vicarious trial-and-error behavior in odor discrimination learning in the rat: relation to hippocampal function?

    Science.gov (United States)

    Hu, D; Griesbach, G; Amsel, A

    1997-06-01

    Previous work from our laboratory has suggested that hippocampal electrolytic lesions result in a deficit in simultaneous, black-white discrimination learning and reduce the frequency of vicarious trial-and-error (VTE) at a choice-point. VTE is a term Tolman used to describe the rat's conflict-like behavior, moving its head from one stimulus to the other at a choice point, and has been proposed as a major nonspatial feature of hippocampal function in both visual and olfactory discrimination learning. Simultaneous odor discrimination and VTE behavior were examined at three different ages. The results were that 16-day-old pups made fewer VTEs and learned much more slowly than 30- and 60-day-olds, a finding in accord with levels of hippocampal maturity in the rat.

  11. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by an audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than on either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  12. Vicarious resilience and vicarious traumatisation: Experiences of working with refugees and asylum seekers in South Australia.

    Science.gov (United States)

    Puvimanasinghe, Teresa; Denson, Linley A; Augoustinos, Martha; Somasundaram, Daya

    2015-12-01

    The negative psychological impacts of working with traumatised people are well documented and include vicarious traumatisation (VT): the cumulative effect of identifying with clients' trauma stories, which negatively impacts service providers' memory, emotions, thoughts, and worldviews. More recently, the concept of vicarious resilience (VR) has also been identified: the strength, growth, and empowerment experienced by trauma workers as a consequence of their work. VR includes service providers' awareness and appreciation of their clients' capacity to grow, maintaining hope for change, as well as learning from and reassessing personal problems in the light of clients' stories of perseverance, strength, and growth. This study aimed at exploring the experiences of mental health, physical healthcare, and settlement workers caring for refugees and asylum seekers in South Australia. Using a qualitative method (data-based thematic analysis) to collect and analyse 26 semi-structured face-to-face interviews, we identified four prominent and recurring themes emanating from the data: VT, VR, work satisfaction, and cultural flexibility. These findings, among the first to describe both VT and VR in Australians working with refugee people, have important implications for policy, service quality, service providers' wellbeing, and refugee clients' lives. © The Author(s) 2015.

  13. Concern for others leads to vicarious optimism

    OpenAIRE

    Kappes, A.; Faber, N. S.; Kahane, G.; Savulescu, J.; Crockett, M. J.

    2018-01-01

    An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood...

  14. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  15. Just watching the game ain't enough: striatal fMRI reward responses to successes and failures in a video game during active and vicarious playing.

    Science.gov (United States)

    Kätsyri, Jari; Hari, Riitta; Ravaja, Niklas; Nummenmaa, Lauri

    2013-01-01

    Although the multimodal stimulation provided by modern audiovisual video games is pleasing by itself, the rewarding nature of video game playing depends critically also on the players' active engagement in the gameplay. The extent to which active engagement influences dopaminergic brain reward circuit responses remains unsettled. Here we show that striatal reward circuit responses elicited by successes (wins) and failures (losses) in a video game are stronger during active than vicarious gameplay. Eleven healthy males both played a competitive first-person tank shooter game (active playing) and watched a pre-recorded gameplay video (vicarious playing) while their hemodynamic brain activation was measured with 3-tesla functional magnetic resonance imaging (fMRI). Wins and losses were paired with symmetrical monetary rewards and punishments during active and vicarious playing so that the external reward context remained identical during both conditions. Brain activation was stronger in the orbitomedial prefrontal cortex (omPFC) during winning than losing, both during active and vicarious playing. In contrast, both wins and losses suppressed activations in the midbrain and striatum during active playing; however, the striatal suppression, particularly in the anterior putamen, was more pronounced during loss than win events. Sensorimotor confounds related to joystick movements did not account for the results. Self-ratings indicated losing to be more unpleasant during active than vicarious playing. Our findings demonstrate striatum to be selectively sensitive to self-acquired rewards, in contrast to frontal components of the reward circuit that process both self-acquired and passively received rewards. We propose that the striatal responses to repeated acquisition of rewards that are contingent on game related successes contribute to the motivational pull of video-game playing.

  17. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    Science.gov (United States)

    I Karipidis, Iliana; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas, and with phonological awareness in left temporal areas. In correspondence, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short training session initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and the phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

  18. Vicarious resilience in sexual assault and domestic violence advocates.

    Science.gov (United States)

    Frey, Lisa L; Beesley, Denise; Abbott, Deah; Kendrick, Elizabeth

    2017-01-01

    There is little research related to sexual assault and domestic violence advocates' experiences, with the bulk of the literature focused on stressors and systemic barriers that negatively impact efforts to assist survivors. However, advocates participating in these studies have also emphasized the positive impact they experience consequent to their work. This study explores the positive impact. Vicarious resilience, personal trauma experiences, peer relational quality, and perceived organizational support in advocates (n = 222) are examined. Also, overlap among the conceptual components of vicarious resilience is explored. The first set of multiple regressions showed that personal trauma experiences and peer relational health predicted compassion satisfaction and vicarious posttraumatic growth, with organizational support predicting only compassion satisfaction. The second set of multiple regressions showed that (a) there was significant shared variance between vicarious posttraumatic growth and compassion satisfaction; (b) after accounting for vicarious posttraumatic growth, organizational support accounted for significant variance in compassion satisfaction; and (c) after accounting for compassion satisfaction, peer relational health accounted for significant variance in vicarious posttraumatic growth. Results suggest that it may be more meaningful to conceptualize advocates' personal growth related to their work through the lens of a multidimensional construct such as vicarious resilience. Organizational strategies promoting vicarious resilience (e.g., shared organizational power, training components) are offered, and the value to trauma-informed care of fostering advocates' vicarious resilience is discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Group Vicarious Desensitization of Test Anxiety.

    Science.gov (United States)

    Altmaier, Elizabeth Mitchell; Woodward, Margaret

    1981-01-01

    Studied test-anxious college students (N=43) who received either vicarious desensitization, study skills training, or both treatments; there was also a no-treatment control condition. Self-report measures indicated that vicarious desensitization resulted in lower test and trait anxiety than study skills training alone or no treatment. (Author)

  20. Thomas Vicary, barber-surgeon.

    Science.gov (United States)

    Thomas, Duncan P

    2006-05-01

    An Act of Parliament in 1540 uniting the barbers and surgeons to form the Barber-Surgeons' Company represented an important foundation stone towards better surgery in England. Thomas Vicary, who played a pivotal role in promoting this union, was a leading surgeon in London in the middle of the 16th century. While Vicary made no direct contribution to surgical knowledge, he should be remembered primarily as one who contributed much towards the early organization and teaching of surgery and to the consequent benefits that flowed from this improvement.

  1. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    This article analyzes the possibilities of presenting history audiovisually in a society in which audiovisual media have progressively gained prominence. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language, and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  2. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Audiovisual capture happens when information across modalities is fused into a coherent percept. Ambiguous multimodal stimuli have the potential to be powerful tools for observing such effects. We used such stimuli, made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interference between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before the bistable decision? We argue that this is not the case, as the interference had slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between "what" competes and "how" it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations ("what" competes), quite independent from the causal mechanisms of perceptual switches ("how" it competes). This framework predicts that audiovisual capture can bias bistability, especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  3. Audiovisual Script Writing.

    Science.gov (United States)

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  4. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    Science.gov (United States)

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience.

  5. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    Science.gov (United States)

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  7. Vicarious learning of children's social-anxiety-related fear beliefs and emotional Stroop bias.

    Science.gov (United States)

    Askew, Chris; Hagel, Anna; Morgan, Julie

    2015-08-01

    Models of social anxiety suggest that negative social experiences contribute to the development of social anxiety, and this is supported by self-report research. However, there is relatively little experimental evidence for the effects of learning experiences on social cognitions. The current study examined the effect of observing a social performance situation with a negative outcome on children's (8 to 11 years old) fear-related beliefs and cognitive processing. Two groups of children were each shown 1 of 2 animated films of a person trying to score in basketball while being observed by others; in 1 film the outcome was negative, and in the other it was neutral. Children's fear-related beliefs about performing in front of others were measured before and after the film, and children were asked to complete an emotional Stroop task. Results showed that social fear beliefs increased for children who saw the negative social performance film. In addition, these children showed an emotional Stroop bias for social-anxiety-related words compared to children who saw the neutral film. The findings have implications for our understanding of social anxiety disorder and suggest that vicarious learning experiences in childhood may contribute to the development of social anxiety. (c) 2015 APA, all rights reserved.

  8. Flexible goal imitation: Vicarious feedback influences stimulus-response binding by observation.

    Science.gov (United States)

    Giesen, Carina; Scherdin, Kerstin; Rothermund, Klaus

    2017-06-01

    This study investigated whether vicarious feedback influences binding processes between stimuli and observed responses. Two participants worked together in a shared color categorization task, taking the roles of actor and observer in turns. During a prime trial, participants saw a word while observing the other person executing a specific response. Automatic binding of words and observed responses into stimulus-response (S-R) episodes was assessed via word repetition effects in a subsequent probe trial, in which either the same (compatible) or a different (incompatible) response had to be executed by the participants in response to the same or a different word. Results showed that vicarious prime feedback (i.e., the feedback that the other participant received for her or his response in the prime) modulated S-R retrieval effects: after positive vicarious prime feedback, typical S-R retrieval effects emerged (i.e., performance benefits for stimulus repetition probes with compatible responses, but performance costs for stimulus repetition probes with incompatible responses). Notably, however, S-R retrieval effects were reversed after negative vicarious prime feedback (meaning that stimulus repetition in the probe resulted in performance costs if prime and probe responses were compatible, and in performance benefits for incompatible responses). Findings are consistent with a flexible goal imitation account, according to which imitation is based on an interpretative and therefore feedback-sensitive reconstruction of action goals from observed movements. In concert with earlier findings, these data support the conclusion that transient S-R binding and retrieval processes are involved in social learning phenomena.

  9. Vicarious experience affects patients' treatment preferences for depression.

    Directory of Open Access Journals (Sweden)

    Seth A Berkowitz

    Full Text Available Depression is common in primary care but often under-treated. Personal experiences with depression can affect adherence to therapy, but the effect of vicarious experience is unstudied. We sought to evaluate the association between a patient's vicarious experiences with depression (those of friends or family and treatment preferences for depressive symptoms.We sampled 1054 English and/or Spanish speaking adult subjects from July through December 2008, randomly selected from the 2008 California Behavioral Risk Factor Survey System, regarding depressive symptoms and treatment preferences. We then constructed a unidimensional scale using item analysis that reflects attitudes about antidepressant pharmacotherapy. This became the dependent variable in linear regression analyses to examine the association between vicarious experiences and treatment preferences for depressive symptoms.Our sample was 68% female, 91% white, and 13% Hispanic. Age ranged from 18-94 years. Mean PHQ-9 score was 4.3; 14.5% of respondents had a PHQ-9 score >9.0, consistent with active depressive symptoms. Analyses controlling for current depression symptoms and socio-demographic factors found that in patients both with (coefficient 1.08, p = 0.03 and without (coefficient 0.77, p = 0.03 a personal history of depression, having a vicarious experience (family and friend, respectively with depression is associated with a more favorable attitude towards antidepressant medications.Patients with vicarious experiences of depression express more acceptance of pharmacotherapy. Conversely, patients lacking vicarious experiences of depression have more negative attitudes towards antidepressants. When discussing treatment with patients, clinicians should inquire about vicarious experiences of depression. This information may identify patients at greater risk for non-adherence and lead to more tailored patient-specific education about treatment.

  10. Vicarious liability and criminal prosecutions for regulatory offences.

    Science.gov (United States)

    Freckelton, Ian

    2006-08-01

    The parameters of vicarious liability of corporations for the conduct of their employees, especially in the context of provisions that criminalise breaches of regulatory provisions, are complex. The decision of Bell J in ABC Developmental Learning Centres Pty Ltd v Wallace [2006] VSC 171 raises starkly the potential unfairness of an approach which converts criminal liability of corporations too readily into absolute liability, irrespective of the absence of any form of proven culpability. The author queries whether fault should not be brought back in some form to constitute a determinant of criminal liability for corporations.

  11. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and viewer are addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  12. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    Science.gov (United States)

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  13. Memory and learning with rapid audiovisual sequences

    Science.gov (United States)

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  16. Left prefrontal activity reflects the ability of vicarious fear learning: a functional near-infrared spectroscopy study.

    Science.gov (United States)

    Ma, Qingguo; Huang, Yujing; Wang, Lei

    2013-01-01

    Fear could be acquired indirectly via social observation. However, it remains unclear which cortical substrate activities are involved in vicarious fear transmission. The present study was to examine empathy-related processes during fear learning by-proxy and to examine the activation of prefrontal cortex by using functional near-infrared spectroscopy. We simultaneously measured participants' hemodynamic responses and skin conductance responses when they were exposed to a movie. In this movie, a demonstrator (i.e., another human being) was receiving a classical fear conditioning. A neutral colored square paired with shocks (CS(shock)) and another colored square paired with no shocks (CS(no-shock)) were randomly presented in front of the demonstrator. Results showed that increased concentration of oxygenated hemoglobin in left prefrontal cortex was observed when participants watched a demonstrator seeing CS(shock) compared with that exposed to CS(no-shock). In addition, enhanced skin conductance responses showing a demonstrator's aversive experience during learning object-fear association were observed. The present study suggests that left prefrontal cortex, which may reflect speculation of others' mental state, is associated with social fear transmission.

  17. Music evokes vicarious emotions in listeners.

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Okanoya, Kazuo

    2014-01-01

    Why do we listen to sad music? We seek to answer this question using a psychological approach. It is possible to distinguish perceived emotions from those that are experienced. Therefore, we hypothesized that, although sad music is perceived as sad, listeners actually feel (experience) pleasant emotions concurrent with sadness. This hypothesis was supported, which led us to question whether sadness in the context of art is truly an unpleasant emotion. While experiencing sadness may be unpleasant, it may also be somewhat pleasant when experienced in the context of art, for example, when listening to sad music. We consider musically evoked emotion vicarious, as we are not threatened when we experience it, in the way that we can be during the course of experiencing emotion in daily life. When we listen to sad music, we experience vicarious sadness. In this review, we propose two sides to sadness by suggesting vicarious emotion.

  18. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
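The record does not include the authors' simulation code; as a loose sketch of the general mechanism it describes (distributional learning over paired cues, not the paper's actual models), a two-component Gaussian mixture fitted by EM can recover category structure from synthetic auditory and visual cue pairs:

```python
import numpy as np

def fit_gmm(X, k=2, iters=200):
    """Fit a k-component diagonal-covariance GMM to X via EM."""
    n, d = X.shape
    # Initialise component means at spread-out quantiles of the data.
    means = np.quantile(X, np.linspace(0.25, 0.75, k), axis=0)
    var = np.ones((k, d))
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        diff = X[:, None, :] - means[None, :, :]                     # (n, k, d)
        log_lik = -0.5 * ((diff ** 2) / var + np.log(2 * np.pi * var)).sum(axis=2)
        log_post = np.log(weights) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)              # stability
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        var = np.maximum((resp.T @ X ** 2) / nk[:, None] - means ** 2, 1e-6)
    return means, var, weights, resp

# Hypothetical bimodal tokens: each row = (auditory cue, visual cue),
# e.g. a voice-onset-time-like value and a lip-aperture-like value.
rng = np.random.default_rng(1)
cat_a = rng.normal([-1.0, -1.0], 0.3, size=(200, 2))
cat_b = rng.normal([+1.0, +1.0], 0.3, size=(200, 2))
X = np.vstack([cat_a, cat_b])

means, var, weights, resp = fit_gmm(X)
labels = resp.argmax(axis=1)
```

With well-separated cue distributions the two recovered means land near (-1, -1) and (+1, +1); increasing the overlap of one cue's distribution forces the model to rely on the other cue, which is the distributional-learning intuition the abstract appeals to.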

  19. Campaign for vicarious calibration of SumbandilaSat in Argentina

    CSIR Research Space (South Africa)

    Vhengani, LM

    2011-07-01

    ... assessment, they are also calibrated post-launch. Various post-launch techniques exist, including cross-sensor, solar, lunar and vicarious calibration. Vicarious calibration relies on in-situ measurements of surface reflectance and atmospheric transmittance...

  20. Vicarious pain experiences while observing another in pain: an experimental approach

    Directory of Open Access Journals (Sweden)

    Sophie Vandenbroucke

    2013-06-01

    Objective: This study aimed at developing an experimental paradigm to assess vicarious pain experiences. We further explored the putative moderating role of observer's characteristics such as hypervigilance for pain and dispositional empathy. Methods: Two experiments are reported using a similar procedure. Undergraduate students were selected based upon whether they reported vicarious pain in daily life, and categorized into a pain responder group or a comparison group. Participants were presented a series of videos showing hands being pricked whilst occasionally receiving pricking (electrocutaneous) stimuli themselves. In congruent trials, pricking and visual stimuli were applied to the same spatial location. In incongruent trials, pricking and visual stimuli were in the opposite spatial location. Participants were required to report on which location they felt a pricking sensation. Of primary interest was the effect of viewing another in pain upon vicarious pain errors, i.e., the number of trials in which an illusionary sensation was reported. Furthermore, we explored the effect of individual differences in hypervigilance to pain, dispositional empathy and the rubber hand illusion (RHI) upon vicarious pain errors. Results: Results of both experiments indicated that the number of vicarious pain errors was overall low. In line with expectations, the number of vicarious pain errors was higher in the pain responder group than in the comparison group. Self-reported hypervigilance for pain lowered the probability of reporting vicarious pain errors in the pain responder group, but dispositional empathy and the RHI did not. Conclusion: Our paradigm allows measuring vicarious pain experiences in students. However, the prevalence of vicarious experiences of pain is low, and only a small percentage of participants display the phenomenon. It remains however unknown which variables affect its occurrence.

  1. Behavioural and neurobiological foundations of vicarious processing

    OpenAIRE

    Lockwood, P. L.

    2015-01-01

    Empathy can be broadly defined as the ability to vicariously experience and to understand the affect of other people. This thesis will argue that such a capacity for vicarious processing is fundamental for successful social-cognitive ability and behaviour. To this end, four outstanding research questions regarding the behavioural and neural basis of empathy are addressed 1) can empathy be dissected into different components and do these components differentially explain individual differences...

  2. Multi-sensory learning and learning to read.

    Science.gov (United States)

    Blomert, Leo; Froyen, Dries

    2010-09-01

    The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar audiovisual objects.

  3. Explaining Self and Vicarious Reactance: A Process Model Approach.

    Science.gov (United States)

    Sittenthaler, Sandra; Jonas, Eva; Traut-Mattausch, Eva

    2016-04-01

    Research shows that people experience a motivational state of agitation known as reactance when they perceive restrictions to their freedoms. However, research has yet to show whether people experience reactance if they merely observe the restriction of another person's freedom. In Study 1, we activated realistic vicarious reactance in the laboratory. In Study 2, we compared people's responses with their own and others' restrictions and found the same levels of experienced reactance and behavioral intentions as well as aggressive tendencies. We did, however, find differences in physiological arousal: Physiological arousal increased quickly after participants imagined their own freedom being restricted, but arousal in response to imagining a friend's freedom being threatened was weaker and delayed. In line with the physiological data, Study 3's results showed that self-restrictions aroused more emotional thoughts than vicarious restrictions, which induced more cognitive responses. Furthermore, in Study 4a, a cognitive task affected only the cognitive process behind vicarious reactance. In contrast, in Study 4b, an emotional task affected self-reactance but not vicarious reactance. We propose a process model explaining the emotional and cognitive processes of self- and vicarious reactance. © 2016 by the Society for Personality and Social Psychology, Inc.

  4. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    Science.gov (United States)

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was subsequently shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in performance in both groups over the duration of the three MCQs. The intervention group significantly outperformed their control group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: This study appraises an intervention in medical education. Kirkpatrick Level 2b (modification of knowledge). Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Copyright for audiovisual work and analysis of websites offering audiovisual works

    OpenAIRE

    Chrastecká, Nicolle

    2014-01-01

    This Bachelor's thesis deals with the matter of audiovisual piracy. It discusses the question of audiovisual piracy being caused not by the wrong interpretation of law but by the lack of competitiveness among websites with legal audiovisual content. This thesis questions the quality of legal interpretation in the matter of audiovisual piracy and focuses on its sufficiency. It analyses the responsibility of website providers, providers of the illegal content, the responsibility of illegal cont...

  6. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against the changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
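Quadratic mutual information has a closed form when the densities are modelled with Gaussian (Parzen) kernels. The sketch below uses the Euclidean-distance formulation from information-theoretic learning with a fixed kernel bandwidth, rather than the adaptive bandwidth the authors describe, and applies it to hypothetical 1-D audio and visual features; it is an illustration of the quantity being computed, not the paper's estimator:

```python
import numpy as np

def gauss(d, two_sigma2):
    """1-D Gaussian kernel evaluated at pairwise differences d."""
    return np.exp(-d ** 2 / (2 * two_sigma2)) / np.sqrt(2 * np.pi * two_sigma2)

def quadratic_mutual_information(x, y, sigma=0.5):
    """Euclidean-distance QMI between 1-D samples x and y.

    Plugs Parzen estimates with Gaussian kernels of width sigma into
    the integral of (p(x,y) - p(x)p(y))^2; convolving two such kernels
    yields a single Gaussian of variance 2*sigma^2, giving a closed form.
    """
    gx = gauss(x[:, None] - x[None, :], 2 * sigma ** 2)   # (n, n)
    gy = gauss(y[:, None] - y[None, :], 2 * sigma ** 2)
    v_joint = (gx * gy).mean()                             # joint vs joint
    v_marg = gx.mean() * gy.mean()                         # marginals vs marginals
    v_cross = (gx.mean(axis=1) * gy.mean(axis=1)).mean()   # joint vs marginals
    return v_joint - 2 * v_cross + v_marg

# Hypothetical features: an audio energy track, one visual track that
# co-varies with it (the speaker) and one that does not (background).
rng = np.random.default_rng(0)
audio = rng.normal(size=300)
visual_speaker = audio + 0.1 * rng.normal(size=300)
visual_background = rng.normal(size=300)

qmi_dep = quadratic_mutual_information(audio, visual_speaker)
qmi_ind = quadratic_mutual_information(audio, visual_background)
```

Because the estimator is an integrated squared difference it is non-negative, and it is markedly larger for the co-varying pair, which is what lets the segmentation favour the speaker's region without a heuristic threshold.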

  7. Lecture Hall and Learning Design: A Survey of Variables, Parameters, Criteria and Interrelationships for Audio-Visual Presentation Systems and Audience Reception.

    Science.gov (United States)

    Justin, J. Karl

    Variables and parameters affecting architectural planning and audiovisual systems selection for lecture halls and other learning spaces are surveyed. Interrelationships of factors are discussed, including--(1) design requirements for modern educational techniques as differentiated from cinema, theater or auditorium design, (2) general hall…

  8. Risk of vicarious trauma in nursing research: a focused mapping review and synthesis.

    Science.gov (United States)

    Taylor, Julie; Bradbury-Jones, Caroline; Breckenridge, Jenna P; Jones, Christine; Herber, Oliver Rudolf

    2016-10-01

    To provide a snapshot of how vicarious trauma is considered within the published nursing research literature. Vicarious trauma (secondary traumatic stress) has been the focus of attention in nursing practice for many years. The most pertinent areas to invoke vicarious trauma in research have been suggested as abuse/violence and death/dying. What is not known is how researchers account for the risks of vicarious trauma in research. Focused mapping review and synthesis. Empirical studies meeting criteria for abuse/violence or death/dying in relevant Scopus-ranked top nursing journals (n = 6), January 2009 to December 2014. Relevant papers were scrutinised for the extent to which researchers discussed the risk of vicarious trauma. Aspects of the studies were mapped systematically to a pre-defined template, allowing patterns and gaps in authors' reporting to be determined. These were synthesised into a coherent profile of current reporting practices and from this, a new conceptualisation seeking to anticipate and address the risk of vicarious trauma was developed. Two thousand five hundred and three papers were published during the review period, of which 104 met the inclusion criteria. Studies were distributed evenly by method (52 qualitative; 51 quantitative; one mixed methods) and by focus (54 abuse/violence; 50 death/dying). The majority of studies (98) were carried out in adult populations. Only two papers reported on vicarious trauma. The conceptualisation of vicarious trauma takes account of both sensitivity of the substantive data collected, and closeness of those involved with the research. This might assist researchers in designing ethical and protective research and foreground the importance of managing risks of vicarious trauma. Vicarious trauma is not well considered in research into clinically important topics. Our proposed framework allows for consideration of these so that precautionary measures can be put in place to minimise harm to staff.

  9. Late Cretaceous vicariance in Gondwanan amphibians.

    Directory of Open Access Journals (Sweden)

    Ines Van Bocxlaer

    Overseas dispersals are often invoked when Southern Hemisphere terrestrial and freshwater organism phylogenies do not fit the sequence or timing of Gondwana fragmentation. We used dispersal-vicariance analyses and molecular timetrees to show that two species-rich frog groups, Microhylidae and Natatanura, display congruent patterns of spatial and temporal diversification among Gondwanan plates in the Late Cretaceous, long after the presumed major tectonic break-up events. Because amphibians are notoriously salt-intolerant, these analogies are best explained by simultaneous vicariance, rather than by oceanic dispersal. Hence our results imply Late Cretaceous connections between most adjacent Gondwanan landmasses, an essential concept for biogeographic and palaeomap reconstructions.

  10. Vicarious retribution: the role of collective blame in intergroup aggression.

    Science.gov (United States)

    Lickel, Brian; Miller, Norman; Stenstrom, Douglas M; Denson, Thomas F; Schmader, Toni

    2006-01-01

    We provide a new framework for understanding 1 aspect of aggressive conflict between groups, which we refer to as vicarious retribution. Vicarious retribution occurs when a member of a group commits an act of aggression toward the members of an outgroup for an assault or provocation that had no personal consequences for him or her but which did harm a fellow ingroup member. Furthermore, retribution is often directed at outgroup members who, themselves, were not the direct causal agents in the original attack against the person's ingroup. Thus, retribution is vicarious in that neither the agent of retaliation nor the target of retribution were directly involved in the original event that precipitated the intergroup conflict. We describe how ingroup identification, outgroup entitativity, and other variables, such as group power, influence vicarious retribution. We conclude by considering a variety of conflict reduction strategies in light of this new theoretical framework.

  11. Online dissection audio-visual resources for human anatomy: Undergraduate medical students' usage and learning outcomes.

    Science.gov (United States)

    Choi-Lundberg, Derek L; Cuellar, William A; Williams, Anne-Marie M

    2016-11-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection sessions, representing at most 58% ± 20 of assigned dissectors. Approximately 50% of students accessed all available DAVR by the end of semester, while 10% accessed none. Ninety percent of survey respondents (response rate 58%) generally agreed that DAVR improved their preparation for and learning from dissection when used. Of several learning resources, only DAVR usage had a significant positive correlation (P = 0.002) with feeling prepared for dissection. Results on cadaveric anatomy practical examination questions in year 2 (Y2) and year 3 (Y3) cohorts were 3.9% (P learning outcomes of more students. Anat Sci Educ 9: 545-554. © 2016 American Association of Anatomists.

  12. Vicarious motor activation during action perception: beyond correlational evidence

    Directory of Open Access Journals (Sweden)

    Alessio Avenanti

    2013-05-01

    Neurophysiological and imaging studies have shown that seeing the actions of other individuals brings about the vicarious activation of motor regions involved in performing the same actions. While this suggests a simulative mechanism mediating the perception of others' actions, one cannot use such evidence to make inferences about the functional significance of vicarious activations. Indeed, a central aim in social neuroscience is to comprehend how vicarious activations allow the understanding of other people's behavior, and this requires the use of stimulation or lesion methods to establish causal links from brain activity to cognitive functions. In the present work we review studies investigating the effects of transient manipulations of brain activity or stable lesions in the motor system on individuals' ability to perceive and understand the actions of others. We conclude there is now compelling evidence that neural activity in the motor system is critical for such cognitive ability. More research using causal methods, however, is needed in order to disclose the limits and the conditions under which vicarious activations are required to perceive and understand actions of others as well as their emotions and somatic feelings.

  13. School Building Design and Audio-Visual Resources.

    Science.gov (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  14. Hybrid e-learning tool TransLearning

    NARCIS (Netherlands)

    Meij, van der Marjoleine G.; Kupper, Frank; Beers, P.J.; Broerse, Jacqueline E.W.

    2016-01-01

    E-learning and storytelling approaches can support informal vicarious learning within geographically widely distributed multi-stakeholder collaboration networks. This case study evaluates the hybrid e-learning and video-storytelling approach ‘TransLearning’ by investigating how its storytelling

  15. The efficacy of an audiovisual aid in teaching the Neo-Classical ...

    African Journals Online (AJOL)

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, ...

  16. The Picmonic(®) Learning System: enhancing memory retention of medical sciences, using an audiovisual mnemonic Web-based learning platform.

    Science.gov (United States)

    Yang, Adeel; Goel, Hersh; Bryan, Matthew; Robertson, Ron; Lim, Jane; Islam, Shehran; Speicher, Mark R

    2014-01-01

    Medical students are required to retain vast amounts of medical knowledge on the path to becoming physicians. To address this challenge, multimedia Web-based learning resources have been developed to supplement traditional text-based materials. The Picmonic(®) Learning System (PLS; Picmonic, Phoenix, AZ, USA) is a novel multimedia Web-based learning platform that delivers audiovisual mnemonics designed to improve memory retention of medical sciences. A single-center, randomized, subject-blinded, controlled study was conducted to compare the PLS with traditional text-based material for retention of medical science topics. Subjects were randomly assigned to use two different types of study materials covering several diseases. Subjects randomly assigned to the PLS group were given audiovisual mnemonics along with text-based materials, whereas subjects in the control group were given the same text-based materials with key terms highlighted. The primary endpoints were the differences in performance on immediate, 1 week, and 1 month delayed free-recall and paired-matching tests. The secondary endpoints were the difference in performance on a 1 week delayed multiple-choice test and self-reported satisfaction with the study materials. Differences were calculated using unpaired two-tailed t-tests. PLS group subjects demonstrated improvements of 65%, 161%, and 208% compared with control group subjects on free-recall tests conducted immediately, 1 week, and 1 month after study of materials, respectively. The results of performance on paired-matching tests showed an improvement of up to 331% for PLS group subjects. PLS group subjects also performed 55% greater than control group subjects on a 1 week delayed multiple choice test requiring higher-order thinking. The differences in test performance between the PLS group subjects and the control group subjects were statistically significant (P<0.001), and the PLS group subjects reported higher overall satisfaction with the
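    The group comparisons in the record above rest on unpaired two-tailed t-tests. As a minimal sketch of that test with Python's standard library — the scores below are invented for illustration, not the study's data:

    ```python
    import statistics

    # Hypothetical free-recall scores (illustrative only), mimicking the design:
    # PLS group vs. control group, compared with an unpaired Student's t-test.
    pls = [14, 17, 15, 18, 16, 19, 15, 17]
    control = [9, 11, 8, 12, 10, 9, 11, 10]

    n1, n2 = len(pls), len(control)
    m1, m2 = statistics.mean(pls), statistics.mean(control)
    v1, v2 = statistics.variance(pls), statistics.variance(control)

    # Pooled variance estimate (equal-variance assumption); the resulting t is
    # compared to a t distribution with n1 + n2 - 2 degrees of freedom for the
    # two-tailed P value.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    print(round(t, 2))
    ```

    scipy.stats.ttest_ind wraps this computation, returning both the t statistic and the two-tailed P value directly.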

  17. My partner's stories: relationships between personal and vicarious life stories within romantic couples.

    Science.gov (United States)

    Panattoni, Katherine; Thomsen, Dorthe Kirkegaard

    2018-06-12

    In this paper, we examined relationships and differences between personal and vicarious life stories, i.e., the life stories one knows of others. Personal and vicarious life stories of both members of 51 young couples (102 participants), based on McAdams' Life Story Interview (2008), were collected. We found significant positive relationships between participants' personal and vicarious life stories on agency and communion themes and redemption sequences. We also found significant positive relationships between participants' vicarious life stories about their partners and those partners' personal life stories on agency and communion, but not redemption. Furthermore, these relationships were not explained by similarity between couples' two personal life stories, as no associations were found between couples' personal stories on agency, communion and redemption. These results suggest that the way we construct the vicarious life stories of close others may reflect how we construct our personal life stories.

  18. Active, Passive, and Vicarious Desensitization

    Science.gov (United States)

    Denney, Douglas R.

    1974-01-01

    Two variations of desensitization therapy for reducing test anxiety were studied: active desensitization, in which the client describes his visualizations of the scenes, and vicarious desensitization, in which the client merely observes the desensitization treatment of another test-anxious client. The relaxation treatment which emphasized application…

  19. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how they can be used in specific (social, pedagogical, etc.) contexts and what their potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical

  20. Beyond Vicary's fantasies: The impact of subliminal priming and brand choice

    NARCIS (Netherlands)

    Karremans, J.C.T.M.; Stroebe, W.; Claus, J.

    2006-01-01

    With his claim to have increased sales of Coca Cola and popcorn in a movie theatre through subliminal messages flashed on the screen, James Vicary raised the possibility of subliminal advertising. Nobody has ever replicated Vicary's findings and his study was a hoax. This article reports two

  1. Observing the restriction of another person: Vicarious reactance and the role of self-construal and culture

    Directory of Open Access Journals (Sweden)

    Sandra Sittenthaler

    2015-08-01

    Psychological reactance occurs in response to threats posed to perceived behavioral freedoms. Research has shown that people can also experience vicarious reactance. They feel restricted in their own freedom even though they are not personally involved in the restriction but only witness the situation. The phenomenon of vicarious reactance is especially interesting when considered in a cross-cultural context because the culture-specific self-construal plays a crucial role in understanding people’s response to self- and vicariously experienced restrictions. Previous studies and our pilot study (N = 197) showed that people with a collectivistic cultural background show higher vicarious reactance compared to people with an individualistic cultural background. But does it matter whether people experience the vicarious restriction for an in-group or an out-group member? Differentiating vicarious-in-group and vicarious-out-group restrictions, Study 1 (N = 159) suggests that people with a more interdependent self-construal show stronger vicarious reactance only with regard to in-group restrictions but not with regard to out-group restrictions. In contrast, participants with a more independent self-construal experience stronger reactance when being self-restricted compared to vicariously-restricted. Study 2 (N = 180) replicates this pattern conceptually with regard to individualistic and collectivistic cultural background groups. Additionally, participants’ behavioral intentions show the same pattern of results. Moreover, a mediation analysis demonstrates that cultural differences in behavioral intentions could be explained through people’s self-construal differences. Thus, the present studies provide new insights and show consistent evidence for vicarious reactance depending on participants’ culturally determined self-construal.

  2. Observing the restriction of another person: vicarious reactance and the role of self-construal and culture.

    Science.gov (United States)

    Sittenthaler, Sandra; Traut-Mattausch, Eva; Jonas, Eva

    2015-01-01

    Psychological reactance occurs in response to threats posed to perceived behavioral freedoms. Research has shown that people can also experience vicarious reactance. They feel restricted in their own freedom even though they are not personally involved in the restriction but only witness the situation. The phenomenon of vicarious reactance is especially interesting when considered in a cross-cultural context because the culture-specific self-construal plays a crucial role in understanding people's response to self- and vicariously experienced restrictions. Previous studies and our pilot study (N = 197) showed that people with a collectivistic cultural background show higher vicarious reactance compared to people with an individualistic cultural background. But does it matter whether people experience the vicarious restriction for an in-group or an out-group member? Differentiating vicarious-in-group and vicarious-out-group restrictions, Study 1 (N = 159) suggests that people with a more interdependent self-construal show stronger vicarious reactance only with regard to in-group restrictions but not with regard to out-group restrictions. In contrast, participants with a more independent self-construal experience stronger reactance when being self-restricted compared to vicariously-restricted. Study 2 (N = 180) replicates this pattern conceptually with regard to individualistic and collectivistic cultural background groups. Additionally, participants' behavioral intentions show the same pattern of results. Moreover, a mediation analysis demonstrates that cultural differences in behavioral intentions could be explained through people's self-construal differences. Thus, the present studies provide new insights and show consistent evidence for vicarious reactance depending on participants' culturally determined self-construal.

  3. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  4. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  5. Functions of personal and vicarious life stories: Identity and empathy

    DEFF Research Database (Denmark)

    Lind, Majse; Thomsen, Dorthe Kirkegaard

    2018-01-01

    The present study investigates functions of personal and vicarious life stories focusing on identity and empathy. Two-hundred-and-forty Danish high school students completed two life story questionnaires: one for their personal life story and one for a close other’s life story. In both questionnaires, they identified up to 10 chapters and self-rated the chapters on valence and valence of causal connections. In addition, they completed measures of identity disturbance and empathy. More positive personal life stories were related to lower identity disturbance and higher empathy. Vicarious life stories showed a similar pattern with respect to identity but surprisingly were unrelated to empathy. In addition, we found positive correlations between personal and vicarious life stories for number of chapters, chapter valence, and valence of causal connections. The study indicates that both personal...

  6. Effects of vicarious pain on self-pain perception: investigating the role of awareness

    Science.gov (United States)

    Terrighena, Esslin L; Lu, Ge; Yuen, Wai Ping; Lee, Tatia MC; Keuper, Kati

    2017-01-01

    The observation of pain in others may enhance or reduce self-pain, yet the boundary conditions and factors that determine the direction of such effects are poorly understood. The current study set out to show that visual stimulus awareness plays a crucial role in determining whether vicarious pain primarily activates behavioral defense systems that enhance pain sensitivity and stimulate withdrawal or appetitive systems that attenuate pain sensitivity and stimulate approach. We employed a mixed factorial design with the between-subject factors exposure time (subliminal vs optimal) and vicarious pain (pain vs no pain images), and the within-subject factor session (baseline vs trial) to investigate how visual awareness of vicarious pain images affects subsequent self-pain in the cold-pressor test. Self-pain tolerance, intensity and unpleasantness were evaluated in a sample of 77 healthy participants. Results revealed significant interactions of exposure time and vicarious pain in all three dependent measures. In the presence of visual awareness (optimal condition), vicarious pain compared to no-pain elicited overall enhanced self-pain sensitivity, indexed by reduced pain tolerance and enhanced ratings of pain intensity and unpleasantness. Conversely, in the absence of visual awareness (subliminal condition), vicarious pain evoked decreased self-pain intensity and unpleasantness while pain tolerance remained unaffected. These findings suggest that the activation of defense mechanisms by vicarious pain depends on relatively elaborate cognitive processes, while – strikingly – the appetitive system is activated in a highly automatic manner, independently of stimulus awareness. Such mechanisms may have evolved to facilitate empathic, protective approach responses toward suffering individuals, ensuring survival of the protective social group. PMID:28831270

  7. The Picmonic® Learning System: enhancing memory retention of medical sciences, using an audiovisual mnemonic Web-based learning platform

    Directory of Open Access Journals (Sweden)

    Yang A

    2014-05-01

    Adeel Yang,1,* Hersh Goel,1,* Matthew Bryan,2 Ron Robertson,1 Jane Lim,1 Shehran Islam,1 Mark R Speicher2 1College of Medicine, The University of Arizona, Tucson, AZ, USA; 2Arizona College of Osteopathic Medicine, Midwestern University, Glendale, AZ, USA *These authors contributed equally to this work Background: Medical students are required to retain vast amounts of medical knowledge on the path to becoming physicians. To address this challenge, multimedia Web-based learning resources have been developed to supplement traditional text-based materials. The Picmonic® Learning System (PLS; Picmonic, Phoenix, AZ, USA) is a novel multimedia Web-based learning platform that delivers audiovisual mnemonics designed to improve memory retention of medical sciences. Methods: A single-center, randomized, subject-blinded, controlled study was conducted to compare the PLS with traditional text-based material for retention of medical science topics. Subjects were randomly assigned to use two different types of study materials covering several diseases. Subjects randomly assigned to the PLS group were given audiovisual mnemonics along with text-based materials, whereas subjects in the control group were given the same text-based materials with key terms highlighted. The primary endpoints were the differences in performance on immediate, 1 week, and 1 month delayed free-recall and paired-matching tests. The secondary endpoints were the difference in performance on a 1 week delayed multiple-choice test and self-reported satisfaction with the study materials. Differences were calculated using unpaired two-tailed t-tests. Results: PLS group subjects demonstrated improvements of 65%, 161%, and 208% compared with control group subjects on free-recall tests conducted immediately, 1 week, and 1 month after study of materials, respectively. The results of performance on paired-matching tests showed an improvement of up to 331% for PLS group subjects. PLS group

  8. Use of Audiovisual Media and Equipment by Medical Educationists ...

    African Journals Online (AJOL)

    The most frequently used audiovisual medium and equipment is the transparency on an overhead projector (OHP), while the medium and equipment barely used for teaching is computer graphics on a multimedia projector. This study also suggests ways of improving teaching-learning processes in medical education, ...

  9. Common and distinct neural correlates of personal and vicarious reward: A quantitative meta-analysis

    Science.gov (United States)

    Morelli, Sylvia A.; Sacchet, Matthew D.; Zaki, Jamil

    2015-01-01

    Individuals experience reward not only when directly receiving positive outcomes (e.g., food or money), but also when observing others receive such outcomes. This latter phenomenon, known as vicarious reward, is a perennial topic of interest among psychologists and economists. More recently, neuroscientists have begun exploring the neuroanatomy underlying vicarious reward. Here we present a quantitative whole-brain meta-analysis of this emerging literature. We identified 25 functional neuroimaging studies that included contrasts between vicarious reward and a neutral control, and subjected these contrasts to an activation likelihood estimate (ALE) meta-analysis. This analysis revealed a consistent pattern of activation across studies, spanning structures typically associated with the computation of value (especially ventromedial prefrontal cortex) and mentalizing (including dorsomedial prefrontal cortex and superior temporal sulcus). We further quantitatively compared this activation pattern to activation foci from a previous meta-analysis of personal reward. Conjunction analyses yielded overlapping VMPFC activity in response to personal and vicarious reward. Contrast analyses identified preferential engagement of the nucleus accumbens in response to personal as compared to vicarious reward, and of mentalizing-related structures in response to vicarious as compared to personal reward. These data shed light on the common and unique components of the reward that individuals experience directly and through their social connections. PMID:25554428

  10. Researching embodied learning by using videographic participation for data collection and audiovisual narratives for dissemination - illustrated by the encounter between two acrobats

    DEFF Research Database (Denmark)

    Degerbøl, Stine; Svendler Nielsen, Charlotte

    2015-01-01

    to qualitative research and presents a case from contemporary circus education examining embodied learning, whereas the particular focus in this article is methodology and the development of a dissemination strategy for empirical material generated through videographic participation. Drawing on contributions concerned with the senses from the field of sport sciences and from the field of visual anthropology and sensory ethnography, the article concludes that using videographic participation and creating audiovisual narratives might be a good option to capture the multisensuous dimensions of a learning situation.

  11. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    Science.gov (United States)

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  12. Catching Audiovisual Interactions With a First-Person Fisherman Video Game.

    Science.gov (United States)

    Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2017-07-01

    The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.

  13. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    29 CFR, Labor, Part 1 (revised as of 2010-07-01), Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.13 Audiovisual coverage prohibited: The Department shall not permit audiovisual coverage of the...

  14. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model under a no-frame-independency assumption. The experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  15. Vicarious Trauma: Predictors of Clinicians' Disrupted Cognitions about Self-Esteem and Self-Intimacy

    Science.gov (United States)

    Way, Ineke; VanDeusen, Karen; Cottrell, Tom

    2007-01-01

    This study examined vicarious trauma in clinicians who provide sexual abuse treatment (N = 383). A random sample of clinical members from the Association for the Treatment of Sexual Abusers and American Professional Society on the Abuse of Children were surveyed. Vicarious trauma was measured using the Trauma Stress Institute Belief Scale…

  16. Game of Objects: vicarious causation and multi-modal media

    Directory of Open Access Journals (Sweden)

    Aaron Pedinotti

    2013-09-01

    This paper applies philosopher Graham Harman's object-oriented theory of "vicarious causation" to an analysis of the multi-modal media phenomenon known as "Game of Thrones." Examining the manner in which George R.R. Martin's best-selling series of fantasy novels has been adapted into a board game, a video game, and a hit HBO television series, it uses the changes entailed by these processes to trace the contours of vicariously generative relations. In the course of the resulting analysis, it provides new suggestions concerning the eidetic dimensions of Harman's causal model, particularly with regard to causation in linear networks and in differing types of game systems.

  17. Plantilla 1: The audiovisual document: important elements

    OpenAIRE

    Alemany, Dolores

    2011-01-01

    The concept of the audiovisual document and of audiovisual documentation, with particular attention to the distinction between moving-image documentation (with possible incorporation of sound) and the concept of audiovisual documentation as posed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works, and audiovisual heritage according to Félix del Valle.

  18. Effects of vicarious pain on self-pain perception: investigating the role of awareness

    Directory of Open Access Journals (Sweden)

    Terrighena EL

    2017-07-01

    Esslin L Terrighena,1,2 Ge Lu,1 Wai Ping Yuen,1 Tatia M C Lee,1–4 Kati Keuper1,2,5 1Department of Psychology, Laboratory of Neuropsychology, The University of Hong Kong, Hong Kong; 2Laboratory of Social Cognitive Affective Neuroscience, The University of Hong Kong, Hong Kong; 3The State Key Laboratory of Brain and Cognitive Sciences, Hong Kong; 4Institute of Clinical Neuropsychology, The University of Hong Kong, Hong Kong; 5Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany Abstract: The observation of pain in others may enhance or reduce self-pain, yet the boundary conditions and factors that determine the direction of such effects are poorly understood. The current study set out to show that visual stimulus awareness plays a crucial role in determining whether vicarious pain primarily activates behavioral defense systems that enhance pain sensitivity and stimulate withdrawal or appetitive systems that attenuate pain sensitivity and stimulate approach. We employed a mixed factorial design with the between-subject factors exposure time (subliminal vs optimal) and vicarious pain (pain vs no pain images), and the within-subject factor session (baseline vs trial) to investigate how visual awareness of vicarious pain images affects subsequent self-pain in the cold-pressor test. Self-pain tolerance, intensity and unpleasantness were evaluated in a sample of 77 healthy participants. Results revealed significant interactions of exposure time and vicarious pain in all three dependent measures. In the presence of visual awareness (optimal condition), vicarious pain compared to no-pain elicited overall enhanced self-pain sensitivity, indexed by reduced pain tolerance and enhanced ratings of pain intensity and unpleasantness. Conversely, in the absence of visual awareness (subliminal condition), vicarious pain evoked decreased self-pain intensity and unpleasantness while pain tolerance remained unaffected. These

  19. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asynchronies between the different modalities.

  20. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Building on our previous studies of emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time operation and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features were selected to represent emotional information, and 52 audiovisual features were selected when the synchronized speech and video streams were fused. The experimental results demonstrate that this system performs well in real-time operation and has a high recognition rate. Our results also suggest that multimodal fused recognition will become the trend in emotion recognition.
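    The abstract's rough set-based feature selection is not reproduced here; as a hedged illustration of the same filter-style idea (score each candidate feature against the emotion labels, keep the top k), the following is a minimal pure-Python sketch that uses mutual information as the relevance score instead of rough-set reducts. All function and feature names are hypothetical.

    ```python
    from collections import Counter
    import math

    def mutual_information(xs, ys):
        """Estimate I(X; Y) in bits for two discrete feature sequences."""
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        mi = 0.0
        for (x, y), c in pxy.items():
            p_xy = c / n
            mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
        return mi

    def select_features(feature_columns, labels, k):
        """Rank discretized feature columns by mutual information with the
        emotion labels and keep the top k (a stand-in for rough-set reduction)."""
        scored = sorted(feature_columns.items(),
                        key=lambda kv: mutual_information(kv[1], labels),
                        reverse=True)
        return [name for name, _ in scored[:k]]

    # Toy usage: "pitch" tracks the labels perfectly, "blink" is constant.
    cols = {"pitch": [0, 0, 1, 1, 0, 1], "blink": [1, 1, 1, 1, 1, 1]}
    print(select_features(cols, [0, 0, 1, 1, 0, 1], 1))  # → ['pitch']
    ```

    In practice the 37 speech and 33 facial features would first be discretized, and the cutoff k (13 and 10 in the paper) chosen by cross-validated classifier accuracy.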

  1. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  2. Vicarious Effort-Based Decision-Making in Autism Spectrum Disorders.

    Science.gov (United States)

    Mosner, Maya G; Kinard, Jessica L; McWeeny, Sean; Shah, Jasmine S; Markiewitz, Nathan D; Damiano-Goodwin, Cara R; Burchinal, Margaret R; Rutherford, Helena J V; Greene, Rachel K; Treadway, Michael T; Dichter, Gabriel S

    2017-10-01

    This study investigated vicarious effort-based decision-making in 50 adolescents with autism spectrum disorders (ASD) compared to 32 controls using the Effort Expenditure for Rewards Task. Participants made choices to win money for themselves or for another person. When choosing for themselves, the ASD group exhibited relatively similar patterns of effort-based decision-making across reward parameters. However, when choosing for another person, the ASD group demonstrated relatively decreased sensitivity to reward magnitude, particularly in the high magnitude condition. Finally, patterns of responding in the ASD group were related to individual differences in consummatory pleasure capacity. These findings indicate atypical vicarious effort-based decision-making in ASD and more broadly add to the growing body of literature addressing social reward processing deficits in ASD.

  3. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    ... 29 CFR 2.12 (Labor, 2010-07-01 edition), Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the Department...

  4. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and requirements for extension to support audiovisual files.

  5. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    Full Text Available It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sound, and (iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  6. Vicarious traumatization and coping in medical students: a pilot study.

    Science.gov (United States)

    Al-Mateen, Cheryl S; Linker, Julie A; Damle, Neha; Hupe, Jessica; Helfer, Tamara; Jessick, Veronica

    2015-02-01

    This study explored the impact of traumatic experiences on medical students during their clerkships. Medical students completed an anonymous online survey inquiring about traumatic experiences on required clerkships during their third year of medical school, including any symptoms they may have experienced as well as coping strategies they may have used. Twenty-six percent of students reported experiencing vicarious traumatization (VT) during their third year of medical school. The experience of VT in medical students is relevant to medical educators, given that the resulting symptoms may impact student performance and learning as well as ongoing well-being. Fifty percent of the students who experienced VT in this study did so on the psychiatry clerkship. It is important for psychiatrists to recognize that this is a potential risk for students in order to increase the likelihood that appropriate supports are provided.

  7. [Audiovisual resources for teaching and learning in the classroom: analysis and proposal of a training model]

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    Full Text Available The use of the audiovisual, graphic, and digital resources currently being introduced into the educational system is spreading across several countries of the region, such as Chile, Colombia, Mexico, Cuba, El Salvador, Uruguay, and Venezuela. Subtopics related to media education are analyzed and justified, starting from the initiative of Spain and Portugal, countries that became international leaders of several educational models in the university context. Owing to the expansion of, and focus on, computing and the information and communication networks of the internet, the audiovisual medium as a technological instrument is gaining ground as a dynamic and integrative resource, with special characteristics that distinguish it from the other media that make up the audiovisual ecosystem. As a result of this research, two lines of application are proposed: (A) iconic and audiovisual language as a learning objective and/or curricular subject in university study plans, with workshops on the audiovisual document, digital photography, and audiovisual production; and (B) the use of audiovisual resources as an educational medium, which would require a prior process of training the teaching community through activities recommended for teachers and students, respectively. Accordingly, suggestions are presented for implementing both lines of academic action. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  8. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. It is also argued that insights from quantitative experimental studies and qualitative audiovisual film analysis may be combined into a more complex understanding of how audiovisual features interact in the minds of their audiences. This is demonstrated through a review of a series of experimental studies. Yet it is also argued that textual analysis and concepts from within film and music studies can provide insights...

  9. [Improving the quality of science learning through the problem-based learning (PBL) model using audiovisual media]

    Directory of Open Access Journals (Sweden)

    Endang Eka Wulandari, Sri Hartati

    2016-11-01

    Full Text Available The aim of this study was to improve the quality of science learning in fourth-grade students through the PBL model using audiovisual media. The study used a classroom action research design conducted over three cycles. Data were analyzed using quantitative and qualitative descriptive techniques. The results showed that (1) teacher skills scored 18 in cycle I and 22 in cycle II, rising to 25 in cycle III; (2) student activity scored 16.8 in cycle I and 22 in cycle II, rising to 24.4 in cycle III; (3) student response was 71% in cycle I and 78% in cycle II, rising to 92% in cycle III; and (4) classical learning mastery was 60% in cycle I and 73% in cycle II, rising to 94% in cycle III. The study concludes that the PBL model using audiovisual media can improve the quality of science learning, as indicated by improvements in teacher skills, student activity, student response, and student learning outcomes.

  10. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    Science.gov (United States)

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  11. Social learning theory and the effects of living arrangement on heavy alcohol use: results from a national study of college students.

    Science.gov (United States)

    Ward, Brian W; Gryczynski, Jan

    2009-05-01

    This study examined the relationship between living arrangement and heavy episodic drinking among college students in the United States. Using social learning theory as a framework, it was hypothesized that vicarious learning of peer and family alcohol-use norms would mediate the effects of living arrangement on heavy episodic drinking. Analyses were conducted using data from the 2001 Harvard School of Public Health College Alcohol Study, a national survey of full-time undergraduate students attending 4-year colleges or universities in the United States (N = 10,008). Logistic regression models examined the relationship between heavy episodic drinking and various measures of living arrangement and vicarious learning/social norms. Mediation of the effects of living arrangement was tested using both indirect and direct methods. Both student living arrangement and vicarious-learning/social-norm variables remained significant predictors of heavy episodic drinking in multivariate models when controlling for a variety of individual characteristics. Slight mediation of the effects of living arrangement on heavy episodic drinking by vicarious learning/social norms was confirmed for some measures. Although vicarious learning of social norms does appear to play a role in the association between living arrangement and alcohol use, other processes may underlie the relationship. These findings suggest that using theory alongside empirical evidence to inform the manipulation of living environments could present a promising policy strategy to reduce alcohol-related harm in collegiate contexts.
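    The mediation logic tested above (the effect of living arrangement on heavy drinking should attenuate once vicarious-learning/social-norm variables enter the model) can be sketched in a few lines. This is a toy illustration on synthetic data, not the study's survey-weighted models; `fit_logistic`, the variable names, and the data-generating choices are all assumptions.

    ```python
    import math
    import random

    def fit_logistic(X, y, lr=0.05, epochs=300):
        """Bare-bones stochastic-gradient logistic regression.
        Returns weights [intercept, w1, ..., wk]."""
        w = [0.0] * (len(X[0]) + 1)
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
                p = 1.0 / (1.0 + math.exp(-z))
                err = yi - p
                w[0] += lr * err
                for j, xj in enumerate(xi):
                    w[j + 1] += lr * err * xj
        return w

    # Hypothetical synthetic data: dorm living (a) raises perceived drinking
    # norms (m), and the norms, not the living arrangement itself, drive
    # heavy episodic drinking (y).
    random.seed(1)
    a = [random.randint(0, 1) for _ in range(400)]
    m = [ai if random.random() < 0.9 else 1 - ai for ai in a]
    y = [mi if random.random() < 0.9 else 1 - mi for mi in m]

    w_total = fit_logistic([[ai] for ai in a], y)             # y ~ a
    w_adjusted = fit_logistic([list(t) for t in zip(a, m)], y)  # y ~ a + m

    # Mediation shows up as attenuation: the coefficient on living
    # arrangement shrinks once the norms variable is in the model.
    print(abs(w_adjusted[1]) < abs(w_total[1]))
    ```

    A full analysis would use a fitted package model with standard errors and a formal indirect-effect test (e.g., bootstrapping the product of paths) rather than eyeballing coefficient shrinkage.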

  12. Audiovisual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    Full Text Available The article analyzes the effectiveness of audiovisual regulation and assesses the various arguments for and against the existence of broadcasting authorities at the state level. The debate over the necessity of such a body in Spain is still active. Most European Community countries have created competent authorities in this area, such as OFCOM in the United Kingdom and the CSA in France. In Spain, audiovisual regulation is limited to regional bodies, such as the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía, and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also examined in this article.

  13. Interactive Whiteboard (IWB): assessment of didactic interaction and a proposal for applying audiovisual narrative

    Directory of Open Access Journals (Sweden)

    Francisco García García

    2011-04-01

    Full Text Available The use of audiovisual material in the classroom does not guarantee effective learning, but for students it remains an interesting and attractive element. This work, which brings together two studies (the first showing the importance of didactic interaction with the IWB, the second providing a list of audiovisual narrative elements that can be applied in the classroom), proposes mastery of the elements of audiovisual narrative as a theoretical foundation for teachers who want to produce audiovisual content for digital platforms such as the Interactive Whiteboard (IWB). The text is divided into three parts: the first presents the theoretical concepts of the two studies, the second discusses the results of both, and the third proposes a pedagogical practice of didactic interaction using audiovisual narrative elements on the IWB.

  14. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds.

    Science.gov (United States)

    Jesse, Alexandra; Johnson, Elizabeth K

    2016-05-01

    Analyses of caregiver-child communication suggest that an adult tends to highlight objects in a child's visual scene by moving them in a manner that is temporally aligned with the adult's speech productions. Here, we used the looking-while-listening paradigm to examine whether 25-month-olds use audiovisual temporal alignment to disambiguate and learn novel word-referent mappings in a difficult word-learning task. Videos of two equally interesting and animated novel objects were simultaneously presented to children, but the movement of only one of the objects was aligned with an accompanying object-labeling audio track. No social cues (e.g., pointing, eye gaze, touch) were available to the children because the speaker was edited out of the videos. Immediately afterward, toddlers were presented with still images of the two objects and asked to look at one or the other. Toddlers looked reliably longer to the labeled object, demonstrating their acquisition of the novel word-referent mapping. A control condition showed that children's performance was not solely due to the single unambiguous labeling that had occurred at experiment onset. We conclude that the temporal link between a speaker's utterances and the motion they imposed on the referent object helps toddlers to deduce a speaker's intended reference in a difficult word-learning scenario. In combination with our previous work, these findings suggest that intersensory redundancy is a source of information used by language users of all ages. That is, intersensory redundancy is not just a word-learning tool used by young infants. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation
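    The temporal binding window described above is typically estimated from simultaneity-judgment trials: the span of audiovisual onset asynchronies (SOAs) over which observers still report "simultaneous" often enough. A minimal sketch of that estimate, under an assumed sign convention (negative SOA = auditory-leading, positive = visual-leading) and an assumed 75% threshold:

    ```python
    from collections import defaultdict

    def binding_window(trials, threshold=0.75):
        """Estimate a temporal binding window from simultaneity judgments.

        trials: (soa_ms, judged_simultaneous) pairs; negative SOAs mean the
        auditory stimulus led, positive SOAs mean vision led (an assumed
        convention). Returns (auditory-leading edge, visual-leading edge):
        the extreme SOAs at which the proportion of 'simultaneous'
        responses still reaches the threshold.
        """
        counts = defaultdict(lambda: [0, 0])  # soa -> [n_simultaneous, n_total]
        for soa, judged in trials:
            counts[soa][0] += int(judged)
            counts[soa][1] += 1
        above = [soa for soa, (s, n) in counts.items() if s / n >= threshold]
        return min(above), max(above)

    # Toy data, 10 trials per SOA, with counts chosen to mimic the reported
    # asymmetry: a narrower window on the auditory-leading side.
    trials = []
    for soa, hits in [(-200, 2), (-100, 5), (0, 10), (100, 9), (200, 8), (300, 4)]:
        trials += [(soa, i < hits) for i in range(10)]
    print(binding_window(trials))  # → (0, 200)
    ```

    Studies in this literature usually fit a psychometric function per leading modality rather than thresholding raw proportions; the sketch only shows why the two window edges can be measured, and trained, separately.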

  16. In-Orbit Vicarious Calibration for Ocean Color and Aerosol Products

    National Research Council Canada - National Science Library

    Wang, Menghua

    2005-01-01

    It is well known that, to accurately retrieve the spectrum of the water-leaving radiance and derive the ocean color products from satellite sensors, a vicarious calibration procedure, which performs...

  17. Audiovisual perception in amblyopia: A review and synthesis.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  18. Using modeling and vicarious reinforcement to produce more positive attitudes toward mental health treatment.

    Science.gov (United States)

    Buckley, Gary I; Malouff, John M

    2005-05-01

    In this study, the authors evaluated the effectiveness of a video, developed for this study and using principles of cognitive learning theory, to produce positive attitudinal change toward mental health treatment. The participants were 35 men and 45 women who were randomly assigned to watch either an experimental video, which included 3 positive 1st-person accounts of psychotherapy or a control video that focused on the psychological construct of self. Pre-intervention, post-intervention, and 2-week follow-up levels of attitude toward mental health treatment were measured using the Attitude Toward Seeking Professional Help Scale (E. H. Fischer & J. L. Turner, 1970). The experimental video group showed a significantly greater increase in positive attitude than did the control group. These results support the effectiveness of using the vicarious reinforcement elements of cognitive learning theory as a basis for changing attitudes toward mental health treatment.

  19. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  20. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively

  1. Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.

    Science.gov (United States)

    Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P

    2018-02-01

    Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    Science.gov (United States)

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  3. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we show that integrating the information from audio and video leads to an improved reliability of audiovisual approach in

  4. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  5. [Learning to use semiautomatic external defibrillators through audiovisual materials for schoolchildren].

    Science.gov (United States)

    Jorge-Soto, Cristina; Abelairas-Gómez, Cristian; Barcala-Furelos, Roberto; Gregorio-García, Carolina; Prieto-Saborit, José Antonio; Rodríguez-Núñez, Antonio

    2016-01-01

    To assess the ability of schoolchildren to use an automated external defibrillator (AED) to deliver an effective shock, and their retention of the skill 1 month after a training exercise supported by audiovisual materials. Quasi-experimental controlled study in 205 initially untrained schoolchildren aged 6 to 16 years. AEDs were used to apply shocks to manikins. The students took a baseline skill test (T0) and were then randomized to an experimental or control group in the first phase (T1). The experimental group watched a training video, and both groups were then retested. The children were tested in simulations again 1 month later (T2). A total of 196 students completed all 3 phases. Ninety-six (95.0%) of the secondary school students and 54 (56.8%) of the primary schoolchildren were able to explain what an AED is. Twenty of the secondary school students (19.8%) and 8 of the primary schoolchildren (8.4%) said they knew how to use one. At T0, 78 participants (39.8%) were able to simulate an effective shock. At T1, 36 controls (34.9%) and 56 experimental-group children (60.2%) achieved an effective shock (P…). Audiovisual instruction improves students' skill in managing an AED and helps them retain what they learned for later use.

  6. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  7. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  8. Employers' Statutory Vicarious Liability in Terms of the Protection of Personal Information Act

    Directory of Open Access Journals (Sweden)

    Daleen Millard

    2016-07-01

    Full Text Available A person whose privacy has been infringed upon through the unlawful, culpable processing of his or her personal information can sue the infringer's employer based on vicarious liability or institute action based on the Protection of Personal Information Act 4 of 2013 (POPI). Section 99(1) of POPI provides a person (a "data subject") whose privacy has been infringed upon with the right to institute a civil action against the responsible party. POPI defines the responsible party as the person who determines the purpose of and means for the processing of the personal information of data subjects. Although POPI does not equate a responsible party to an employer, the term "responsible party" is undoubtedly a synonym for "employer" in this context. By holding an employer accountable for its employees' unlawful processing of a data subject's personal information, POPI creates a form of statutory vicarious liability. Since the defences available to an employer at common law and developed by case law differ from the statutory defences available to an employer in terms of POPI, it is necessary to compare the impact this new statute has on employers. From a risk perspective, employers must be aware of the serious implications of POPI. The question that arises is whether the Act perhaps takes matters too far. This article takes a critical look at the statutory defences available to an employer in defending a vicarious liability action brought by a data subject in terms of section 99(1) of POPI. It compares the defences found in section 99(2) of POPI with the common-law defences available to an employer fending off a delictual claim founded on the doctrine of vicarious liability. To support the argument that the statutory vicarious liability created by POPI is too harsh, the defences contained in section 99(2) of POPI are further analogised with those available to an employer in terms of section 60(4) of the Employment Equity Act 55 of 1998 (EEA) and other…

  9. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No…

  10. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
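Studies of this kind typically quantify the audiovisual temporal window of simultaneity as the range of stimulus-onset asynchronies over which paired stimuli are judged synchronous above some criterion. A toy sketch of that computation (the data, the 0.5 criterion, and the linear interpolation are illustrative assumptions, not the authors' method):

```python
# Estimate the width of the audiovisual temporal binding window from
# simultaneity-judgment proportions at a set of onset asynchronies.
def window_width(soas, p_sync, criterion=0.5):
    """Linear-interpolated width of the region where p_sync >= criterion.

    soas   : onset asynchronies in ms (audio-lead negative, audio-lag positive)
    p_sync : proportion of 'synchronous' responses at each asynchrony
    """
    crossings = []
    for (s0, p0), (s1, p1) in zip(zip(soas, p_sync), zip(soas[1:], p_sync[1:])):
        if (p0 - criterion) * (p1 - criterion) < 0:  # criterion is crossed here
            crossings.append(s0 + (criterion - p0) * (s1 - s0) / (p1 - p0))
    # Need a rising and a falling crossing to bound the window.
    return max(crossings) - min(crossings) if len(crossings) >= 2 else None

# Invented example data: a bell-shaped synchrony-judgment curve.
soas = [-400, -200, 0, 200, 400]
p = [0.1, 0.7, 0.95, 0.8, 0.2]
print(window_width(soas, p))
```

Narrower windows would indicate more precise audiovisual temporal discrimination; comparing widths across age groups is one way the developmental trajectory described above can be expressed numerically.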

  11. Audiovisual perceptual learning with multiple speakers.

    Science.gov (United States)

    Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J

    2016-05-01

    One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "b-face" mouthed /aba/ when an ambiguous token was played, while the "d-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a stilled image of the "b-face" than with an image of the "d-face." This was not the case in the control condition when the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

  12. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    …in a manner that allowed the subjective audiovisual evaluation of loudspeakers under controlled conditions. Additionally, unimodal audio and visual evaluations were used as a baseline for comparison. The same procedure was applied in the investigation of the validity of less than optimal stimuli presentations…

  13. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  14. Vicarious Effort-Based Decision-Making in Autism Spectrum Disorders

    Science.gov (United States)

    Mosner, Maya G.; Kinard, Jessica L.; McWeeny, Sean; Shah, Jasmine S.; Markiewitz, Nathan D.; Damiano-Goodwin, Cara R.; Burchinal, Margaret R.; Rutherford, Helena J. V.; Greene, Rachel K.; Treadway, Michael T.; Dichter, Gabriel S.

    2017-01-01

    This study investigated vicarious effort-based decision-making in 50 adolescents with autism spectrum disorders (ASD) compared to 32 controls using the Effort Expenditure for Rewards Task. Participants made choices to win money for themselves or for another person. When choosing for themselves, the ASD group exhibited relatively similar patterns…

  15. Effects of Vicarious Experiences of Nature, Environmental Attitudes, and Outdoor Recreation Benefits on Support for Increased Funding Allocations

    Science.gov (United States)

    Kil, Namyun

    2016-01-01

    This study examined the effects of vicarious experiences of nature, environmental attitudes, and recreation benefits sought by participants on their support for funding of natural resources and alternative energy options. Using a national scenic trail user survey, results demonstrated that vicarious experiences of nature influenced environmental…

  16. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. The Moderating Effects of Peer and Parental Support on the Relationship Between Vicarious Victimization and Substance Use.

    Science.gov (United States)

    Miller, Riane N; Fagan, Abigail A; Wright, Emily M

    2014-10-01

    General strain theory (GST) hypothesizes that youth are more likely to engage in delinquency when they experience vicarious victimization, defined as knowing about or witnessing violence perpetrated against others, but that this relationship may be attenuated for those who receive social support from significant others. Based on prospective data from youth aged 8 to 17 participating in the Project on Human Development in Chicago Neighborhoods (PHDCN), this article found mixed support for these hypotheses. Controlling for prior involvement in delinquency, as well as other risk and protective factors, adolescents who reported more vicarious victimization had an increased likelihood of alcohol use in the short term, but not the long term, and victimization was not related to tobacco or marijuana use. Peer support did not moderate the relationship between vicarious victimization and substance use, but family support did. In contrast to strain theory's predictions, the relationship between vicarious victimization and substance use was stronger for those who had higher compared with lower levels of family support. Implications of these findings for strain theory and future research are discussed.

  18. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  19. Vicarious neural processing of outcomes during observational learning.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula, and the posterior superior temporal sulcus (pSTS).

  20. Vicarious neural processing of outcomes during observational learning.

    Science.gov (United States)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  1. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  2. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  3. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  4. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    Section 1237.16 (Parks, Forests, and Public Property; National Archives and Records Administration; Records Management; Audiovisual, Cartographic, and Related Records Management): How do agencies store audiovisual records? Agencies must maintain appropriate storage conditions for permanent…

  5. A Catalan code of best practices for the audiovisual sector

    OpenAIRE

    Teodoro, Emma; Casanovas, Pompeu

    2010-01-01

    In spite of a new General Law on Audiovisual Communication, the regulatory framework of the audiovisual sector in Spain can still be described as sprawling, dispersed and obsolete. The first part of this paper provides an overview of the major challenges of the Spanish audiovisual sector as a result of the convergence of platforms, services and operators, paying special attention to the audiovisual sector in Catalonia. In the second part, we present an example of self-regulation through…

  6. Vicarious Learning in PBL Variants for Learning Electronics

    Science.gov (United States)

    Podges, Martin; Kommers, Piet

    2017-01-01

    Three different groups in a class of first-year tertiary engineering students had to solve a problem based on a project by applying the distinctive problem-based learning (PBL) approach. Each group's project (PBL project) was then studied by the other two groups after successful completion and demonstration. Each group then had to study the…

  7. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  8. Vocabulary Teaching in Foreign Language via Audiovisual Method Technique of Listening and Following Writing Scripts

    Science.gov (United States)

    Bozavli, Ebubekir

    2017-01-01

    The objective of this study is to compare the effects of conventional and audiovisual methods on learning efficiency and success of retention with regard to vocabulary teaching in foreign language. The research sample consists of 21 undergraduate and 7 graduate students studying at the Department of French Language Teaching, Kazim Karabekir Faculty of…

  9. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
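The "partialing out" step described above, controlling the association between two measures for a third (nonverbal IQ), can be computed from the three pairwise Pearson correlations. A minimal, self-contained illustration with made-up scores (the variable names and data are hypothetical, not the study's):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation between x and y after partialing out z (first-order formula)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) ** 0.5 * (1 - ryz ** 2) ** 0.5)

# Made-up scores: audio-weighted incongruent ratings, onset age of training, nonverbal IQ.
ratings = [0.9, 0.8, 0.6, 0.5, 0.3, 0.2]
onset_age = [4, 5, 7, 9, 11, 13]
iq = [120, 115, 110, 108, 102, 100]
print(partial_corr(ratings, onset_age, iq))
```

If the partial correlation stays substantial while the raw correlation with a confound drops out, the residual association (here, with age of onset) survives the control, which is the pattern the abstract reports.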

  10. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  11. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
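The adaptation effect above is defined as the shift in the mean of a fitted psychometric function after asynchrony adaptation. As a rough stand-in for Gaussian fitting, the mean can be approximated by the centroid of the synchrony-response distribution; the sketch below uses that simplification with invented data (this is not the authors' fitting procedure):

```python
def psychometric_mean(soas, p_sync):
    """Centroid of the synchrony responses over onset asynchronies (ms):
    a crude stand-in for the mean of a fitted Gaussian psychometric function."""
    total = sum(p_sync)
    return sum(s * p for s, p in zip(soas, p_sync)) / total

def adaptation_effect(soas, p_before, p_after):
    """Shift of the psychometric mean after adapting to asynchrony (ms)."""
    return psychometric_mean(soas, p_after) - psychometric_mean(soas, p_before)

# Invented data: after sound-lag adaptation, more sound-lag (positive SOA)
# pairs are judged synchronous, shifting the mean toward positive values.
soas = [-300, -150, 0, 150, 300]
p_before = [0.1, 0.6, 0.9, 0.6, 0.1]
p_after = [0.1, 0.4, 0.9, 0.8, 0.3]
print(adaptation_effect(soas, p_before, p_after))
```

A positive shift here mirrors the reported pattern for sound-lag adaptation; a near-zero shift would correspond to the absent effect for sound-lead pairs.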

  12. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.05). Race-model analysis showed that audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
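The race-model analysis mentioned in this abstract compares the reaction-time distribution for audiovisual stimuli against a bound derived from the two unimodal distributions (Miller's race-model inequality). A minimal sketch with synthetic reaction times, not the study's data or exact test procedure:

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times, evaluated at times t (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t):
    """Miller's inequality: under a race of independent unimodal processes,
    P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).  Positive return values mean
    the audiovisual CDF exceeds that bound, i.e. facilitation beyond what a
    race alone can produce (evidence of multisensory integration)."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

# Synthetic example: audiovisual responses faster than either unimodal condition.
rng = np.random.default_rng(0)
rt_a  = rng.normal(420, 40, 200)   # auditory-only RTs, ms
rt_v  = rng.normal(450, 40, 200)   # visual-only RTs, ms
rt_av = rng.normal(330, 35, 200)   # audiovisual RTs, ms
t = np.linspace(250, 400, 16)
violation = race_model_violation(rt_a, rt_v, rt_av, t)  # positive at fast quantiles
```

An "absent" integration result, as reported for the PD group, corresponds to the violation never rising significantly above zero at any quantile.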

  13. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our...... knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is whether perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity...... on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, we attempt to uncover the relation between perception of faces and audiovisual integration. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less...

  14. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  15. Testing Audiovisual Comprehension Tasks with Questions Embedded in Videos as Subtitles: A Pilot Multimethod Study

    Science.gov (United States)

    Núñez, Juan Carlos Casañ

    2017-01-01

    Listening, watching, reading and writing simultaneously in a foreign language is very complex. This paper is part of wider research which explores the use of audiovisual comprehension questions imprinted in the video image in the form of subtitles and synchronized with the relevant fragments for the purpose of language learning and testing.…

  16. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, difficulties imposed by the verbal foundation of knowledge, their reduced use within libraries, and paths toward a more consistent analysis of audiovisual media, supported by the semiotics of Charles Peirce.

  17. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test uses a simple rating method. The audiovisual signal used in this test is a combination of images compressed with the JPEG codec and sound samples compressed with MPEG-1 Layer III. Images and sounds have various contents. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e. uncompressed, signals.

  18. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

This article takes a perceptual approach to audio-visual mapping. Clearly perceivable cause-and-effect relationships can be problematic if one desires the audience to experience the music: perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is how an audio-visual mapping can produce a sense of causation and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We report a study that draws upon methods from experimental psychology to inform audio-visual instrument design and composition. Participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

  19. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path and deliver estimates of the audio and video quality. These outputs are sent to the audiovisual quality module, which provides an estimate of the overall audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of quality monitoring. The same audio quality model is used for both phases, while two variants of the video quality model have been developed to address the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, the case in which the network is already set up, the aud...
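Audiovisual quality modules of this general kind are often built around a linear combination of the per-modality scores plus a multiplicative audio×video interaction term, since the two qualities interact perceptually. The function below is an illustrative sketch with placeholder coefficients, not the fitted model from this volume:

```python
def audiovisual_quality(q_audio, q_video,
                        a0=-0.5, a1=0.15, a2=0.60, a3=0.10):
    """Combine per-modality quality estimates (1..5 MOS scale) into a joint
    audiovisual MOS.  The q_audio*q_video term captures the perceptual
    interaction between modalities; video is weighted more heavily here,
    as is common in the literature.  Coefficients are illustrative
    placeholders, not fitted values from any published model."""
    mos = a0 + a1 * q_audio + a2 * q_video + a3 * q_audio * q_video
    return min(5.0, max(1.0, mos))  # clip to the valid MOS range

audiovisual_quality(4.5, 4.5)  # high quality on both channels -> near 5
audiovisual_quality(4.5, 1.5)  # degraded video dominates the joint score
```

In an actual deployment the coefficients would be fitted per codec and application scenario against subjective test data, which is why the volume maintains separate model variants for planning and monitoring.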

  20. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

This paper presents the results of a study on the process of acquiring the interpretative skills needed to decode audiovisual texts among adolescents and youth. Conceiving of this competence as the ability to understand the meanings connoted beneath the literal discourses of audiovisual texts, the study compared two variables: the acquisition of such skills from personal and social experience in the consumption of audiovisual products (which is affected by age differences), and, on the other hand, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  1. Games and (Preparation for Future) Learning

    Science.gov (United States)

    Hammer, Jessica; Black, John

    2009-01-01

    What makes games effective for learning? The authors argue that games provide vicarious experiences for players, which then amplify the effects of future, formal learning. However, not every game succeeds in doing so! Understanding why some games succeed and others fail at this task means investigating both a given game's design and the…

  2. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  3. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content whose video and audio channels may be strongly degraded, or whose audio may even be converted to text. In the experiments, we also consider a perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.
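The conceptual-graph idea can be illustrated by reducing a graph to (concept, relation, concept) triples and scoring an adapted version by the fraction of the source's triples it still conveys. This is a simplified sketch, not the authors' exact formulation:

```python
def semantic_quality(source_graph, adapted_graph):
    """Fraction of the source's conceptual-graph triples still conveyed after
    adaptation (simplified illustration of a semantic quality metric)."""
    if not source_graph:
        return 1.0
    return len(source_graph & adapted_graph) / len(source_graph)

# A hypothetical soccer clip described as (concept, relation, concept) triples.
source = {("player", "kicks", "ball"),
          ("ball", "enters", "goal"),
          ("crowd", "cheers", "team")}
# After a drastic adaptation (e.g. audio converted to text), only part of the
# original semantic content survives.
text_version = {("player", "kicks", "ball"), ("ball", "enters", "goal")}
semantic_quality(source, text_version)  # -> 2/3
```

A perceptual metric would rate the text-only version as severely degraded regardless of content, whereas a semantic metric like this one credits it for the information it still carries.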

  4. Both Direct and Vicarious Experiences of Nature Affect Children's Willingness to Conserve Biodiversity.

    Science.gov (United States)

    Soga, Masashi; Gaston, Kevin J; Yamaura, Yuichi; Kurisu, Kiyo; Hanaki, Keisuke

    2016-05-25

    Children are becoming less likely to have direct contact with nature. This ongoing loss of human interactions with nature, the extinction of experience, is viewed as one of the most fundamental obstacles to addressing global environmental challenges. However, the consequences for biodiversity conservation have been examined very little. Here, we conducted a questionnaire survey of elementary schoolchildren and investigated effects of the frequency of direct (participating in nature-based activities) and vicarious experiences of nature (reading books or watching TV programs about nature and talking about nature with parents or friends) on their affective attitudes (individuals' emotional feelings) toward and willingness to conserve biodiversity. A total of 397 children participated in the surveys in Tokyo. Children's affective attitudes and willingness to conserve biodiversity were positively associated with the frequency of both direct and vicarious experiences of nature. Path analysis showed that effects of direct and vicarious experiences on children's willingness to conserve biodiversity were mediated by their affective attitudes. This study demonstrates that children who frequently experience nature are likely to develop greater emotional affinity to and support for protecting biodiversity. We suggest that children should be encouraged to experience nature and be provided with various types of these experiences.

  5. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  6. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Influencing Republicans' and Democrats' attitudes toward Obamacare: Effects of imagined vicarious cognitive dissonance on political attitudes.

    Science.gov (United States)

    Cooper, Joel; Feldman, Lauren A; Blackman, Shane F

    2018-04-16

    The field of experimental social psychology is appropriately interested in using novel theoretical approaches to implement change in the social world. In the current study, we extended cognitive dissonance theory by creating a new framework of social influence: imagined vicarious dissonance. We used the framework to influence attitudes on an important and controversial political attitude: U.S. citizens' support for the Affordable Care Act (ACA). 36 Republicans and 84 Democrats were asked to imagine fellow Republicans and Democrats, respectively, making attitude discrepant statements under high and low choice conditions about support for the ACA. The data showed that vicarious dissonance, established by imagining a group member make a counterattitudinal speech under high-choice conditions (as compared to low-choice conditions), resulted in greater support for the Act by Republicans and marginally diminished support by Democrats. The results suggest a promising role for the application of vicarious dissonance theory to relevant societal issues and for further understanding the relationship of dissonance and people's identification with their social groups.

  8. Vicarious Neural Processing of Outcomes during Observational Learning

    NARCIS (Netherlands)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on

  9. The shifting roles of dispersal and vicariance in biogeography.

    OpenAIRE

    Zink, R M; Blackwell-Rago, R C; Ronquist, F

    2000-01-01

    Dispersal and vicariance are often contrasted as competing processes primarily responsible for spatial and temporal patterns of biotic diversity. Recent methods of biogeographical reconstruction recognize the potential of both processes, and the emerging question is about discovering their relative frequencies. Relatively few empirical studies, especially those employing molecular phylogenies that allow a temporal perspective, have attempted to estimate the relative roles of dispersal and vic...

  10. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  11. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  12. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  13. Improving Classroom Learning by Collaboratively Observing Human Tutoring Videos while Problem Solving

    Science.gov (United States)

    Craig, Scotty D.; Chi, Michelene T. H.; VanLehn, Kurt

    2009-01-01

    Collaboratively observing tutoring is a promising method for observational learning (also referred to as vicarious learning). This method was tested in the Pittsburgh Science of Learning Center's Physics LearnLab, where students were introduced to physics topics by observing videos while problem solving in Andes, a physics tutoring system.…

  14. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  15. Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.

    Science.gov (United States)

    Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis

    2018-07-15

    The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, was reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
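Transfer entropy, used in this study to estimate directed information flow among cortical sources, can be illustrated with a minimal lag-1 plug-in estimator for binary sequences (MEG analyses operate on embedded continuous signals, so treat this only as a conceptual sketch):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Lag-1 transfer entropy TE(x -> y) in bits for two binary sequences:
    TE = sum over (y1, y0, x0) of p(y1, y0, x0) * log2(p(y1|y0, x0) / p(y1|y0)).
    Plug-in estimate; positive values indicate that x's past improves the
    prediction of y beyond y's own past."""
    x, y = list(x), list(y)
    n = len(y) - 1
    triples  = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y1, y0, x0)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))          # (y0, x0)
    pairs_yy = Counter(zip(y[1:], y[:-1]))           # (y1, y0)
    past_y   = Counter(y[:-1])                       # y0 alone
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / past_y[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# x drives y with a one-step delay, so TE(x -> y) is large (~1 bit) while
# TE(y -> x) stays near zero: the estimator recovers the direction of flow.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 2000).tolist()
y = [0] + x[:-1]
te_fwd = transfer_entropy(x, y)
te_bwd = transfer_entropy(y, x)
```

This asymmetry between forward and backward estimates is what allows the study to speak of reorganized "information processing flow" rather than mere co-activation.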

  16. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces the reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as the optical flow in the presence of modulated audio signals, little is known about the processing of the multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs during aging, we sought to gain some insight on whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive, and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  18. Long-term music training modulates the recalibration of audiovisual simultaneity.

    Science.gov (United States)

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed together with both increased music training and increased perceptual accuracy (i.e. the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  19. Hydroxylation of nitro-(pentafluorosulfanyl)benzenes via vicarious nucleophilic substitution of hydrogen

    Czech Academy of Sciences Publication Activity Database

    Beier, Petr; Pastýříková, Tereza

    2011-01-01

Vol. 52, No. 34 (2011), pp. 4392-4394 ISSN 0040-4039 R&D Projects: GA ČR GAP207/11/0344 Institutional research plan: CEZ:AV0Z40550506 Keywords: pentafluorosulfanyl group * vicarious nucleophilic substitution * hydroxylation Subject RIV: CC - Organic Chemistry Impact factor: 2.683, year: 2011

  20. Coping with Vicarious Trauma in the Aftermath of a Natural Disaster

    Science.gov (United States)

    Smith, Lauren E.; Bernal, Darren R.; Schwartz, Billie S.; Whitt, Courtney L.; Christman, Seth T.; Donnelly, Stephanie; Wheatley, Anna; Guillaume, Casta; Nicolas, Guerda; Kish, Jonathan; Kobetz, Erin

    2014-01-01

    This study documents the vicarious psychological impact of the 2010 earthquake in Haiti on Haitians living in the United States. The role of coping resources--family, religious, and community support--was explored. The results highlight the importance of family and community as coping strategies to manage such trauma.

  1. Vicarious Desensitization of Test Anxiety Through Observation of Video-taped Treatment

    Science.gov (United States)

    Mann, Jay

    1972-01-01

    Procedural variations were compared for a vicarious group treatment of test anxiety involving observation of videotapes depicting systematic desensitization of a model. The theoretical implications of the present study and the feasibility of using videotaped materials to treat test anxiety and other avoidance responses in school settings are…

  2. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  3. Gestión documental de la información audiovisual deportiva en las televisiones generalistas Documentary management of the sport audio-visual information in the generalist televisions

    Directory of Open Access Journals (Sweden)

    Jorge Caldera Serrano

    2005-01-01

The management of sport audiovisual information is analysed within the framework of the documentary information systems of national, regional and local television channels. To this end, the documentary chain through which sport audiovisual information passes is traced, analysing each of its parameters and presenting a series of recommendations and standards for the preparation of the sport audiovisual record. Sport audiovisual documentation does not differ greatly from the analysis of other types of television documents, so its management and diffusion are examined in greater depth and breadth, showing the informational flow within the system.

  4. Challenges and opportunities for audiovisual diversity in the Internet

    Directory of Open Access Journals (Sweden)

    Trinidad García Leiva

    2017-06-01

http://dx.doi.org/10.5007/2175-7984.2017v16n35p132 At the gates of the first quarter of the 21st century, nobody doubts that the value chain of the audiovisual industry has undergone important transformations. The digital era presents opportunities for cultural enrichment as well as new challenges. After presenting a general portrait of the audiovisual industries in the digital era, taking the Spanish case as a point of departure and paying attention to the players and logics in tension, this paper presents some notes on the advantages and disadvantages that exist for the diversity of audiovisual production, distribution and consumption online. We argue that the diversity of the audiovisual sector online is not guaranteed, because the formula that has made some players successful and powerful is based on walled-garden models for monetizing content (which, moreover, add restrictions to its reproduction and circulation by and among consumers). The final objective is to present some ideas about the elements that prevent the strengthening of the diversity of the audiovisual industry in the digital scenario. The barriers to overcome are classified as technological, financial, social, legal and political.

  5. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

Cutting-edge technologies such as wearable devices for controlling reactive audiovisual systems are rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture for the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in “As sete mulheres de Jeremias Epicentro”, a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.
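The OSC protocol named in the abstract is a simple binary format. As a rough sketch (the address `/glove/bend/index` and the single-float payload are invented for illustration, not taken from the WMTSensorGlove), an OSC 1.0 message can be assembled like this:

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC string: ASCII bytes, NUL-terminated, then padded
    with NULs to a multiple of 4 bytes, as the OSC 1.0 spec requires."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying one float32 argument:
    address pattern + type-tag string ',f' + big-endian float32."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

# Hypothetical example: encode one flex-sensor reading.
packet = osc_message("/glove/bend/index", 0.42)
```

The resulting bytes would then be sent over UDP to whatever audiovisual engine is listening for OSC; in practice a library such as python-osc would handle this encoding.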

  6. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

…investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is specific to speech perception. We further ask whether audiovisual integration occurs in a single processing stage or in multiple processing stages.

  7. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05), whereas audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
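CDF-based analyses of redundant-signals data like this are commonly summarised with Miller's race-model inequality: if the cumulative distribution of bimodal reaction times exceeds the bound min(1, P_A(t) + P_V(t)) at some latency, facilitation beyond a mere race between modalities, i.e. integration, is inferred. A minimal sketch of that computation (toy data, not the authors' exact pipeline):

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical CDF of reaction times, evaluated on a latency grid (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, grid):
    """Bimodal CDF minus Miller's race-model bound min(1, P_A(t) + P_V(t)).
    Positive values indicate facilitation beyond the race model,
    i.e. evidence for audiovisual integration."""
    bound = np.minimum(1.0, ecdf(rt_a, grid) + ecdf(rt_v, grid))
    return ecdf(rt_av, grid) - bound

# Toy reaction times (ms); the bimodal responses are fastest.
grid = np.linspace(100, 600, 6)
violation = race_model_violation([300, 350, 400], [320, 360, 420],
                                 [250, 280, 300], grid)
```

In real analyses the grid is finer and the CDFs are computed per participant before group statistics.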

  8. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna

    2018-02-01

Numerous studies have focused on the diversity of audiovisual integration between younger and older adults. However, consecutive trends in audiovisual integration throughout life are still unclear. In the present study, to clarify audiovisual integration characteristics in middle-aged adults, we instructed younger and middle-aged adults to conduct an auditory/visual stimuli discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented on the left or right hemispace of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration was attenuated in middle-aged adults and further confirmed age-related decline in information processing.

  9. Both Direct and Vicarious Experiences of Nature Affect Children’s Willingness to Conserve Biodiversity

    Directory of Open Access Journals (Sweden)

    Masashi Soga

    2016-05-01

Children are becoming less likely to have direct contact with nature. This ongoing loss of human interactions with nature, the extinction of experience, is viewed as one of the most fundamental obstacles to addressing global environmental challenges. However, the consequences for biodiversity conservation have been examined very little. Here, we conducted a questionnaire survey of elementary schoolchildren and investigated effects of the frequency of direct experiences (participating in nature-based activities) and vicarious experiences of nature (reading books or watching TV programs about nature and talking about nature with parents or friends) on their affective attitudes (individuals’ emotional feelings) toward, and willingness to conserve, biodiversity. A total of 397 children participated in the surveys in Tokyo. Children’s affective attitudes and willingness to conserve biodiversity were positively associated with the frequency of both direct and vicarious experiences of nature. Path analysis showed that effects of direct and vicarious experiences on children’s willingness to conserve biodiversity were mediated by their affective attitudes. This study demonstrates that children who frequently experience nature are likely to develop greater emotional affinity to and support for protecting biodiversity. We suggest that children should be encouraged to experience nature and be provided with various types of these experiences.

  10. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Social work in oncology-managing vicarious trauma-the positive impact of professional supervision.

    Science.gov (United States)

    Joubert, Lynette; Hocking, Alison; Hampson, Ralph

    2013-01-01

This exploratory study focused on the experience and management of vicarious trauma in a team of social workers (N = 16) at a specialist cancer hospital in Melbourne. Respondents completed the Traumatic Stress Institute Belief Scale (TSIBS), the Professional Quality of Life Scale (ProQOL), and participated in four focus groups. The results from the TSIBS and ProQOL scales confirm that stress is associated with the social work role within a cancer service, as demonstrated by the high stress-related scores. However, at the same time, the results indicated a high level of satisfaction, which acted as a mitigating factor. The study also highlighted the importance of supervision and management support. A model for clinical social work supervision is proposed to reduce the risks associated with vicarious trauma.

  12. Supporting Reflective Practices in Social Change Processes with the Dynamic Learning Agenda: An Example of Learning about the Process towards Disability Inclusive Development

    Science.gov (United States)

    van Veen, Saskia C.; de Wildt-Liesveld, Renée; Bunders, Joske F. G.; Regeer, Barbara J.

    2014-01-01

    Change processes are increasingly seen as the solution to entrenched (social) problems. However, change is difficult to realise while dealing with multiple actors, values, and approaches. (Inter)organisational learning is seen as a way to facilitate reflective practices in social change that support emergent changes, vicarious learning, and…

  13. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

The management of sport audio-visual information is analysed within the framework of the documentary information systems of national, regional and local television channels. To this end, the documentary chain through which sport audio-visual information passes is traced, analysing each of its parameters and presenting a series of recommendations and standards for the preparation of the sport audio-visual record. Sport audio-visual documentation does not differ greatly from the analysis of other televised documentary types…

  14. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  15. Testosterone and estrogen impact social evaluations and vicarious emotions: A double-blind placebo-controlled study.

    Science.gov (United States)

    Olsson, Andreas; Kopsida, Eleni; Sorjonen, Kimmo; Savic, Ivanka

    2016-06-01

The abilities to "read" other peoples' intentions and emotions, and to learn from their experiences, are critical to survival. Previous studies have highlighted the role of sex hormones, notably testosterone and estrogen, in these processes. Yet it is unclear how these hormones affect social cognition and emotion when administered acutely. In the present double-blind placebo-controlled study, we administered an acute exogenous dose of testosterone or estrogen to healthy female and male volunteers, respectively, with the aim of investigating the effects of these steroids on social-cognitive and emotional processes. Following hormonal and placebo treatment, participants made (a) facial dominance judgments, (b) mental state inferences (Reading the Mind in the Eyes Test), and (c) learned aversive associations through watching others' emotional responses (observational fear learning [OFL]). Our results showed that testosterone administration to females enhanced ratings of facial dominance but diminished their accuracy in inferring mental states. In men, estrogen administration resulted in an increase in emotional (vicarious) reactivity when watching a distressed other during the OFL task. Taken together, these results suggest that sex hormones affect social-cognitive and emotional functions at several levels, linking our results to neuropsychiatric disorders in which these functions are impaired. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Vicariance or long-distance dispersal: historical biogeography of the pantropical subfamily Chrysophylloideae (Sapotaceae)

    Czech Academy of Sciences Publication Activity Database

    Bartish, Igor; Antonelli, A.; Richardson, J. E.; Swenson, U.

    2011-01-01

    Roč. 38, č. 1 (2011), s. 177-190 ISSN 0305-0270 Institutional research plan: CEZ:AV0Z60050516 Keywords : molecular dating * Neotropics * vicariance Subject RIV: EF - Botanics Impact factor: 4.544, year: 2011

  17. Narrativa audiovisual. Estrategias y recursos [Reseña

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  18. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
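The residual-motion measure described in the abstract (the standard deviation of the respiratory signal inside the gating window) can be sketched for displacement-based gating as follows; the mapping from duty cycle to a displacement threshold is a simplifying assumption for illustration, not the study's clinical software:

```python
import numpy as np

def residual_motion(trace, duty_cycle):
    """Displacement-based gating: treat the beam as 'on' for the fraction
    `duty_cycle` of samples with the lowest displacement (end-exhale),
    and report residual motion as the standard deviation of the signal
    inside that gating window."""
    trace = np.asarray(trace, dtype=float)
    threshold = np.quantile(trace, duty_cycle)  # gate on lowest displacements
    gated = trace[trace <= threshold]
    return float(np.std(gated))

# Simulated respiratory displacement: sinusoidal breathing at 15 breaths/min.
t = np.linspace(0.0, 60.0, 6000)
trace = np.sin(2 * np.pi * 0.25 * t)
narrow = residual_motion(trace, 0.3)  # 30% duty cycle
wide = residual_motion(trace, 0.5)    # 50% duty cycle
```

Consistent with the abstract's conclusion, widening the duty cycle admits more of the breathing excursion into the gate and so increases residual motion.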

  19. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive… from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception, prior to audiovisual integration, and that the effect propagates through audiovisual integration…

  20. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  1. Gestión documental de la información audiovisual deportiva en las televisiones generalistas

    Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Zapico Alonso

    2005-01-01

The management of sport audiovisual information is analysed within the framework of the documentary information systems of national, regional and local television channels. To this end, the documentary chain through which sport audiovisual information passes is traced, analysing each of its parameters and presenting a series of recommendations and standards for the preparation of the sport audiovisual record. Evidently, sport audiovisual documen…

  2. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    Science.gov (United States)

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  3. Shedding light on our audiovisual heritage: perspectives to emphasise CERN Digital Memory

    CERN Document Server

    Salvador, Mathilde Estelle

    2017-01-01

This work aims to answer the question of how to add value to CERN’s audiovisual heritage available on the CERN Document Server. In other words, how to make more visible to the scientific community and the general public what is hidden and classified: namely CERN’s archives, and more precisely the audiovisual ones, because of their creative potential. Rather than focusing on their scientific and technical value, we will analyse their artistic and attractive power. In fact, we will see that any kind of archive can be intentionally or even accidentally artistic and exciting, and that it is possible to change our vision of a photo, a sound or a film. This process of enhancement is a virtuous circle, as it has an educational value and makes accessible scientific content that is normally out of range. However, the problem of how to magnify such archives remains. That is why we will try to learn from other digital memories in the world to see how they managed to highlight their own archives, in order to suggest new ways of enhancing au...

  4. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is…

  5. Feature Fusion Based Audio-Visual Speaker Identification Using Hidden Markov Model under Different Lighting Variations

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

The aim of this paper is to propose a feature fusion based Audio-Visual Speaker Identification (AVSI) system under varied illumination conditions. Among the different fusion strategies, feature-level fusion has been used for the proposed AVSI system, where a Hidden Markov Model (HMM) is used for learning and classification. Since the feature set contains richer information about the raw biometric data than any other level, integration at the feature level is expected to provide better authentication results. In this paper, both Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to form the audio feature vectors, and Active Shape Model (ASM) based appearance and shape facial features are concatenated to form the visual feature vectors. These combined audio and visual features are used for the feature fusion. To reduce the dimension of the audio and visual feature vectors, the Principal Component Analysis (PCA) method is used. The VALID audio-visual database is used to measure the performance of the proposed system, where four different illumination levels of lighting conditions are considered. Experimental results focus on the significance of the proposed audio-visual speaker identification system with various combinations of audio and visual features.
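The feature-level fusion plus PCA step described above reduces, in essence, to concatenating per-frame audio and visual vectors and projecting onto leading principal components. A minimal numpy sketch under assumed dimensions (the MFCC/LPCC/ASM extraction and the HMM classifier are omitted, and the feature counts are invented for illustration):

```python
import numpy as np

def fuse_features(audio_feats, visual_feats):
    """Feature-level fusion: frame-wise concatenation of audio feature
    vectors (e.g. MFCC + LPCC) with visual ones (e.g. ASM shape and
    appearance). Rows are frames, columns are feature dimensions."""
    return np.hstack([audio_feats, visual_feats])

def pca_reduce(X, n_components):
    """Centre the fused features and project them onto the top
    principal components (via SVD) to reduce dimensionality."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical dimensions: 24 audio + 16 visual coefficients per frame.
rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=(50, 24)), rng.normal(size=(50, 16)))
reduced = pca_reduce(fused, 10)  # would feed the HMM classifier
```

Fusing before dimensionality reduction, as here, lets PCA exploit cross-modal correlations that score-level fusion would never see.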

  6. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  7. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  8. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  9. Paired Peer Learning through Engineering Education Outreach

    Science.gov (United States)

    Fogg-Rogers, Laura; Lewis, Fay; Edmonds, Juliet

    2017-01-01

    Undergraduate education incorporating active learning and vicarious experience through education outreach presents a critical opportunity to influence future engineering teaching and practice capabilities. Engineering education outreach activities have been shown to have multiple benefits: increasing interest and engagement with science and…

  10. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  11. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    Science.gov (United States)

    Moirand, Sophie

    1974-01-01

    This article examines the usefulness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  12. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) differs between a selective attention condition and a divided attention condition. We designed a visual and/or auditory detection task comprising three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that responses to bimodal audiovisual stimuli were faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. In the visual-selective attention condition, however, no significant difference in response speed was found between unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention, indicating that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficits, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
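
    The behavioral facilitation effect above (faster responses to bimodal than unimodal stimuli) is commonly summarized by comparing mean reaction times, and genuine multisensory integration is often tested against Miller's race-model inequality. A hypothetical sketch with simulated reaction times (the distributions are invented, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated reaction times (ms); bimodal responses are fastest,
    # mimicking the facilitation reported under divided attention.
    rt_auditory    = rng.normal(420, 60, 1000)
    rt_visual      = rng.normal(440, 60, 1000)
    rt_audiovisual = rng.normal(370, 55, 1000)

    def cdf(rts, t):
        """Empirical probability of having responded by time t."""
        return np.mean(rts <= t)

    # Miller's race-model inequality: integration is inferred at latencies
    # where P(RT_av <= t) exceeds P(RT_a <= t) + P(RT_v <= t).
    ts = np.arange(250, 600, 10)
    violation = [cdf(rt_audiovisual, t)
                 - min(1.0, cdf(rt_auditory, t) + cdf(rt_visual, t))
                 for t in ts]

    print(rt_audiovisual.mean(), rt_auditory.mean(), rt_visual.mean())
    print(max(violation))  # positive values indicate a race-model violation
    ```

    A mean-RT advantage alone is compatible with a race between independent channels; the positive inequality violation at fast latencies is what licenses an integration interpretation.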

  13. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos (Women and the audiovisual industry today: regression, experiment and new narrative models)

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

    Full Text Available This article analyses audiovisual art practices in the contemporary context. It first describes the current regression in the role of women artists' audiovisual practices: women have little or no presence in the audiovisual industry as producers, filmmakers or executives, a condition that inevitably reconstitutes and reinforces traditional gender stereotypes. The article then turns to the feminist audiovisual practices of the nineteen seventies and eighties, when taking up the camera became an absolute necessity, not only to give voice to many women but also to reinscribe absent discourses and to establish a critical discourse on cultural representation. It also analyses how, from the nineteen nineties onward, these practices have explored new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an "expanded field" of exhibition. Finally, the article points to the relationship between feminist audiovisual practices and the complex territory of globalization and the information society: the narration of local experience has found in the audiovisual medium a privileged means of addressing questions of difference, identity, race and ethnicity. (Text is in Spanish.)

  14. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    Science.gov (United States)

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs), along with behavioral language testing, were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices in LLI children after training.

  15. Prácticas de producción audiovisual universitaria reflejadas en los trabajos presentados en la muestra audiovisual universitaria Ventanas 2005-2009

    Directory of Open Access Journals (Sweden)

    Maria Urbanczyk

    2011-01-01

    Full Text Available This article presents the results of research on university audiovisual production in Colombia, based on the works submitted to the Ventanas university audiovisual showcase between 2005 and 2009. The study of these works sought to cover, as completely as possible, the audiovisual production process carried out by university students, from the birth of an idea to the final product, its circulation and its socialization. The most recurrent themes were found to be violence and feelings, reflected through different genres, aesthetic treatments and conceptual approaches. Given the absence of research legitimizing the knowledge produced in the classroom in the audiovisual field in Colombia, this study aims to open a path toward demonstrating the contribution young people make to the consolidation of a national narrative and to the preservation of the country's memory. (Text is in Spanish.)

  16. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  17. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process is this audiovisual interaction related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  18. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  19. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  20. A Tutorial Task and Tertiary Courseware Model for Collaborative Learning Communities

    Science.gov (United States)

    Newman, Julian; Lowe, Helen; Neely, Steve; Gong, Xiaofeng; Eyers, David; Bacon, Jean

    2004-01-01

    RAED provides a computerised infrastructure to support the development and administration of Vicarious Learning in collaborative learning communities spread across multiple universities and workplaces. The system is based on the OASIS middleware for Role-based Access Control. This paper describes the origins of the model and the approach to…

  1. Aula virtual y presencial en aprendizaje de comunicación audiovisual y educación Virtual and Real Classroom in Learning Audiovisual Communication and Education

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2010-10-01

    Full Text Available The mixed model of teaching-learning intends to use information and communication technologies (ICTs) to guarantee an education better adjusted to the European Space for Higher Education (ESHE). The following research objectives were formulated: 1) To find out the assessment made by teacher-training college students of the WebCT virtual classroom as an aid to face-to-face teaching. 2) To know the advantages of the use of WebCT and ICTs by students in the case study «Values and counter-values transmitted by television series watched by children and adolescents». The research was carried out with a sample of 205 students of the University of La Rioja enrolled in the course "Technologies Applied to Education". Qualitative and quantitative content analysis was used for the objective, systematic and quantitative description of the manifest content of the documents. The results obtained show that the communication, content and assessment tools are rated favourably by the students. The conclusion is that WebCT and ICTs support the methodological innovation of the ESHE based on student-centred learning. The students demonstrate their audiovisual competence both in analysing values and in expressing themselves through audiovisual documents in multimedia formats, and they bring a new, innovative and creative sense to the educational use of television series. (Text is in Spanish.)

  2. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate…

  3. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  4. Hippocampus, delay discounting, and vicarious trial-and-error.

    Science.gov (United States)

    Bett, David; Murdoch, Lauren H; Wood, Emma R; Dudchenko, Paul A

    2015-05-01

    In decision-making, an immediate reward is usually preferred to a delayed reward, even if the latter is larger. We tested whether the hippocampus is necessary for this form of temporal discounting, and for vicarious trial-and-error at the decision point. Rats were trained on a recently developed, adjustable delay-discounting task (Papale et al. (2012) Cogn Affect Behav Neurosci 12:513-526), which featured a choice between a small, nearly immediate reward, and a larger, delayed reward. Rats then received either hippocampus or sham lesions. Animals with hippocampus lesions adjusted the delay for the larger reward to a level similar to that of sham-lesioned animals, suggesting a similar valuation capacity. However, the hippocampus lesion group spent significantly longer investigating the small and large rewards in the first part of the sessions, and were less sensitive to changes in the amount of reward in the large reward maze arm. Both sham- and hippocampus-lesioned rats showed a greater amount of vicarious trial-and-error on trials in which the delay was adjusted. In a nonadjusting version of the delay discounting task, animals with hippocampus lesions showed more variability in their preference for a larger reward that was delayed by 10 s compared with sham-lesioned animals. To verify the lesion behaviorally, rats were subsequently trained on a water maze task, and rats with hippocampus lesions were significantly impaired compared with sham-lesioned animals. The findings on the delay discounting tasks suggest that damage to the hippocampus may impair the detection of reward magnitude. © 2014 Wiley Periodicals, Inc.

  5. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers … that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  6. Rhythmic synchronization tapping to an audio-visual metronome in budgerigars.

    Science.gov (United States)

    Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa

    2011-01-01

    In all ages and countries, music and dance have constituted a central part of human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.
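
    Entrainment to an isochronous metronome, as in the tapping task above, is commonly quantified with circular statistics: each tap is converted to a phase relative to the beat, and the length of the mean resultant vector indexes phase locking. A small illustrative sketch; the beat interval and the tap times are invented, not the study's data:

    ```python
    import numpy as np

    def resultant_vector_length(tap_times, interval):
        """Mean resultant length R of tap phases relative to an isochronous
        beat of the given interval (R close to 1: tight phase locking,
        R near 0: taps unrelated to the beat)."""
        phases = 2.0 * np.pi * (np.asarray(tap_times) % interval) / interval
        return np.abs(np.exp(1j * phases).mean())

    interval = 0.25  # 240 beats per minute, in the fast range (seconds)
    beats = interval * np.arange(40)

    rng = np.random.default_rng(1)
    synced_taps = beats + rng.normal(0.0, 0.01, beats.size)  # small jitter
    random_taps = rng.uniform(0.0, 10.0, beats.size)         # unrelated taps

    print(resultant_vector_length(synced_taps, interval))  # near 1
    print(resultant_vector_length(random_taps, interval))  # much smaller
    ```

    The resultant length can then be tested against chance (e.g. with a Rayleigh test) per tempo condition to decide whether an animal entrained at that tempo.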

  7. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

… effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine…

  8. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  9. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    Directory of Open Access Journals (Sweden)

    Sevtap GÜNAY KÖPRÜLÜ

    2016-04-01

    Full Text Available Audiovisual translation, dating back to the silent film era, is a special translation method developed for translating the movies and programs shown on TV and in cinemas. Hence, in the beginning, the term "film translation" was used for this type of translation. Owing to the growing number of audiovisual texts, it has attracted the interest of scholars and has come to be studied within translation studies. In our country, too, the concept of film translation was used for this area, but recently the concept of audiovisual translation has been adopted, especially in the scientific field, since it encompasses not only films but all audiovisual communication tools. This study analyses the aspects that the translator should take into consideration during the audiovisual translation process within the framework of the source text, the translated text, the film, and technical knowledge. The study shows that, apart from linguistic and paralinguistic factors, there are further factors that must be considered carefully, as they can influence the quality of the translation, and that these factors require technical knowledge in translation. In this sense, audiovisual translation is approached from a different angle than in previous research.

  10. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must be...

  11. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED

    Directory of Open Access Journals (Sweden)

    Esteban Vázquez-Cano

    2013-07-01

    Full Text Available This article analyzes a system of virtual attendance, called "AVIP" (AudioVisual over Internet Protocol), at the Spanish Open University (UNED). UNED, the largest open university in Europe, is the pioneer of distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors, in Spain and around the world. In line with other universities, UNED is redefining many of its academic processes to meet the new requirements of the European Higher Education Area (EHEA). Since its inception, more than 30 years ago, the methodology chosen by UNED has been blended learning. Today, this university combines face-to-face tutorial sessions with new methodological proposals mediated by ICT. Through a quantitative methodology, students' and tutors' perceptions of the new model of virtual tutoring, called AVIP Classrooms, were analyzed. The results show that the new model greatly improves the orientation and teaching methodology of tutors. However, it requires training and new approaches to provide a more collaborative and participatory environment for students.

  12. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  13. Testing audiovisual comprehension tasks with questions embedded in videos as subtitles: a pilot multimethod study

    OpenAIRE

    Casañ Núñez, Juan Carlos

    2017-01-01

    [EN] Listening, watching, reading and writing simultaneously in a foreign language is very complex. This paper is part of a wider research project that explores the use of audiovisual comprehension questions imprinted on the video image in the form of subtitles and synchronized with the relevant fragments, for the purposes of language learning and testing. Compared to viewings where the comprehension activity is available only on paper, this innovative methodology may provide some benefits. Among them, ...

  14. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    Science.gov (United States)

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  15. The Impact of Audiovisual Feedback on the Learning Outcomes of a Remote and Virtual Laboratory Class

    Science.gov (United States)

    Lindsay, E.; Good, M.

    2009-01-01

    Remote and virtual laboratory classes are an increasingly prevalent alternative to traditional hands-on laboratory experiences. One of the key issues with these modes of access is the provision of adequate audiovisual (AV) feedback to the user, which can be a complicated and resource-intensive challenge. This paper reports on a comparison of two…

  16. Attitude change as a function of the observation of vicarious reinforcement and friendliness

    OpenAIRE

    Stocker-Kreichgauer, Gisela

    1982-01-01

    Attitude change as a function of the observation of vicarious reinforcement and friendliness : hostility in a debate / Lutz von Rosenstiel ; Gisela Stocker- Kreichgauer. - In: Group decision making / ed. by Gisela Stocker-Kreichgauer ... - London u.a. : Acad. Press, 1982. - S. 241-255. - (European monographs in social psychology ; 25)

  17. Audiovisual consumption and its social logics on the web

    OpenAIRE

    Rose Marie Santini; Juan C. Calvi

    2013-01-01

    This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the global Internet traffic of audiovisual files since 2008 to identify the formats and the modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices which are dominant among users and their relation to what we designate as "Internet culture".

  18. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  19. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. Second-language (L2) American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests a concomitant increase in phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. No experience required: Violent crime and anticipated, vicarious, and experienced racial discrimination.

    Science.gov (United States)

    Herda, Daniel; McCarthy, Bill

    2018-02-01

    There is a growing body of evidence linking racial discrimination and juvenile crime, and a number of theories explain this relationship. In this study, we draw on one popular approach, Agnew's general strain theory, and extend prior research by moving from a focus on experienced discrimination to consider two other forms, anticipated and vicarious discrimination. Using data on black, white, and Hispanic youth from the Project on Human Development in Chicago Neighborhoods (PHDCN), we find that experienced, anticipated, and, to a lesser extent, vicarious discrimination significantly predict violent crime independent of a set of neighborhood, parental, and individual-level controls, including prior violent offending. Additional analyses on the specific contexts of discrimination reveal that violence is associated with the anticipation of police discrimination. The effects tend to be larger for African American than Hispanic youth, but the differences are not statistically significant. These findings support the thesis that, like other strains, discrimination may not have to be experienced directly to influence offending. Copyright © 2017. Published by Elsevier Inc.

  1. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    Science.gov (United States)

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, remains time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate the conditions under which a feature map yields an asymptotically equivalent convergence point of the estimated parameters; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
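    For hidden Markov models, the dynamic-programming likelihood computation this abstract refers to is the classic forward algorithm. A minimal NumPy sketch (function name and toy parameter layout are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def hmm_log_likelihood(obs, pi, A, B):
        """Forward algorithm: log P(obs) for a discrete-emission HMM.

        obs: sequence of observed symbol indices
        pi:  (K,)   initial state probabilities
        A:   (K, K) transition matrix, A[i, j] = P(next state j | state i)
        B:   (K, M) emission matrix,   B[i, m] = P(symbol m | state i)
        """
        alpha = pi * B[:, obs[0]]      # joint prob. of first symbol and each state
        log_lik = 0.0
        for t in range(1, len(obs)):
            # Rescale at each step to avoid underflow on long sequences,
            # accumulating the log of the scale factors.
            scale = alpha.sum()
            log_lik += np.log(scale)
            alpha = (alpha / scale) @ A * B[:, obs[t]]
        log_lik += np.log(alpha.sum())
        return log_lik
    ```

    The cost is O(T K^2) per sequence, which is exactly what makes repeated likelihood evaluation (e.g. during model selection) expensive and motivates the simplified feature spaces studied in the paper.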

  2. [Audio-visual communication in the history of psychiatry].

    Science.gov (United States)

    Farina, B; Remoli, V; Russo, F

    1993-12-01

    The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings to the first daguerreotype prints, and on to cinematography and modern audiovisual systems, they observe an increasing diffusion of new communication techniques in psychiatry and describe the use of the different techniques in psychiatric practice. The article ends with a brief review of the current applications of audiovisual media in therapy, training, teaching, and research.

  3. Mobile Guide System Using Problem-Solving Strategy for Museum Learning: A Sequential Learning Behavioural Pattern Analysis

    Science.gov (United States)

    Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.

    2010-01-01

    Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…

  4. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  5. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  6. Microscale vicariance and diversification of Western Balkan caddisflies linked to karstification.

    Science.gov (United States)

    Previšić, Ana; Schnitzler, Jan; Kučinić, Mladen; Graf, Wolfram; Ibrahimi, Halil; Kerovec, Mladen; Pauls, Steffen U

    2014-03-01

    The karst areas in the Dinaric region of the Western Balkan Peninsula are a hotspot of freshwater biodiversity. Many investigators have examined diversification of the subterranean freshwater fauna in these karst systems. However, diversification of surface-water fauna remains largely unexplored. We assessed local and regional diversification of surface-water species in karst systems and asked whether patterns of population differentiation could be explained by dispersal-diversification processes or allopatric diversification following karst-related microscale vicariance. We analyzed mitochondrial cytochrome c oxidase subunit I (mtCOI) sequence data of 4 caddisfly species (genus Drusus) in a phylogeographic framework to assess local and regional population genetic structure and Pliocene/Pleistocene history. We used BEAST software to assess the timing of intraspecific diversification of the target species. We compared climate envelopes of the study species and projected climatically suitable areas during the last glacial maximum (LGM) to assess differences in the species' climatic niches and infer potential LGM refugia. The haplotype distribution of the 4 species (324 individuals from 32 populations) was characterized by strong genetic differentiation with few haplotypes shared among populations (16%) and deep divergence among populations of the 3 endemic species, even at local scales. Divergence among local populations of endemics often exceeded divergence among regional and continental clades of the widespread D. discolor. Major divergences among regional populations dated to 2.0 to 0.5 Mya. Species distribution model projections and genetic structure suggest that the endemic species persisted in situ and diversified locally throughout multiple Pleistocene climate cycles. The pattern for D. discolor was different and consistent with multiple invasions into the region. Patterns of population genetic structure and diversification were similar for the 3 regional

  7. Vicariously touching products through observing others' hand actions increases purchasing intention, and the effect of visual perspective in this process: An fMRI study.

    Science.gov (United States)

    Liu, Yi; Zang, Xuelian; Chen, Lihan; Assumpção, Leonardo; Li, Hong

    2018-01-01

    The growth of online shopping increases consumers' dependence on vicarious sensory experiences, such as observing others touching products in commercials. However, empirical evidence on whether observing others' sensory experiences increases purchasing intention is still scarce. In the present study, participants observed others interacting with products in the first- or third-person perspective in video clips, and their neural responses were measured with functional magnetic resonance imaging (fMRI). We investigated (1) whether and how vicariously touching certain products affected purchasing intention, and the neural correlates of this process; and (2) how visual perspective interacts with vicarious tactility. Vicarious tactile experiences were manipulated by hand actions touching or not touching the products, while the visual perspective was manipulated by showing the hand actions either in first- or third-person perspective. During the fMRI scanning, participants watched the video clips and rated their purchasing intention for each product. The results showed that observing others touching (vs. not touching) the products increased purchasing intention, with vicarious neural responses found in the mirror neuron system (MNS) and lateral occipital complex (LOC). Moreover, stronger neural activity in the MNS was associated with higher purchasing intention. The effects of visual perspective were found in the left superior parietal lobule (SPL), while the interaction of tactility and visual perspective was shown in the precuneus and precuneus-LOC connectivity. The present study provides the first evidence that vicariously touching a given product increases purchasing intention, and that neural activities in bilateral MNS, LOC, left SPL, and precuneus are involved in this process. Hum Brain Mapp 39:332-343, 2018. © 2017 Wiley Periodicals, Inc.

  8. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  9. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster, by 57 ms, than their reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  10. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  11. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.

  12. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent work in the field of audiovisual speech and, more specifically, techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measures of correspondence between audio and visual speech. Finally, the use of a synchrony measure for biometric identity verification based on talking faces is evaluated experimentally on the BANCA database.
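    As a toy illustration of "measuring the level of correspondence between audio and visual speech", one of the simplest measures used in this literature is the correlation between frame-aligned audio and visual feature streams. A minimal sketch (the feature choices and names are assumptions for illustration, not the paper's specific method):

    ```python
    import numpy as np

    def synchrony_score(audio_feat, visual_feat):
        """Toy audiovisual correspondence measure: Pearson correlation between
        a per-frame audio feature (e.g. log energy) and a per-frame visual
        feature (e.g. mouth-opening area), assumed already frame-aligned.

        Returns a score in [-1, 1]; values near 1 indicate that the two
        streams rise and fall together, i.e. the face plausibly produced
        the audio."""
        a = (audio_feat - audio_feat.mean()) / audio_feat.std()
        v = (visual_feat - visual_feat.mean()) / visual_feat.std()
        # Mean of the product of z-scored streams = Pearson r.
        return float(np.mean(a * v))
    ```

    In a verification setting such a score can be thresholded to reject, for example, a replayed audio track dubbed over a non-matching face video.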

  13. Audiovisual consumption and its social logics on the web

    Directory of Open Access Journals (Sweden)

    Rose Marie Santini

    2013-06-01

    Full Text Available This article analyzes the social logics underlying audiovisualconsumption on digital networks. We retrieved some data on the Internet globaltraffic of audiovisual files since 2008 to identify formats, modes of distributionand consumption of audiovisual contents that tend to prevail on the Web. Thisresearch shows the types of social practices which are dominant among usersand its relation to what we designate as “Internet culture”.

  14. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical......, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing...

  15. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
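    The capacity measure cited above (Townsend and Nozawa, 1995) compares the cumulative hazard of response times on redundant (audiovisual) trials with the sum of the single-modality hazards. A minimal sketch using empirical survivor-function estimates (function and variable names are illustrative):

    ```python
    import numpy as np

    def cumulative_hazard(rts, t):
        """H(t) = -log S(t), where S(t) is the empirical survivor function
        (the proportion of response times longer than t)."""
        rts = np.asarray(rts, dtype=float)
        return -np.log(np.mean(rts > t))

    def capacity_coefficient(rt_av, rt_a, rt_v, t):
        """Capacity coefficient at time t for a redundant-signals design:

            C(t) = H_AV(t) / (H_A(t) + H_V(t))

        C(t) > 1 indicates efficient (super-capacity) audiovisual
        integration; C(t) < 1 indicates inefficient integration."""
        return cumulative_hazard(rt_av, t) / (
            cumulative_hazard(rt_a, t) + cumulative_hazard(rt_v, t))
    ```

    In the study's terms, low auditory S/N ratios yielded C(t) > 1 (efficient integration) while a clear auditory signal yielded C(t) < 1.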

  16. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the

  17. The Effects of Audio-Visual Recorded and Audio Recorded Listening Tasks on the Accuracy of Iranian EFL Learners' Oral Production

    Science.gov (United States)

    Drood, Pooya; Asl, Hanieh Davatgari

    2016-01-01

    The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio-recorded materials have been widely used by teachers and…

  18. Vicarious Racism: A Qualitative Analysis of Experiences with Secondhand Racism in Graduate Education

    Science.gov (United States)

    Truong, Kimberly A.; Museus, Samuel D.; McGuire, Keon M.

    2016-01-01

    In this article, the authors examine the role of vicarious racism in the experiences of doctoral students of color. The researchers conducted semi-structured individual interviews with 26 doctoral students who self-reported experiencing racism and racial trauma during their doctoral studies. The analysis generated four themes that detail the…

  19. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information that was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance, and a dedicated operational team dealt with the logistics of the project, including ethics, finance, scriptwriting, filming, editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  20. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  1. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez

    2012-07-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today’s digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass-media omnipresence. Neither the education system, civic associations, nor the media themselves have promoted the audiovisual skills needed to make people critically competent media viewers. This study aims to provide an updated conceptualization of the “audiovisual skill” in this digital environment and to transpose it onto a specific interventional environment, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments, and devise training programmes for the various sectors involved.

  2. Situación actual de la traducción audiovisual en Colombia

    Directory of Open Access Journals (Sweden)

    Jeffersson David Orrego Carmona

    2010-05-01

    Objectives: this article has two aims: to present a general overview of the current audiovisual translation market in Colombia and to highlight the importance of developing studies in this area. Method: the methodology included researching and reading literature related to the topic, administering surveys to different groups involved in audiovisual translation, and the subsequent analysis. Results: these showed the general lack of awareness of this work and the preferences of the surveyed groups regarding audiovisual translation modalities. A marked preference for subtitling was observed, for reasons particular to each group. Conclusions: Colombian translators need training in audiovisual translation to meet market demands, and the importance of developing more in-depth studies focused on the development of audiovisual translation in Colombia is highlighted.

  3. The Impact of Enactive /Vicarious pre-reading Tasks on Reading Comprehension and Self-Efficacy of Iranian Pre-Intermediate EFL Learners

    Directory of Open Access Journals (Sweden)

    Arezoo Eshghipour

    2016-01-01

    This study investigated the effect of enactive pre-reading tasks on Iranian pre-intermediate EFL learners’ reading comprehension and self-efficacy. Moreover, it explored whether Iranian pre-intermediate EFL learners’ reading comprehension and self-efficacy are influenced by vicarious pre-reading tasks. The required data were gathered through a reading comprehension passage entailing 20 comprehension questions and a 30-item self-efficacy questionnaire with 5-point Likert-scale response options. A total of 66 participants (34 in the enactive group and 32 in the vicarious group) took part in this study. The Pearson formula, an independent t-test, a paired t-test, and the Mann-Whitney U test were used to analyze the data. Based on the findings of the study, enactive pre-reading tasks played a key role in the Iranian pre-intermediate EFL learners’ reading comprehension ability. Moreover, it was found that vicarious pre-reading tasks served an important role in the Iranian pre-intermediate EFL learners’ self-efficacy.

  4. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

    In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre...... and museum studies, existing case studies, and real-life observations, the suggested framework instead stresses particular characteristics of contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication...

  5. Benefits for Voice Learning Caused by Concurrent Faces Develop over Time.

    Science.gov (United States)

    Zäske, Romi; Mühl, Constanze; Schweinberger, Stefan R

    2015-01-01

    Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers' faces. This effect of audiovisual integration is most pronounced for voices combined with dynamic articulating faces. However, it is unclear if learning unfamiliar voices also benefits from audiovisual face-voice integration or, alternatively, is hampered by attentional capture of faces, i.e., "face-overshadowing". In six study-test cycles we compared the recognition of newly-learned voices following unimodal voice learning vs. bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracies significantly increased for bimodal learning across study-test cycles while remaining stable for unimodal learning, as reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces compared to voices only may result from visual search for faces during memory retrieval. A general decrease of reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.

  6. Use of audiovisual resources in a FlexQuest strategy on Radioactivity

    Directory of Open Access Journals (Sweden)

    Flávia Cristina Gomes Catunda de Vasconcelos

    2012-03-01

    This paper presents a study conducted in a private school in Recife, PE, Brazil, with 25 students in the first year of high school. One focus was to evaluate the implementation of the FlexQuest strategy in the teaching of radioactivity. The FlexQuest incorporates, within the WebQuest, Cognitive Flexibility Theory (TFC), a theory of teaching, learning, and knowledge representation that aims to propose strategies for the acquisition of advanced levels of knowledge. Using a qualitative approach, the interventions were analyzed through the landscape crossings that the students accomplished while carrying out the required tasks. The results revealed that this strategy incorporates audiovisual resources, and that these make learning possible provided the strategies are embedded in a constructivist approach to teaching and learning. In this sense, the introductory/stimulating level was perceived to be effective for understanding the applications of radioactivity, offering a tool based on real situations and enabling students to develop a critical eye on what is televised, including the study of radioactivity.

  7. Conflict between place and response navigation strategies: effects on vicarious trial and error (VTE) behaviors.

    Science.gov (United States)

    Schmidt, Brandy; Papale, Andrew; Redish, A David; Markus, Etan J

    2013-02-15

    Navigation can be accomplished through multiple decision-making strategies, using different information-processing computations. A well-studied dichotomy in these decision-making strategies compares hippocampal-dependent "place" and dorsal-lateral striatal-dependent "response" strategies. A place strategy depends on the ability to flexibly respond to environmental cues, while a response strategy depends on the ability to quickly recognize and react to situations with well-learned action-outcome relationships. When rats reach decision points, they sometimes pause and orient toward the potential routes of travel, a process termed vicarious trial and error (VTE). VTE co-occurs with neurophysiological information processing, including sweeps of representation ahead of the animal in the hippocampus and transient representations of reward in the ventral striatum and orbitofrontal cortex. To examine the relationship between VTE and the place/response strategy dichotomy, we analyzed data in which rats were cued to switch between place and response strategies on a plus maze. The configuration of the maze allowed for place and response strategies to work competitively or cooperatively. Animals showed increased VTE on trials entailing competition between navigational systems, linking VTE with deliberative decision-making. Even in a well-learned task, VTE was preferentially exhibited when a spatial selection was required, further linking VTE behavior with decision-making associated with hippocampal processing.

  8. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

In the developed world, the cultural value of the audiovisual media gained legitimacy and widening acceptance after World War II, and this is what Africa still requires. There are many problems in Africa, and because of this, activities such as the preservation of historical records, especially in the audiovisual media, are seen as ...

  9. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated whether this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differ only in featural content. Using both complex stimuli (audio-visual speech; Experiment 1) and simple stimuli (high- and low-pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  10. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    Science.gov (United States)

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants that are easily visually distracted and those who are not. To do so, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second experiment focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second one, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.

  11. Audiovisual semantic congruency during encoding enhances memory performance.

    Science.gov (United States)

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  12. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  13. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  14. Learning multimodal dictionaries.

    Science.gov (United States)

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrating, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is proposed as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to localize the sound source in the video effectively, even in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
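The core learning step described above, extracting a multimodal atom by solving a generalized eigenvector problem, can be sketched as follows. This is a deliberately simplified, hypothetical illustration on synthetic data (one atom, pre-aligned patches, identity constraint matrix), not the authors' exact algorithm:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Synthetic "audiovisual" training data: each example stacks an audio
# patch and a video-feature patch that share a common latent pattern.
latent_audio = np.sin(np.linspace(0, np.pi, 32))
latent_video = np.cos(np.linspace(0, np.pi, 16))
examples = []
for _ in range(200):
    a = 1.5 * latent_audio + 0.3 * rng.standard_normal(32)
    v = 1.5 * latent_video + 0.3 * rng.standard_normal(16)
    examples.append(np.concatenate([a, v]))
X = np.array(examples)                       # shape (200, 48)

# Generalized eigenproblem A w = lambda B w: maximize w'Aw / w'Bw, where
# A accumulates training correlations and B (identity in this toy case)
# stands in for the decorrelation constraint of the full algorithm.
A = X.T @ X / len(X)
B = np.eye(X.shape[1])
eigvals, eigvecs = eigh(A, B)                # ascending eigenvalues
atom = eigvecs[:, -1]                        # dominant generalized eigenvector
atom /= np.linalg.norm(atom)

# Split the joint atom back into its audio and video parts and check
# that each correlates with the latent pattern (up to sign).
audio_atom, video_atom = atom[:32], atom[32:]
corr_a = abs(np.corrcoef(audio_atom, latent_audio)[0, 1])
corr_v = abs(np.corrcoef(video_atom, latent_video)[0, 1])
print(round(corr_a, 2), round(corr_v, 2))
```

In the full method the atoms are additionally shift-invariant and successively decorrelated from previously learned atoms (via a nontrivial B); the sketch only shows why a single atom update reduces to an eigenproblem.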

  15. Vicarious absolute radiometric calibration of GF-2 PMS2 sensor using permanent artificial targets in China

    Science.gov (United States)

    Liu, Yaokai; Li, Chuanrong; Ma, Lingling; Wang, Ning; Qian, Yonggang; Tang, Lingli

    2016-10-01

GF-2, launched on August 19, 2014, is one of the high-resolution land-resource observing satellites in the China GF series. Evaluating the radiometric performance of its onboard optical panchromatic and multispectral (PMS2) sensor is very important for further application of the data, and vicarious absolute radiometric calibration is one of the most useful ways to monitor the radiometric performance of onboard optical sensors. In this study, the traditional reflectance-based method was used to radiometrically calibrate the PMS2 sensor of the GF-2 satellite using black, gray, and white reflective permanent artificial targets located at the AOE Baotou site in China. A vicarious field calibration campaign was carried out at the AOE-Baotou calibration site on 22 April 2016, and the absolute radiometric calibration coefficients were determined from in situ measured atmospheric parameters and the surface reflectance of the permanent artificial calibration targets. The TOA radiance of a selected desert area predicted with our calibration coefficients was compared with that obtained using the officially distributed coefficients. The comparison shows good consistency: the mean relative difference over the multispectral channels is less than 5%. An uncertainty analysis was also carried out, yielding a total uncertainty of 3.87% in the TOA radiance.
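The arithmetic at the heart of the reflectance-based method can be sketched briefly: the radiative-transfer-predicted top-of-atmosphere (TOA) radiance over a target, divided by the sensor's mean digital number (DN) over that target, gives the absolute calibration gain per band, and two coefficient sets are then compared via their mean relative difference. All numbers below are invented for the sketch; they are not GF-2 values.

```python
# Hypothetical reflectance-based calibration arithmetic (invented data).
bands = ["blue", "green", "red", "nir"]
predicted_toa_radiance = [85.2, 71.4, 55.8, 40.3]   # W m-2 sr-1 um-1 (assumed)
mean_dn = [512.0, 498.0, 430.0, 365.0]              # mean image DN over target (assumed)

# Gain per band: radiance = gain * DN, so gain = predicted radiance / DN.
gains = [L / dn for L, dn in zip(predicted_toa_radiance, mean_dn)]

# Compare with a second (e.g. officially distributed) coefficient set.
official_gains = [0.170, 0.145, 0.128, 0.113]       # assumed values
rel_diff = [abs(g - o) / o for g, o in zip(gains, official_gains)]
mean_rel_diff_pct = 100 * sum(rel_diff) / len(rel_diff)

for b, g, d in zip(bands, gains, rel_diff):
    print(f"{b}: gain={g:.4f}, diff={100 * d:.1f}%")
print(f"mean relative difference: {mean_rel_diff_pct:.1f}%")
```

With these made-up numbers the per-band differences stay within a few percent, i.e. within the sub-5% consistency the abstract reports for the real comparison.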

  16. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

http://dx.doi.org/10.5007/2175-7968.2012v1n29p145   Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  17. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create audiovisual...

  18. Explaining infant feeding: The role of previous personal and vicarious experience on attitudes, subjective norms, self-efficacy, and breastfeeding outcomes.

    Science.gov (United States)

    Bartle, Naomi C; Harvey, Kate

    2017-11-01

    Breastfeeding confers important health benefits to both infants and their mothers, but rates are low in the United Kingdom and other developed countries despite widespread promotion. This study examined the relationships between personal and vicarious experience of infant feeding, self-efficacy, the theory of planned behaviour variables of attitudes and subjective norm, and the likelihood of breastfeeding at 6-8 weeks post-natally. A prospective questionnaire study of both first-time mothers (n = 77) and experienced breastfeeders (n = 72) recruited at an antenatal clinic in South East England. Participants completed a questionnaire at 32 weeks pregnant assessing personal and vicarious experience of infant feeding (breastfeeding, formula-feeding, and maternal grandmother's experience of breastfeeding), perceived control, self-efficacy, intentions, attitudes (to breastfeeding and formula-feeding), and subjective norm. Infant feeding behaviour was recorded at 6-8 weeks post-natally. Multiple linear regression modelled the influence of vicarious experience on attitudes, subjective norm, and self-efficacy (but not perceived control) and modelled the influence of attitude, subjective norm, self-efficacy, and past experience on intentions to breastfeed. Logistic regression modelled the likelihood of breastfeeding at 6-8 weeks. Previous experience (particularly personal experience of breastfeeding) explained a significant amount of variance in attitudes, subjective norm, and self-efficacy. Intentions to breastfeed were predicted by subjective norm and attitude to formula-feeding and, in experienced mothers, self-efficacy. Breastfeeding at 6 weeks was predicted by intentions and vicarious experience of formula-feeding. Vicarious experience, particularly of formula-feeding, has been shown to influence the behaviour of first-time and experienced mothers both directly and indirectly via attitudes and subjective norm. Interventions that reduce exposure to formula

  19. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools necessary for the translator to perform his/her task satisfactorily, as well as pointing to future perspectives, we list sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  20. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with

  1. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known about the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and yet often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
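Signal detection analyses of the kind mentioned above rest on two standard quantities: sensitivity (d') and response bias (criterion c), both computed from hit and false-alarm rates. A minimal sketch, with made-up rates rather than the study's data:

```python
# Standard signal-detection measures: d' (sensitivity) and c (bias).
from statistics import NormalDist

def dprime_and_bias(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    z = NormalDist().inv_cdf               # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)     # perceptual sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Example (invented): a participant reports the illusory second flash on
# 84% of two-flash trials but on only 16% of one-flash trials.
d, c = dprime_and_bias(0.84, 0.16)
print(round(d, 2), round(c, 2))
```

Note that extreme rates (0 or 1) make the z-transform diverge; in practice a small correction (e.g. the log-linear rule) is applied before computing d'.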

  2. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  3. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced "online" effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual

  4. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats ( n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats ( n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given
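The just noticeable difference (JND) reported above is typically derived by fitting a psychometric function to the temporal-order-judgment data: a cumulative Gaussian is fitted to the proportion of "visual first" responses across SOAs, and the JND is taken as half the distance between the 25% and 75% points. A sketch on synthetic data (the SOA grid, trial counts, and parameters are assumed, not the study's):

```python
# Fit a cumulative-Gaussian psychometric function to synthetic
# temporal-order-judgment data and derive the JND from it.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    # P("visual first") vs SOA. Convention here: SOA = auditory onset
    # minus visual onset (ms), so larger SOA = visual more clearly first.
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-200, -100, -50, -10, 0, 10, 50, 100, 200], float)
true_pss, true_sigma = 15.0, 90.0          # assumed "ground truth"
rng = np.random.default_rng(1)
n_trials = 200                              # trials per SOA (assumed)
p_obs = rng.binomial(n_trials, psychometric(soas, true_pss, true_sigma)) / n_trials

(pss, sigma), _ = curve_fit(psychometric, soas, p_obs, p0=[0.0, 50.0])

# JND = half the 25%-to-75% interval of the fitted curve (= 0.674 * sigma).
jnd = (norm.ppf(0.75) - norm.ppf(0.25)) / 2 * sigma
print(round(pss, 1), round(jnd, 1))
```

With the assumed sigma of 90 ms the recovered JND lands in the tens-of-milliseconds range, the same order as the 77-122 ms individual JNDs reported in the abstract.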

  5. A pilot study of audiovisual family meetings in the intensive care unit.

    Science.gov (United States)

    de Havenon, Adam; Petersen, Casey; Tanana, Michael; Wold, Jana; Hoesch, Robert

    2015-10-01

    We hypothesized that virtual family meetings in the intensive care unit with conference calling or Skype videoconferencing would result in increased family member satisfaction and more efficient decision making. This is a prospective, nonblinded, nonrandomized pilot study. A 6-question survey was completed by family members after family meetings, some of which used conference calling or Skype by choice. Overall, 29 (33%) of the completed surveys came from audiovisual family meetings vs 59 (67%) from control meetings. The survey data were analyzed using hierarchical linear modeling, which did not find any significant group differences between satisfaction with the audiovisual meetings vs controls. There was no association between the audiovisual intervention and withdrawal of care (P = .682) or overall hospital length of stay (z = 0.885, P = .376). Although we do not report benefit from an audiovisual intervention, these results are preliminary and heavily influenced by notable limitations to the study. Given that the intervention was feasible in this pilot study, audiovisual and social media intervention strategies warrant additional investigation given their unique ability to facilitate communication among family members in the intensive care unit. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Testing audiovisual comprehension tasks with questions embedded in videos as subtitles: a pilot multimethod study

    Directory of Open Access Journals (Sweden)

    Juan Carlos Casañ Núñez

    2017-06-01

Listening, watching, reading and writing simultaneously in a foreign language is very complex. This paper is part of wider research which explores the use of audiovisual comprehension questions imprinted in the video image in the form of subtitles and synchronized with the relevant fragments, for the purposes of language learning and testing. Compared to viewings where the comprehension activity is available only on paper, this innovative methodology may provide some benefits. Among them, it could reduce the conflict in visual attention between watching the video and completing the task, by spatially and temporally approximating the questions and the relevant fragments. The technique is seen as especially beneficial for students with a low language proficiency level. The main objectives of this study were to investigate whether embedded questions had an impact on SFL students’ audiovisual comprehension test performance, and to find out what examinees thought about them. A multimethod design (Morse, 2003) involving the sequential collection of three quantitative datasets was employed. A total of 41 learners of Spanish as a foreign language (SFL) participated in the study (22 in the control group and 19 in the experimental one). Informants were selected by non-probabilistic sampling. The results showed that imprinted questions did not have any effect on test performance. Test-takers’ attitudes towards this methodology were positive. Globally, students in the experimental group agreed that the embedded questions helped them to complete the tasks. Furthermore, most of them were in favour of having the questions imprinted in the video in the audiovisual comprehension test of the final exam. These opinions are in line with those obtained in previous studies that looked into experts’, SFL students’ and SFL teachers’ views of this methodology (Casañ Núñez, 2015a, 2016a, in press-b). On the whole, these studies suggest that this technique has

  7. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  8. La comunicación corporativa audiovisual: propuesta metodológica de estudio

    OpenAIRE

    Lorán Herrero, María Dolores

    2016-01-01

This research revolves around two concepts, Audiovisual Communication and Corporate Communication, disciplines that affect organizations and that become articulated in such a way as to give rise to Audiovisual Corporate Communication, the concept proposed in this thesis. A classification and definition of the formats that organizations use for their communication is provided, with the aim of being able to analyze any corporate audiovisual document to verify whether the...

  9. Vicarious Traumatisation in Practitioners Who Work with Adult Survivors of Sexual Violence in Child Sexual Abuse: Literature Review and Directions for Future Research

    OpenAIRE

    Choularia, Zoe; Hutchison, Craig; Karatzias, Thanos

    2009-01-01

    Primary objective: The authors sought to summarise and evaluate evidence regarding vicarious traumatisation (VT) in practitioners working with adult survivors of sexual violence and/or child sexual abuse (CSA). Methods and selection criteria: Relevant publications were identified from systematic literature searches of PubMed and PsycINFO. Studies were selected for inclusion if they examined vicarious traumatisation resulting from sexual violence and/or CSA work and were published in English b...

  10. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  11. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL... audiovisual, cartographic, and related records? The disposition instructions should also provide that...

  12. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  13. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    Science.gov (United States)

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercise performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise lasted 10 min (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rating of perceived exertion and heart rate variability were monitored to further the understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree than PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation were real. This mirroring mechanism has the potential to up-/down-regulate cardiac work as if the exercise intensities were in fact different in each condition.

  14. Preventing the Development of Observationally Learnt Fears in Children by Devaluing the Model's Negative Response.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2015-10-01

    Vicarious learning has become an established indirect pathway to fear acquisition. It is generally accepted that associative learning processes underlie vicarious learning; however, whether this association is a form of conditioned stimulus-unconditioned stimulus (CS-US) learning or stimulus-response (CS-CR) learning remains unclear. Traditionally, these types of learning can be dissociated in a US revaluation procedure. The current study explored the effects of post-vicarious learning US revaluation on acquired fear responses. Ninety-four children (46 males and 48 females) aged 6 to 10 years first viewed either a fear vicarious learning video or a neutral vicarious learning video followed by random allocation to one of three US revaluation conditions: inflation; deflation; or control. Inflation group children were presented with still images of the adults in the video and told that the accompanying sound and image of a very fast heart rate monitor belonged to the adult. The deflation group were shown the same images but with the sound and image of a normal heart rate. The control group received no US revaluation. Results indicated that inflating how scared the models appeared to be did not result in significant increases in children's fear beliefs, avoidance preferences, avoidance behavior or heart rate for animals above increases caused by vicarious learning. In contrast, US devaluation resulted in significant decreases in fear beliefs and avoidance preferences. Thus, the findings provide evidence that CS-US associations underpin vicarious learning and suggest that US devaluation may be a successful method for preventing children from developing fear beliefs following a traumatic vicarious learning episode with a stimulus.

  15. KAMAN PELAYANAN MEDIA AUDIOVISUAL: STUDI KASUS DI THE BRITISH COUNCIL JAKARTA

    Directory of Open Access Journals (Sweden)

    Hindar Purnomo

    2015-12-01

Full Text Available The aim of this study was to determine how audiovisual (AV) media services are delivered, how effective the services are, and how satisfied users are with the various aspects of the services. The study was conducted at The British Council Jakarta as an evaluation, since this approach reveals the various phenomena at play. The British Council library provides three types of media: video cassettes, audio cassettes, and BBC television broadcasts. The subjects were users of the audiovisual media services registered as members, grouped by age and by purpose of AV media use. Questionnaire data were collected from 157 respondents (75.48%) and analysed statistically with the one-way Kruskal-Wallis analysis of variance. The results show that all three media attract many users, especially in the younger age groups. Most users preferred fiction to nonfiction, and they used the audiovisual media to seek information and knowledge. The audiovisual media services proved highly effective, as indicated by collection usage figures and user satisfaction levels. Hypothesis testing showed no significant differences between age groups or purposes of use in how users rated the various aspects of the audiovisual media services. Keywords: Audiovisual Media-Library Services
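The one-way Kruskal-Wallis analysis of variance used in this record can be sketched in a few lines. This is an illustrative implementation with invented data, not the study's dataset; the helper name `kruskal_wallis_h` is ours.

```python
# Toy sketch of the one-way Kruskal-Wallis H statistic (rank-based ANOVA)
# mentioned in the record above. Data are illustrative, not the study's.

def kruskal_wallis_h(groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), with average ranks for ties."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # 1-based average rank over a tie run
        for k in range(i, j + 1):
            rank_sums[pooled[k][1]] += avg_rank
        i = j + 1
    return 12.0 / (n * (n + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)) - 3 * (n + 1)

# Three hypothetical age groups rating one service aspect (higher = better)
h = kruskal_wallis_h([[7, 8, 6], [5, 6, 4], [3, 2, 4]])
```

Large H values indicate that at least one group's ratings tend to rank higher than the others'; the full test would compare H against a chi-squared distribution and apply a tie correction.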

  16. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the participant's task was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  17. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Science.gov (United States)

    2010-07-01

    ... standards for transfer apply to audiovisual records, cartographic, and related records? 1235.42 Section 1235... Standards § 1235.42 What specifications and standards for transfer apply to audiovisual records... elements that are needed for future preservation, duplication, and reference for audiovisual records...

  18. Effects over time of self-reported direct and vicarious racial discrimination on depressive symptoms and loneliness among Australian school students.

    Science.gov (United States)

    Priest, Naomi; Perry, Ryan; Ferdinand, Angeline; Kelaher, Margaret; Paradies, Yin

    2017-02-03

    Racism and racial discrimination are increasingly acknowledged as a critical determinant of health and health inequalities. However, patterns and impacts of racial discrimination among children and adolescents remain under-investigated, including how different experiences of racial discrimination co-occur and influence health and development over time. This study examines associations between self-reported direct and vicarious racial discrimination experiences and loneliness and depressive symptoms over time among Australian school students. Across seven schools, 142 students (54.2% female), age at T1 from 8 to 15 years old (M = 11.14, SD = 2.2), and from diverse racial/ethnic and migration backgrounds (37.3% born in English-speaking countries as were one or both parents) self-reported racial discrimination experiences (direct and vicarious) and mental health (depressive symptoms and loneliness) at baseline and 9 months later at follow up. A full cross-lagged panel design was modelled using MPLUS v.7 with all variables included at both time points. A cross-lagged effect of perceived direct racial discrimination on later depressive symptoms and on later loneliness was found. As expected, the effect of direct discrimination on both health outcomes was unidirectional as mental health did not reciprocally influence reported racism. There was no evidence that vicarious racial discrimination influenced either depressive symptoms or loneliness beyond the effect of direct racial discrimination. Findings suggest direct racial discrimination has a persistent effect on depressive symptoms and loneliness among school students over time. Future work to explore associations between direct and vicarious discrimination is required.

  19. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  20. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  1. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  2. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

Our media-rich, open-access Scalar e-book on the Audiovisual Essay practice, co-written with Thomas van den Berg, is available online: http://scalar.usc.edu/works/film-studies-in-motion Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere…

  3. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  4. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced as well as a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  5. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure...... of audiovisual integration) for fusions was comparable to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. This article is protected...

  6. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    Science.gov (United States)

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  7. Lifelong learning: Established concepts and evolving values.

    Science.gov (United States)

    Talati, Jamsheer Jehangir

    2014-03-01

    To summarise the concepts critical for understanding the content and value of lifelong learning (LL). Ideas generated by personal experience were combined with those of philosophers, social scientists, educational institutions, governments and UNESCO, to facilitate an understanding of the importance of the basic concepts of LL. Autopoietic, continuous, self-determined, informal, vicarious, biographical, lifelong reflexive learning, from and for society, when supported by self-chosen formal courses, can build capacities and portable skills that allow useful responses to challenges and society's new structures of governance. The need for LL is driven by challenges. LL flows continuously in pursuit of one agenda, which could either be citizenship, as is conventional, or as this article proposes, health. LL cannot be wholly centred on vocation. Continuous medical education and continuous professional development, important in their own right, cannot supply all that is needed. LL aids society with its learning, and it requires an awareness of the environment and structures of society. It is heavily vicarious, draws on formal learning and relies for effectiveness on reflection, self-assessment and personal shaping of views of the world from different perspectives. Health is critical to rational thought and peace, and determines society's capacity to govern itself, and improve its health. LL should be reshaped to focus on health not citizenship. Therefore, embedding learning in society and environment is critical. Each urologist must develop an understanding of the numerous concepts in LL, of which 'biographicisation' is the seed that will promote innovative strategies.

  8. Social identity shapes social valuation: evidence from prosocial behavior and vicarious reward.

    Science.gov (United States)

    Hackel, Leor M; Zaki, Jamil; Van Bavel, Jay J

    2017-08-01

People frequently engage in more prosocial behavior toward members of their own groups, as compared to other groups. Such group-based prosociality may reflect either strategic considerations concerning one's own future outcomes or intrinsic value placed on the outcomes of in-group members. In a functional magnetic resonance imaging experiment, we examined vicarious reward responses to witnessing the monetary gains of in-group and out-group members, as well as prosocial behavior towards both types of individuals. We found that individuals' investment in their group (a motivational component of social identification) tracked the intensity of their responses in ventral striatum to in-group (vs out-group) members' rewards, as well as their tendency towards group-based prosociality. Individuals with strong motivational investment in their group preferred rewards for an in-group member, whereas individuals with low investment preferred rewards for an out-group member. These findings suggest that the motivational importance of social identity, beyond mere similarity to group members, influences vicarious reward and prosocial behavior. More broadly, these findings support a theoretical framework in which salient social identities can influence neural representations of subjective value, and suggest that social preferences can best be understood by examining the identity contexts in which they unfold. © The Author (2017). Published by Oxford University Press.

9. Out of Africa: Miocene Dispersal, Vicariance, and Extinction within Hyacinthaceae Subfamily Urgineoideae

    Institute of Scientific and Technical Information of China (English)

Syed Shujait Ali; Martin Pfosser; Wolfgang Wetschnig; Mario Martínez-Azorín; Manuel B. Crespo; Yan Yu

    2013-01-01

Disjunct distribution patterns in plant lineages are usually explained according to three hypotheses: vicariance, geodispersal, and long-distance dispersal. The role of these hypotheses is tested in Urgineoideae (Hyacinthaceae), a subfamily disjunctly distributed in Africa, Madagascar, India, and the Mediterranean region. The potential ancestral range, dispersal routes, and factors responsible for the current distribution of Urgineoideae are investigated using divergence time estimations. Urgineoideae originated in Southern Africa approximately 48.9 Mya. Two independent dispersal events into the Western Mediterranean region possibly occurred during the Early Oligocene and Miocene (29.9-8.5 Mya) via Eastern and Northwestern Africa. A dispersal from Northwestern Africa to India could have occurred between 16.3 and 7.6 Mya. Vicariance and extinction events occurred approximately 21.6 Mya. Colonization of Madagascar occurred between 30.6 and 16.6 Mya, after a single transoceanic dispersal event from Southern Africa. The current disjunct distributions of Urgineoideae are not satisfactorily explained by Gondwana fragmentation or dispersal via boreotropical forests, owing to the younger divergence time estimates. The flattened winged seeds of Urgineoideae could have played an important role in long-distance dispersal by strong winds and storms, whereas geodispersal could also have occurred from Southern Africa to Asia and the Mediterranean region via the so-called arid and high-altitude corridors.

  10. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
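As a rough illustration of the just-noticeable-difference (JND) measure reported above, a JND can be read off TOJ response proportions by interpolating the SOAs at which responses cross 25% and 75%. The data and helper names below are invented for the sketch, not taken from the study, which fitted psychometric functions rather than interpolating.

```python
# Hedged sketch of a JND estimate for a visual/audiovisual TOJ task:
# linearly interpolate the SOAs at the 25% and 75% response points and
# take half their spread. Toy data; not the study's observations.

def interp_soa(soas, props, target):
    """SOA (ms) at which the response proportion crosses `target` (linear interpolation)."""
    for (x0, y0), (x1, y1) in zip(zip(soas, props), zip(soas[1:], props[1:])):
        if y0 <= target <= y1:
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("target proportion not bracketed by the data")

def jnd(soas, props):
    """Half the SOA spread between the 25% and 75% points."""
    return (interp_soa(soas, props, 0.75) - interp_soa(soas, props, 0.25)) / 2

soas = [-200, -100, 0, 100, 200]          # ms; negative = first stimulus leads
props = [0.05, 0.20, 0.50, 0.80, 0.95]    # toy proportions of one TOJ response
jnd_ms = jnd(soas, props)
```

A smaller `jnd_ms` corresponds to sharper temporal discrimination, which is why the reduced JNDs in the lagging-click conditions indicate perceptual enhancement.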

  11. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alteration pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  12. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

The purpose of this study was to investigate the limits of the lip-reading advantage for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Notably, the 120 ms delay corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  13. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  14. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
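One of the amplitude-based metrics contrasted above can be sketched as a Pearson correlation between the windowed band-power time series of two sensors. This is an illustrative toy, not the authors' MEG pipeline; the window length, toy signals, and helper names are assumptions.

```python
# Illustrative sketch of an amplitude/power-correlation connectivity metric:
# correlate the windowed power of two signals whose amplitudes share a slow
# common modulation (a stand-in for co-varying band power in two regions).

import math

def windowed_power(signal, win):
    """Mean squared amplitude in consecutive non-overlapping windows."""
    return [sum(x * x for x in signal[i:i + win]) / win
            for i in range(0, len(signal) - win + 1, win)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Two toy "sensor" signals: different carriers and phases, shared envelope
n_samples = 1000
mod = [1.0 + 0.5 * math.sin(2 * math.pi * t / 200) for t in range(n_samples)]
s1 = [mod[t] * math.sin(2 * math.pi * 20 * t / 250) for t in range(n_samples)]
s2 = [mod[t] * math.cos(2 * math.pi * 20 * t / 250 + 1.0) for t in range(n_samples)]
r = pearson(windowed_power(s1, 50), windowed_power(s2, 50))
```

Because the metric ignores phase and tracks only co-varying power, it can register the kind of amplitude coupling that the beta-band power correlations above detected while phase-based measures did not.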

  15. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
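The "decoding accuracy" index above comes from cross-validated classification of brain patterns. A minimal sketch of that idea, assuming a nearest-centroid classifier, leave-one-out cross-validation, and synthetic two-class "voxel patterns" (none of which are claims about the study's actual multivariate pattern analysis method):

```python
# Toy sketch of cross-validated decoding accuracy: leave one pattern out,
# fit class centroids on the rest, predict the held-out pattern by nearest
# centroid, and report the fraction predicted correctly.

def nearest_centroid_loo_accuracy(patterns, labels):
    correct = 0
    for i in range(len(patterns)):
        by_class = {}
        for j, (p, y) in enumerate(zip(patterns, labels)):
            if j == i:
                continue                    # hold out the test pattern
            by_class.setdefault(y, []).append(p)
        centroids = {y: [sum(col) / len(ps) for col in zip(*ps)]
                     for y, ps in by_class.items()}
        pred = min(centroids, key=lambda y: sum(
            (a - b) ** 2 for a, b in zip(patterns[i], centroids[y])))
        correct += pred == labels[i]
    return correct / len(patterns)

# Synthetic, well-separated two-class "voxel patterns" (hypothetical values)
patterns = [[0.0, 0.1], [0.1, 0.0], [0.1, 0.1], [1.0, 0.9], [0.9, 1.0], [1.0, 1.1]]
labels = ["male", "male", "male", "female", "female", "female"]
acc = nearest_centroid_loo_accuracy(patterns, labels)
```

Accuracy above chance (0.5 for two balanced classes) is what licenses the claim that a feature such as gender or emotion is represented in the measured activity.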

  16. Ensenyar amb casos audiovisuals en l'entorn virtual: metodologia i resultats

    OpenAIRE

    Triadó i Ivern, Xavier Ma.; Aparicio Chueca, Ma. del Pilar (María del Pilar); Jaría Chacón, Natalia; Gallardo-Gallardo, Eva; Elasri Ejjaberi, Amal

    2010-01-01

    This booklet aims to set out and disseminate the foundations of a methodology for launching learning experiences with audiovisual cases in the virtual campus environment. To this end, a methodological protocol has been defined for using audiovisual cases within the virtual campus in different courses.

  17. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    Science.gov (United States)

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
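
    The MLE mechanism invoked above has a compact closed form: each cue is weighted by its inverse variance, and the combined estimate is more reliable than either cue alone. A minimal numerical sketch, with invented variances rather than values from the study:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Maximum-likelihood (inverse-variance weighted) combination of an
    auditory and a visual estimate of the same quantity."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    combined_est = w_a * est_a + (1.0 - w_a) * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined_est, combined_var

# Hypothetical step-timing estimates (ms): the auditory cue is more reliable.
est, var = mle_combine(est_a=500.0, var_a=100.0, est_v=520.0, var_v=400.0)
# est ≈ 504.0 (pulled toward the reliable cue); var = 80.0 < min(100, 400)
```

    The testable signature of MLE integration, as in Experiment 2, is exactly this variance reduction: audiovisual synchronization variability should fall below the best unimodal condition.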

  18. Differentiating the Sources of Taiwanese High School Students' Multidimensional Science Learning Self-Efficacy: An Examination of Gender Differences

    Science.gov (United States)

    Lin, Tzung-Jin; Tsai, Chin-Chung

    2017-04-01

    The main purpose of this study was to investigate Taiwanese high school students' multi-dimensional self-efficacy and its sources in the domain of science. Two instruments, Sources of Science Learning Self-Efficacy (SSLSE) and Science Learning Self-Efficacy (SLSE), were used. By means of correlation and regression analyses, the relationships between students' science learning self-efficacy and the sources of their science learning self-efficacy were examined. The findings revealed that the four sources of the students' self-efficacy played significant roles in their science learning self-efficacy. By and large, Mastery Experience and Vicarious Experience were found to be the two salient influencing sources. Several gender differences were also revealed. For example, the female students regarded Social Persuasion as the most influential source in the "Science Communication" dimension, while the male students considered Vicarious Experience as the main efficacy source. Physiological and Affective States, in particular, was a crucial antecedent of the female students' various SLSE dimensions, including "Conceptual Understanding," "Higher-Order Cognitive Skills," and "Science Communication." In addition, the variations between male and female students' responses to both instruments were also unraveled. The results suggest that, first, the male students perceived themselves as having more mastery experience, vicarious experience and social persuasion than their female counterparts. Meanwhile, the female students experienced more negative emotional arousal than the male students. Additionally, the male students were more self-efficacious than the females in the five SLSE dimensions of "Conceptual Understanding," "Higher-Order Cognitive Skills," "Practical Work," "Everyday Application," and "Science Communication."

  20. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Science.gov (United States)

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  1. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

    Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
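
    The ITD-based localisation idea can be made concrete with the standard far-field geometry, where the interaural time difference relates to azimuth via ITD = (d/c)·sin(θ). The function below is a simplified sketch with assumed sensor spacing and sound speed, not the adaptive algorithm trained on the robot:

```python
import math

def itd_to_azimuth(itd_s, ear_distance_m=0.15, speed_of_sound=343.0):
    """Convert an interaural time difference (seconds) to a source
    azimuth (degrees) under the far-field model ITD = (d/c)*sin(azimuth).
    The spacing and sound speed here are assumed values, not the robot's."""
    s = itd_s * speed_of_sound / ear_distance_m
    s = max(-1.0, min(1.0, s))  # clamp: measurement noise can push |s| past 1
    return math.degrees(math.asin(s))

# A source 30 degrees off-centre produces an ITD of about 219 microseconds.
itd = (0.15 / 343.0) * math.sin(math.radians(30.0))
azimuth = itd_to_azimuth(itd)  # recovers 30.0 degrees
```

    The robot's contribution is learning this mapping adaptively from self-motion and visual feedback rather than assuming the geometry, but the underlying cue is the same.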

  2. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    Science.gov (United States)

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  3. Business plan for an audiovisual production company: La Central Audiovisual y Publicidad (Plan empresa productora de audiovisuales)

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    This document sets out the plan for creating La Central Publicidad y Audiovisual, a company dedicated to the pre-production, production and post-production of audiovisual material. The company will be located in the city of Cali, and its target market comprises the city's companies of all types, including small, medium-sized and large enterprises.

  4. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding....... Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled...... sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst...

  5. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  6. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  7. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

    The authors hypothesized that an audiovisual slide presentation providing treatment information about the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire, completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ² tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group comprised 20 men and 5 women; the written informed group comprised 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and about potential trismus, and had lower self-reported anxiety scores than the control group 1 week after surgery. The audiovisual slide presentation could thus improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar. Copyright © 2015

  8. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

    Full Text Available The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson’s disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that the abnormal audiovisual integration might be a potential early manifestation of PD.

  9. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  11. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide

  13. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
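
    The joint subspace learning step, maximizing the correlation between low-level audiovisual features and fMRI-derived features, is in the spirit of canonical correlation analysis. A compact CCA sketch on synthetic data (not the authors' code; the shared latent factor stands in for "memorability"):

```python
import numpy as np

def first_canonical_correlation(X, Y, reg=1e-6):
    """First canonical correlation between two feature sets (rows are
    samples): the largest correlation achievable between linear
    projections of X and of Y. A small ridge term keeps the
    covariance solves stable."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    return float(np.sqrt(np.max(np.linalg.eigvals(M).real)))

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))           # shared "memorability" factor
X = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])   # low-level audiovisual features
Y = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 3))])   # fMRI-derived features
r = first_canonical_correlation(X, Y)        # near 1: subspaces align on the factor
```

    Once such a subspace is learned, new videos can be scored from their audiovisual features alone, which is what lets the framework dispense with fMRI scans at prediction time.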

  14. Vicarious calibration of the solar reflection channels of radiometers onboard satellites through the field campaigns with measurements of refractive index and size distribution of aerosols

    Science.gov (United States)

    Arai, K.

    A comparative study of vicarious calibration for the solar reflection channels of radiometers onboard satellites is made, comparing field campaigns conducted with and without measurements of the refractive index and size distribution of aerosols. In particular, the influence of soot from car exhaust has to be taken into account for test sites near heavily trafficked roads. It is found that a 0.1% inclusion of soot induces around a 10% vicarious calibration error, so the refractive index should be measured properly at the test site. It is also found that the vicarious calibration coefficients from field campaigns at two different test sites, Ivanpah (near a road) and Railroad (distant from roads), show a discrepancy of approximately 10%. One possible cause of this difference is the influence of soot from car exhaust.

  15. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    Full Text Available This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability of collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt of a larger investigation whose units of analysis are cases of Online Creation Communities that take the Catalan territory as their central node of activity. From 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis, through face-to-face interviews conducted in 8 of the cases studied. The research, whose conclusions we summarize in this article, leads us to conclude that the sustainability of the projects depends largely on relationships of trust and interdependence between different voluntary agents, on non-monetary contributions and retributions, and on resources and infrastructure of free use. All together, this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern it.

  16. Global biogeography of scaly tree ferns (Cyatheaceae): evidence for Gondwanan vicariance and limited transoceanic dispersal.

    Science.gov (United States)

    Korall, Petra; Pryer, Kathleen M

    2014-02-01

    Scaly tree ferns, Cyatheaceae, are a well-supported group of mostly tree-forming ferns found throughout the tropics, the subtropics and the south-temperate zone. Fossil evidence shows that the lineage originated in the Late Jurassic period. We reconstructed large-scale historical biogeographical patterns of Cyatheaceae and tested the hypothesis that some of the observed distribution patterns are in fact compatible, in time and space, with a vicariance scenario related to the break-up of Gondwana. The study area comprises the tropics, subtropics and south-temperate areas of the world. The historical biogeography of Cyatheaceae was analysed in a maximum likelihood framework using Lagrange. The 78 ingroup taxa are representative of the geographical distribution of the entire family. The phylogenies that served as a basis for the analyses were obtained by Bayesian inference analyses of mainly previously published DNA sequence data using MrBayes. Lineage divergence dates were estimated in a Bayesian Markov chain Monte Carlo framework using BEAST. Cyatheaceae originated in the Late Jurassic in either South America or Australasia. Following a range expansion, the ancestral distribution of the marginate-scaled clade included both these areas, whereas Sphaeropteris is reconstructed as having its origin only in Australasia. Within the marginate-scaled clade, reconstructions of early divergences are hampered by the unresolved relationships among the Alsophila, Cyathea and Gymnosphaera lineages. Nevertheless, it is clear that the occurrence of the Cyathea and Sphaeropteris lineages in South America may be related to vicariance, whereas transoceanic dispersal needs to be inferred for the range shifts seen in Alsophila and Gymnosphaera. The evolutionary history of Cyatheaceae involves both Gondwanan vicariance scenarios and long-distance dispersal events. The number of transoceanic dispersals reconstructed for the family is rather small compared with other fern lineages. We suggest that a causal …

  17. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)
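
    The spreadsheet depreciation calculation mentioned above is typically a straight-line schedule: the cost minus salvage value is written off evenly over the equipment's expected longevity. A sketch with invented figures, not numbers from the survey:

```python
def straight_line_schedule(cost, salvage, life_years):
    """Year-end book values under straight-line depreciation: the cost
    minus salvage value is written off evenly over the useful life."""
    annual = (cost - salvage) / life_years
    return [round(cost - annual * year, 2) for year in range(life_years + 1)]

# Illustrative figures: a $1,200 projector, $200 salvage, 10-year longevity.
schedule = straight_line_schedule(1200, 200, 10)
# schedule[0] is 1200.0; the book value falls by $100/year down to 200.0
```

    Longevity estimates like those gathered via the Delphi technique feed directly into the life_years parameter of such a schedule.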

  18. Movement Sonification: Audiovisual benefits on motor learning

    Directory of Open Access Journals (Sweden)

    Weber Andreas

    2011-12-01

    Full Text Available Processes of motor control and learning in sports, as well as in motor rehabilitation, are based on perceptual functions and emergent motor representations. Here a new method of movement sonification is described, designed to tune the auditory system more comprehensively into motor perception and thereby enhance motor learning. Usually silent features of the cyclic movement pattern "indoor rowing" are sonified in real time to make them additionally available to the auditory system during movement execution. Via real-time sonification, movement perception can be enhanced in terms of temporal precision and multi-channel integration. But besides the contribution of a single perceptual channel to motor perception and motor representation, mechanisms of multisensory integration can also be addressed if movement sonification is configured adequately: multimodal motor representations, consisting of at least visual, auditory and proprioceptive components, can be shaped subtly, resulting in more precise motor control and enhanced motor learning.

  19. Asymmetries in Experiential and Vicarious Feedback: Lessons from the Hiring and Firing of Baseball Managers

    Directory of Open Access Journals (Sweden)

    David Strang

    2014-05-01

    Full Text Available We examine experiential and vicarious feedback in the hiring and firing of baseball managers. Realized outcomes play a large role in both decisions; the probability that a manager will be fired is a function of the team’s win–loss record, and a manager is quicker to be rehired if his teams had won more in the past. There are substantial asymmetries, however, in the fine structure of the two feedback functions. The rate at which managers are fired is powerfully shaped by recent outcomes, falls with success and rises with failure, and adjusts for history-based expectations. By contrast, hiring reflects a longer-term perspective that emphasizes outcomes over the manager’s career as well as the most recent campaign, rewards success but does not penalize failure, and exhibits no adjustment for historical expectations. We explain these asymmetries in terms of the disparate displays of rationality that organizations enact in response to their own outcomes versus those of others. Experiential feedback is conditioned by a logic of accountability, vicarious feedback by a logic of emulation.
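
    The experiential feedback function described above, in which firing risk rises with failure and adjusts for history-based expectations, can be caricatured as a logistic hazard. The coefficients below are invented for illustration, not estimated from the baseball data:

```python
import math

def firing_hazard(recent_win_pct, expected_win_pct,
                  b0=-3.0, b1=-6.0, b2=4.0):
    """Illustrative logistic hazard: firing risk falls with recent
    success (b1 < 0) and rises with the shortfall relative to
    history-based expectations (b2 > 0). Coefficients are invented."""
    shortfall = expected_win_pct - recent_win_pct
    z = b0 + b1 * (recent_win_pct - 0.5) + b2 * shortfall
    return 1.0 / (1.0 + math.exp(-z))

# A .400 team expected to play .550 ball is at greater risk than a
# .400 team that was expected to play .400 ball all along.
risk_high = firing_hazard(0.400, 0.550)
risk_low = firing_hazard(0.400, 0.400)
```

    The paper's asymmetry claim amounts to saying that the hiring-side function drops the expectation-adjustment term and the penalty for failure, while retaining the reward for success.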

  20. Vicarious Calibration of Beijing-1 Multispectral Imagers

    Directory of Open Access Journals (Sweden)

    Zhengchao Chen

    2014-02-01

    Full Text Available For on-orbit calibration of the Beijing-1 multispectral imagers (Beijing-1/MS, a field calibration campaign was performed at the Dunhuang calibration site during September and October of 2008. Based on the in situ data and images from Beijing-1 and Terra/Moderate Resolution Imaging Spectroradiometer (MODIS, three vicarious calibration methods (i.e., reflectance-based, irradiance-based, and cross-calibration were used to calculate the top-of-atmosphere (TOA radiance of Beijing-1. An analysis was then performed to determine or identify systematic and accidental errors, and the overall uncertainty was assessed for each individual method. The findings show that the reflectance-based method has an uncertainty of more than 10% if the aerosol optical depth (AOD exceeds 0.2. The cross-calibration method is able to reach an error level within 7% if the images are selected carefully. The final calibration coefficients were derived from the irradiance-based data for 6 September 2008, with an uncertainty estimated to be less than 5%.
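
    The calibration coefficients referred to above are, in the usual linear sensor model, the gain (and offset) relating image digital numbers to TOA radiance. A sketch with hypothetical numbers, not the campaign's actual values:

```python
def calibration_gain(toa_radiance, mean_dn, offset=0.0):
    """Vicarious calibration gain for a linear sensor model
    L = gain * DN + offset: the predicted top-of-atmosphere radiance
    over the site divided by the mean image digital number."""
    return (toa_radiance - offset) / mean_dn

def dn_to_radiance(dn, gain, offset=0.0):
    """Apply the derived coefficients to convert DN to radiance."""
    return gain * dn + offset

# Hypothetical values: radiative-transfer-predicted TOA radiance over
# the Dunhuang site vs. the mean DN of the corresponding image pixels.
gain = calibration_gain(toa_radiance=85.3, mean_dn=512.0)
radiance = dn_to_radiance(512.0, gain)  # recovers the predicted 85.3
```

    The three methods in the paper differ in how the TOA radiance on the left-hand side is obtained (surface reflectance plus radiative transfer, irradiance measurements, or a cross-calibrated reference sensor such as MODIS), not in this final division.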

  1. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing the perception … addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications…

  2. Alfasecuencialización: la enseñanza del cine en la era del audiovisual / Sequential literacy: the teaching of cinema in the age of audio-visual speech

    Directory of Open Access Journals (Sweden)

    José Antonio Palao Errando

    2007-10-01

    Full Text Available In the so-called «information society», film studies have been diluted into pragmatic and technological approaches to audiovisual discourse, just as the enjoyment of cinema itself has been caught in the net of the DVD and hypertext. Cinema reacts to this through complex narrative structures that distance it from standard audiovisual discourse. The function of film studies, and of their university teaching, should be to reintroduce the subject excluded from informational knowledge by means of the interpretation of the film text.

  3. A economia do audiovisual no contexto contemporâneo das Cidades Criativas

    Directory of Open Access Journals (Sweden)

    Paulo Celso da Silva

    2012-12-01

    Full Text Available This paper addresses the audiovisual economy in cities with "creative" status. More than an adjective, it is within the activities linked to communication (the audiovisual among them), culture, fashion, architecture, and local crafts that such cities have renewed their mode of accumulation, reorganizing public and private spaces. The cities of Barcelona, Berlin, New York, Milan and São Paulo are representative cases for analysing cities in relation to the development of the audiovisual sector, drawing on official data that support a more realistic understanding of each of them.

  4. Venezuela: Nueva Experiencia Audiovisual

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    Full Text Available In 1986, the Universidad Simón Bolívar (USB) created the Foundation for the Development of Audiovisual Art, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design and photography of high artistic and technical quality, without neglecting the theoretical and academic aspects of these disciplines.

  5. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  6. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time but, stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.


  8. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  9. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group performed a balance training exercise without any technological input, (2) a visual biofeedback group performed via visual input, and (3) an audio-visual biofeedback group performed via audio and visual input. Results retrieved from comparisons between data sets (2) and (3) suggested superior postural stability…

  10. Heart House: Where Doctors Learn

    Science.gov (United States)

    American School and University, 1978

    1978-01-01

    The new learning center and administrative headquarters of the American College of Cardiology in Bethesda, Maryland, contain a unique classroom equipped with the highly sophisticated audiovisual aids developed to teach the latest techniques in the diagnosis and treatment of heart disease. (Author/MLF)

  11. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a … section 337 as to certain audiovisual components and products containing the same with respect to claims 1…

  13. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  14. Audiovisual facilitation of clinical knowledge: a paradigm for dispersed student education based on Paivio's Dual Coding Theory.

    Science.gov (United States)

    Hartland, William; Biddle, Chuck; Fallacaro, Michael

    2008-06-01

    This article explores the application of Paivio's Dual Coding Theory (DCT) as a scientifically sound rationale for the effects of multimedia learning in programs of nurse anesthesia. We explore and highlight this theory as a practical infrastructure for programs that work with dispersed students (i.e., distance education models). Building on the work of Paivio and others, we are engaged in an ongoing outcome study using audiovisual teaching interventions (SBVTIs) applied to a range of healthcare providers in a quasi-experimental model. The early results of that study are reported in this article. In addition, we have observed powerful and sustained learning in a wide range of healthcare providers with our SBVTIs and suggest that this is likely explained by DCT.

  15. Search in audiovisual broadcast archives : doctoral abstract

    NARCIS (Netherlands)

    Huurnink, B.

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage shot by overseas services for the evening news, or a documentary maker might require

  16. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    Science.gov (United States)

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal cortex (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations or both faces and vocalization had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone.
Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  17. Audiovisual distraction for pain relief in paediatric inpatients: A crossover study.

    Science.gov (United States)

    Oliveira, N C A C; Santos, J L F; Linhares, M B M

    2017-01-01

    Pain is a stressful experience that can have a negative impact on child development. The aim of this crossover study was to examine the efficacy of audiovisual distraction for acute pain relief in paediatric inpatients. The sample comprised 40 inpatients (6-11 years) who underwent painful puncture procedures. The participants were randomized into two groups, and all children received the intervention and served as their own controls. Stress and pain-catastrophizing assessments were initially performed using the Child Stress Scale and Pain Catastrophizing Scale for Children, with the aim of controlling these variables. The pain assessment was performed using a Visual Analog Scale and the Faces Pain Scale-Revised after the painful procedures. Group 1 received audiovisual distraction before and during the puncture procedure, which was performed again without intervention on another day. The procedure was reversed in Group 2. Audiovisual distraction used animated short films. A 2 × 2 × 2 analysis of variance for 2 × 2 crossover study was performed, with a 5% level of statistical significance. The two groups had similar baseline measures of stress and pain catastrophizing. A significant difference was found between periods with and without distraction in both groups, in which scores on both pain scales were lower during distraction compared with no intervention. The sequence of exposure to the distraction intervention in both groups and first versus second painful procedure during which the distraction was performed also significantly influenced the efficacy of the distraction intervention. Audiovisual distraction effectively reduced the intensity of pain perception in paediatric inpatients. The crossover study design provides a better understanding of the power effects of distraction for acute pain management. Audiovisual distraction was a powerful and effective non-pharmacological intervention for pain relief in paediatric inpatients. The effects were

  18. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
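
The statistics reported above (correlations between mouth-opening area and the acoustic envelope, and 2-7 Hz temporal modulation) are straightforward to compute. The sketch below, on synthetic signals rather than real speech data, shows the kind of calculation involved: a Pearson correlation between the two time series and the dominant modulation frequency from an FFT peak. The sampling rate, frequencies, and lag are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-ins for the two time series analysed in the study:
# a mouth-area signal and an acoustic envelope that lags it slightly.
fs = 100.0                                                  # 100 Hz sampling (hypothetical)
t = np.arange(0, 10, 1 / fs)
mouth_area = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)          # 2 Hz opening cycle
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * (t - 0.05))   # envelope lags by 50 ms

# Pearson correlation between the two signals.
r = np.corrcoef(mouth_area, envelope)[0, 1]

# Dominant modulation frequency of the mouth signal from the FFT peak.
spectrum = np.abs(np.fft.rfft(mouth_area - mouth_area.mean()))
freqs = np.fft.rfftfreq(len(mouth_area), 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]                      # 2.0 Hz here
```

On real audiovisual speech the same two numbers would be computed per utterance and then summarized across a corpus, which is essentially what the abstract's "robust correlations" and "2-7 Hz modulation" claims refer to.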

  19. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective to improve users' social skills.

  20. Presentación: Narrativas de no ficción audiovisual, interactiva y transmedia

    Directory of Open Access Journals (Sweden)

    Arnau Gifreu Castells

    2015-03-01

    Full Text Available Number 8 of Obra Digital Revista de Comunicación explores audiovisual, interactive and transmedia non-fiction narrative forms. Throughout the history of communication, the field of non-fiction has always been regarded as lesser than its fictional counterpart. The same is true in research, where studies of audiovisual, interactive and transmedia fiction narratives have always been one step ahead of studies of non-fiction narratives. This monograph proposes a theoretical and practical approach to non-fiction narrative forms such as the documentary, reportage, the essay, educational formats and institutional films, in order to offer a picture of their current position in the media ecosystem. Keywords: Non-fiction, Audiovisual Narrative, Interactive Narrative, Transmedia Narrative.

  1. The audiovisual mounting narrative as a basis for the interactive documentary film: new studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and results from the pilot doctoral research "audiovisual mounting narrative as a basis for the interactive documentary film", which defends the thesis that interactive features exist in the audio and video editing of film itself, even acting as an agent of interactivity. The search for interactive audiovisual formats is present in international research, but mostly under a technological lens. The paper proposes possible formats for interactive audiovisual production in film, video, television, computer and mobile phone in postmodern society. Key words: Audiovisual, language, interactivity, interactive cinema, documentary, communication.

  2. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    Science.gov (United States)

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.
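
The headline result above is an 18% relative increase in %SS when scoring audiovisual rather than audio-only samples. As a small arithmetic sketch (with invented counts, not the study's data), %SS is simply stuttered syllables as a percentage of total syllables, and the reported effect is a relative, not absolute, change:

```python
# Percentage of syllables stuttered (%SS) and the relative change
# between scoring modes. Counts below are illustrative only.

def percent_syllables_stuttered(stuttered, total):
    """%SS = stuttered syllables / total syllables * 100."""
    return 100.0 * stuttered / total

audio_only = percent_syllables_stuttered(stuttered=50, total=1000)   # 5.0 %SS
audiovisual = percent_syllables_stuttered(stuttered=59, total=1000)  # 5.9 %SS

# An 18% *relative* increase: 5.0 %SS -> 5.9 %SS, total syllables unchanged.
relative_increase = (audiovisual - audio_only) / audio_only
```

This also mirrors the study's attribution of the effect: the numerator (stuttered-syllable counts) changes between modes while the denominator (total syllables) does not.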

  3. Use of audiovisual media for education and self-management of patients with Chronic Obstructive Pulmonary Disease – COPD

    Directory of Open Access Journals (Sweden)

    Janaína Schäfer

    Full Text Available Introduction: Chronic Obstructive Pulmonary Disease (COPD) is considered a disease of high morbidity and mortality, even though it is preventable and treatable. Objective: To assess the effectiveness of an audiovisual educational material on knowledge and self-management in COPD. Methods: Quasi-experimental design; the convenience sample comprised COPD patients in Pulmonary Rehabilitation (PR) (n = 42), in an advanced stage of the disease, adults of both genders, with low education. All subjects answered a specific questionnaire before and after the audiovisual education session to assess their acquired knowledge about COPD. Results: Positive results were obtained on the topics of COPD and its consequences, the first symptom identified when the disease worsens, and physical exercise practice. For the second and third symptoms, the education session did not improve learning, nor did it improve decision-making in the face of worsening COPD. Conclusion: COPD patients showed reasonable knowledge about the disease, its implications and its symptomatology. Important aspects should be emphasized, such as identifying exacerbations of COPD and deciding how to respond to them.

  4. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi Kafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing has some potential role.

  5. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the Internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

    Full Text Available The information society poses new challenges to researchers. As audiovisual communication has consolidated as a discipline, cultural studies offers an advantageous analytical perspective for approaching new creative practices and the consumption of audiovisual media. This article argues for the study of the audiovisual cultural products that the digital society produces, since they bear witness to the social changes taking place within it. Specifically, it proposes approaching social advertising and objects of collective creation on the Internet as a means of understanding the circumstances of our society.

  6. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  7. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  8. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  9. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Pollock, Sean; Tse, Regina; Martin, Darren

    2015-01-01

    This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed.

  10. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift. (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira

    2008-06-01

    Full Text Available This work traces the development of audiovisual webjournalism on the Brazilian Internet. Based on an analysis of UOL News on UOL TV (a pioneering format in commercial web television) and of UERJ Online TV (the first online university television channel in Brazil), the paper investigates the changes in the gathering, production and dissemination of audiovisual news once it is transmitted through the web. Reflections by authors such as Herreros (2003), Manovich (2001) and Gosciola (2003) are used to discuss the construction of audiovisual narrative on the web. To understand the current changes in webjournalism, we draw on the concepts developed by Fidler (1997), Bolter and Grusin (1998), Machado (2000), Mattos (2002) and Palacios (2003). We conclude that the organization of narrative elements in cyberspace makes journalistic messages more effective, while establishing the basis of a distinct language for audiovisual news on the Internet.

  12. Social Fear Learning: from Animal Models to Human Function.

    Science.gov (United States)

    Debiec, Jacek; Olsson, Andreas

    2017-07-01

    Learning about potential threats is critical for survival. Learned fear responses are acquired either through direct experiences or indirectly through social transmission. Social fear learning (SFL), also known as vicarious fear learning, is a paradigm successfully used for studying the transmission of threat information between individuals. Animal and human studies have begun to elucidate the behavioral, neural and molecular mechanisms of SFL. Recent research suggests that social learning mechanisms underlie a wide range of adaptive and maladaptive phenomena, from supporting flexible avoidance in dynamic environments to intergenerational transmission of trauma and anxiety disorders. This review discusses recent advances in SFL studies and their implications for basic, social and clinical sciences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

    Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...

  14. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to well-known approaches to increase coding efficiency. It has been shown that foveated coding, when compression quality varies across the image according to region of interest, is more efficient than the alternative coding, when all region are compressed in a similar way. However, widespread use of such foveated compression has been prevented due to two main conflicting causes, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have an associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on correlation of dynamics between audio and video signal components. Results of audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented into H.265/HEVC encoder producing a bitstream which is fully compliant to any H.265/HEVC decoder. The influence of audiovisual FoA in the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.
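    The core idea described above, detecting the focus of attention from the correlation between audio and video dynamics, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function name, the per-region motion representation, and the use of plain Pearson correlation are all assumptions made for illustration.

```python
import numpy as np

def audiovisual_foa(audio_energy, motion_maps):
    """Rank candidate spatial regions by how well their visual motion
    dynamics track the audio energy envelope (illustrative sketch only).

    audio_energy: shape (T,)  per-frame audio energy envelope
    motion_maps:  shape (T, R) per-frame motion magnitude for R regions
    Returns the index of the region whose motion best correlates with
    the audio, taken here as the audiovisual focus of attention.
    """
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-12)
    corrs = []
    for r in range(motion_maps.shape[1]):
        m = motion_maps[:, r]
        m = (m - m.mean()) / (m.std() + 1e-12)
        corrs.append(np.mean(a * m))  # Pearson correlation with the audio
    return int(np.argmax(corrs))
```

    In a foveated coder, the selected region would then be encoded at higher quality than the periphery.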

  15. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  16. A linguagem audiovisual da lousa digital interativa no contexto educacional/Audiovisual language of the digital interactive whiteboard in the educational environment

    Directory of Open Access Journals (Sweden)

    Rosária Helena Ruiz Nakashima

    2006-01-01

    Full Text Available This paper presents the digital interactive whiteboard as a tool for bringing audiovisual language into the school environment. The interactive whiteboard is connected to a computer, which is in turn connected to a multimedia projector; through Digital Vision Touch (DViT) technology, the surface of the board becomes touch-sensitive. Using only a finger, teachers and pupils can carry out functions that increase interactivity with the activities proposed on the board. Two pedagogical activities are presented, in the subject areas of Science and Portuguese, which can be used in early childhood education with five- and six-year-old pupils. This technology reflects the evolution of a kind of language that is no longer based solely on speech and writing but is also audiovisual and dynamic, allowing individuals to be producers of information as well as receivers. Schools should therefore take advantage of these technological resources, which facilitate work with audiovisual language in the classroom and enable the preparation of more meaningful and innovative lessons.

  17. Content and retention evaluation of an audiovisual patient-education program on bronchodilators.

    Science.gov (United States)

    Darr, M S; Self, T H; Ryan, M R; Vanderbush, R E; Boswell, R L

    1981-05-01

    A study was conducted to: (1) evaluate the effect of a slide-tape program on patients' short-term and long-term knowledge about their bronchodilator medications; and (2) determine if any differences exist in learning or retention patterns for different content areas of drug information. The knowledge of 30 patients was measured using a randomized sequence of three comparable 15-question tests. The first test was given before the slide-tape program was presented, the second test within 24 hours, and the last test one to six months (mean = 2.8 months) later. Scores attained on the first posttest were significantly higher (p less than 0.001) than pretest scores. Learning differences among drug-information-content areas were not evidenced on the first posttest. No significant difference was demonstrated between scores on pretest and last posttest (p = 0.100). However, retention patterns among content areas were found to differ significantly (p less than 0.05). Carefully designed audiovisual programs can impart drug information to patients. Medication counseling should be repeated at appropriate opportunities because patients lose drug knowledge over time.

  18. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.

  19. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, and thereby facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
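    The two measures used above can be sketched in a few lines. The study's exact definitions are not reproduced here; taking the reproducibility index as the mean pairwise Pearson correlation of within-category patterns, and the decoder as a nearest-centroid correlation classifier, are simplifying assumptions for illustration.

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation across trials of one category.
    patterns: shape (n_trials, n_voxels). Higher = more similar patterns."""
    z = patterns - patterns.mean(axis=1, keepdims=True)
    z /= patterns.std(axis=1, keepdims=True) + 1e-12
    corr = (z @ z.T) / patterns.shape[1]          # trial-by-trial correlations
    iu = np.triu_indices(len(patterns), k=1)       # upper triangle, no diagonal
    return float(corr[iu].mean())

def decode(patterns_a, patterns_b, test_pattern):
    """Assign test_pattern to the category whose mean pattern it
    correlates with more strongly (a minimal nearest-centroid decoder)."""
    def r(x, y):
        x = x - x.mean(); y = y - y.mean()
        return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
    return "a" if r(test_pattern, patterns_a.mean(0)) >= r(test_pattern, patterns_b.mean(0)) else "b"
```

    Congruent audiovisual stimulation would, on the study's account, raise both the index within a category and the accuracy of such a decoder between categories.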

  20. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  1. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  2. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  3. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have copyright...

  4. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Subtitles and language learning principles, strategies and practical experiences

    CERN Document Server

    Mariotti, Cristina; Caimi, Annamaria

    2014-01-01

    The articles collected in this publication combine diachronic and synchronic research with the description of updated teaching experiences showing the educational role of subtitled audiovisuals in various foreign language learning settings.

  6. Vicarious traumatization in the work with survivors of childhood trauma.

    Science.gov (United States)

    Crothers, D

    1995-04-01

    1. Persons working with victims of childhood trauma may experience traumatic countertransference and vicarious traumatization. After hearing a patient's trauma story, which is a necessary part of childhood trauma therapy, staff may experience post-traumatic stress disorder, imagery associated with the patient's story and the same disruptions in relationships as the patient. 2. During the first 6 months of working with survivors of childhood trauma, common behaviors of staff members were identified, including a lack of attention, poor work performance, medication errors, sick calls, treatment errors, irreverence, hypervigilance, and somatic complaints. 3. Staff working with victims of childhood trauma can obtain the necessary staff support through team support, in traumatic events, and in a leadership role.

  7. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    NARCIS (Netherlands)

    Talsma, D.; Doty, Tracy J.; Woldorff, Marty G.

    2007-01-01

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and

  8. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate...... vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did...... not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...

  9. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available Abstract We propose a novel approach to video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that characterize its content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches to audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents, and the results show the effectiveness of our proposals.
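    The representation described above can be pictured with a toy sketch: tabulate, for each ordered pair of event types, the distribution of temporal relations between their occurrences, then compare documents by the similarity of these tables. This is an illustrative reading of the abstract, not the paper's implementation; the three-relation set (the full Allen set has thirteen), the event types, and the cosine similarity are assumptions.

```python
import numpy as np

def relation(a, b):
    """Coarse temporal relation between two (start, end) intervals."""
    if a[1] <= b[0]: return "before"
    if b[1] <= a[0]: return "after"
    return "overlap"

def trm(events, types=("speech", "music", "silence")):
    """Toy Temporal Relation Matrix: for each ordered pair of event
    types, the distribution of temporal relations between occurrences.
    events: list of (type, (start, end)) tuples."""
    rels = ("before", "after", "overlap")
    M = np.zeros((len(types), len(types), len(rels)))
    for t1, iv1 in events:
        for t2, iv2 in events:
            if (t1, iv1) == (t2, iv2):
                continue
            M[types.index(t1), types.index(t2), rels.index(relation(iv1, iv2))] += 1
    s = M.sum(axis=-1, keepdims=True)  # normalise rows to distributions
    return np.divide(M, s, out=np.zeros_like(M), where=s > 0)

def trm_similarity(m1, m2):
    """Cosine similarity between flattened TRMs of two documents."""
    v1, v2 = m1.ravel(), m2.ravel()
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))
```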

  10. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs monetary conditioning) and the multisensory paradigm (2AFC visual detection vs redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (= conditioned stimulus, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.
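    The race-model violation mentioned above refers to Miller's race-model inequality: if the redundant-target speed-up arises from a mere race between modalities, then at every time t the cumulative RT distribution for audiovisual targets cannot exceed the sum of the unimodal ones. A minimal sketch of that test (not the paper's exact analysis; the function name and grid resolution are illustrative choices):

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, n_grid=200):
    """Maximum violation of Miller's race-model inequality
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t), evaluated on a time
    grid. A positive return value means the race model is violated,
    i.e. evidence for genuine multisensory integration."""
    grid = np.linspace(min(map(min, (rt_av, rt_a, rt_v))),
                       max(map(max, (rt_av, rt_a, rt_v))), n_grid)
    def cdf(rts):  # empirical CDF evaluated on the grid
        return np.searchsorted(np.sort(np.asarray(rts)), grid, side="right") / len(rts)
    bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
    return float(np.max(cdf(rt_av) - bound))
```

    In the study above, conditioning sped responses without increasing this quantity, which is what motivates the saliency (rather than integration) interpretation.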

  11. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  12. Observing tutorial dialogues collaboratively: insights about human tutoring effectiveness from vicarious learning.

    Science.gov (United States)

    Chi, Michelene T H; Roy, Marguerite; Hausmann, Robert G M

    2008-03-01

    The goals of this study are to evaluate a relatively novel learning environment, as well as to seek greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaborative observing environment to four other instructional methods (one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone), the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaboratively observing combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are independently or jointly constructing knowledge with the tutor, but not when the tutor independently conveys knowledge. 2008 Cognitive Science Society, Inc.

  13. Looking Back--A Lesson Learned: From Videotape to Digital Media

    Science.gov (United States)

    Lys, Franziska

    2010-01-01

    This paper chronicles the development of Drehort Neubrandenburg Online, an interactive, content-rich audiovisual language learning environment based on documentary film material shot on location in Neubrandenburg, Germany, in 1991 and 2002 and aimed at making language learning more interactive and more real. The paper starts with the description…

  14. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  15. Interpreters’ Experiences of Transferential Dynamics, Vicarious Traumatisation, and Their Need for Support and Supervision: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Emma Darroch

    2016-08-01

    Full Text Available Using thematic analysis, this systematic review aimed to explore sign language interpreters’ experiences of transferential dynamics and vicarious trauma. The notion of transferential dynamics, such as transference and countertransference, originates from psychodynamic therapy and refers to the mutual impact that client and therapist have on one another (Chessick, 1986). Psychodynamic models of therapy are predominantly concerned with unconscious processes and theorise that such processes have a powerful influence over an individual’s thoughts, feelings and behaviours (Howard, 2011). In contrast to countertransference, which is an immediate response to a particular client, vicarious trauma is thought to develop as a result of continuous exposure to, and engagement across, many therapeutic interactions (Pearlman & Saakvitne, 1995a). A search of the available literature uncovered a striking lack of research into the experiences of sign language interpreters: in all, only two of the 11 identified empirical studies addressed sign language interpreters, and the vast majority of the literature analysed reflected the experiences of spoken language interpreters. The results indicate that interpreters experience transferential dynamics as part of their work and suggest the presence of vicarious trauma among interpreters. Additionally, a unique contribution to the fields of interpreting and psychology is offered, as it is consistently demonstrated that ‘service providers’ and ‘mental health workers’ (umbrella terms that include psychologists) immensely underestimate the role of interpreters: they fail to consider the emotional impact of interpreters’ work and ignore the linguistic complexities of translation, failing to appreciate interpreters’ need for information in order to ensure an effective translation.

  16. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  17. Homebound Learning Opportunities: Reaching Out to Older Shut-ins and Their Caregivers.

    Science.gov (United States)

    Penning, Margaret; Wasyliw, Douglas

    1992-01-01

    Describes Homebound Learning Opportunities, innovative health promotion and educational outreach service for homebound older adults and their caregivers. Notes that program provides over 125 topics for individualized learning programs delivered to participants in homes, audiovisual lending library, educational television programing, and peer…

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  20. On-line repository of audiovisual material on feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  1. Roles and Characteristics of Television and Some Implications for Distance Learning.

    Science.gov (United States)

    Bates, Tony W.

    1982-01-01

    Explores some recent theory and research developments on the role and character of television, and its impact on learning in distance education. The implications for learning of distributional and social, control, and symbolic (audiovisual) characteristics of television are discussed. Fifteen references and an outline of television applications…

  2. Monitoring Implementation of Active Learning Classrooms at Lethbridge College, 2014-2015

    Science.gov (United States)

    Benoit, Andy

    2017-01-01

    Having experienced preliminary success in designing two active learning classrooms, Lethbridge College developed an additional eight active learning classrooms as part of a three-year initiative spanning 2014-2017. Year one of the initiative entailed purchasing new audio-visual equipment and classroom furniture followed by installation. This…

  3. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    Science.gov (United States)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

    Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can be a motivating factor that makes them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school physics laboratory classes, resulting in 22 videos that are treated as audiovisual reports and analysed along two components: theoretical and experimental. This kind of project allows students to spontaneously use features such as music, pictures, dramatization, and animations, even though the didactic laboratory is not generally a place where aesthetic and cultural dimensions are developed. This may be because digital media are more readily legitimated as cultural tools than as teaching strategies.

  4. Net neutrality and audiovisual services

    OpenAIRE

    van Eijk, N.; Nikoltchev, S.

    2011-01-01

    Net neutrality is high on the European agenda. New regulations for the communication sector provide a legal framework for net neutrality and need to be implemented on both a European and a national level. The key element is not just about blocking or slowing down traffic across communication networks: the control over the distribution of audiovisual services constitutes a vital part of the problem. In this contribution, the phenomenon of net neutrality is described first. Next, the European a...

  5. Audiovisual integration of speech in a patient with Broca's Aphasia

    Science.gov (United States)

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  6. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  7. Iniciativas e ações feministas no audiovisual brasileiro contemporâneo

    Directory of Open Access Journals (Sweden)

    Marina Cavalcanti Tedesco

    2017-10-01

    Full Text Available It is fair to say that over the past two years the word feminism has acquired new weight, winning significant space on social networks, in the media, and in the streets. The audiovisual sector is one of the areas that accompanied this recent rise of feminism, which materialized in a series of initiatives focused on claiming rights and discussing sexism in the labor market. In this article we intend, without any pretension of exhausting the topic, to present and reflect on eight initiatives that we consider emblematic of this contemporary intersection between feminism and cinema: Mulher no Cinema, Mulheres do Audiovisual Brasil, Mulheres Negras no Audiovisual Brasileiro, Cabíria Prêmio de Roteiro, Eparrêi Filmes, Academia das Musas, Cineclube Delas, and FINCAR – Festival Internacional de Cinema de Realizadoras.

  8. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    Science.gov (United States)

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue

  9. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.

    Science.gov (United States)

    Do, Phuong T; Moreland, John R

    2014-04-01

    The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.

  10. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  11. Sincronía entre formas sonoras y formas visuales en la narrativa audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. José Alfredo Sánchez Ríos

    1999-01-01

    Full Text Available Where must researchers position themselves to produce work that yields a deeper understanding of a phenomenon as close and as complex as audiovisual communication, which uses sound and image at once? What is the role of the audiovisual communication researcher in contributing new approaches to this object of study? From this perspective, we argue that the new task of the audiovisual communication researcher is to build a theory that is less interpretive and subjective, and to direct observation toward segmented knowledge that can be demonstrated, replicated, and self-questioned; that is, to study, develop, and construct a theory with new and greater methodological rigor.

  12. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
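The maximum-likelihood estimation (MLE) model referenced in this abstract has a standard closed form: each cue is weighted by its inverse variance, and the bimodal estimate is more precise than either unimodal estimate. A minimal sketch with hypothetical localization numbers (not the study's data):

```python
import numpy as np

def mle_integration(mu_v, sigma_v, mu_a, sigma_a):
    """Optimal (inverse-variance-weighted) audiovisual estimate under MLE."""
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)  # visual weight
    w_a = 1.0 - w_v                               # auditory weight
    mu_av = w_v * mu_v + w_a * mu_a
    # Bimodal variance is the harmonic combination of the unimodal variances
    sigma_av = np.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))
    return mu_av, sigma_av

# Hypothetical estimates (degrees azimuth): precise vision, coarse audition
mu_av, sigma_av = mle_integration(mu_v=0.0, sigma_v=1.0, mu_a=5.0, sigma_a=4.0)
# The combined estimate is pulled toward the more reliable (visual) cue,
# and bimodal variability is lower than either unimodal sigma.
```

Under this model, amblyopic observers' noisier unimodal estimates change the weights and the predicted bimodal precision, but integration itself can remain optimal, which is what the study reports.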

  13. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
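The frequency-tagging analysis described above (distinct stimulation rates eliciting steady-state responses that are quantified in the spectral domain) can be illustrated on a synthetic signal. The sampling rate, epoch length, and amplitudes below are hypothetical; only the 3.14 and 3.63 Hz tagging rates come from the abstract:

```python
import numpy as np

fs = 500.0                      # sampling rate (Hz), hypothetical
t = np.arange(0, 10, 1/fs)      # 10 s epoch
f1, f2 = 3.14, 3.63             # tagging frequencies from the study

# Synthetic "EEG": two steady-state responses plus white noise
rng = np.random.default_rng(0)
sig = (2.0*np.sin(2*np.pi*f1*t) + 1.0*np.sin(2*np.pi*f2*t)
       + rng.normal(0, 1, t.size))

# Amplitude spectrum; a 10 s window gives 0.1 Hz resolution, so the
# tagging frequencies do not fall exactly on a bin -> read the nearest bin
spec = np.abs(np.fft.rfft(sig)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1/fs)
amp_f1 = spec[np.argmin(np.abs(freqs - f1))]
amp_f2 = spec[np.argmin(np.abs(freqs - f2))]
# Both tagged amplitudes stand well above the broadband noise floor
```

In the study, amplitudes extracted this way were compared across attended versus unattended and synchronous versus asynchronous conditions.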

  14. Audiovisual en línea en la universidad española: bibliotecas y servicios especializados (una panorámica)

    Directory of Open Access Journals (Sweden)

    Alfonso López Yepes

    2014-08-01

    Full Text Available An overview of online audiovisual information in Spanish university libraries and audiovisual services, with examples of concrete applications and developments. The audiovisual presence is notable mainly in blogs, IPTV channels, and libraries' own portals, and in specific initiatives such as "La Universidad Responde", run by the audiovisual services of Spanish universities, which constitutes a leading frame of reference and of information dissemination for the library field as well; also in social networks, including a proposed model for a university library social network. Reference is made to the participation of libraries and services in collaborative research and social-development projects, already under way within the project "Red iberoamericana de patrimonio sonoro y audiovisual", which is committed to the social construction of audiovisual knowledge based on interaction among multidisciplinary groups of professionals and different communities of users and institutions.

  15. Enabling the development of student teacher professional identity ...

    African Journals Online (AJOL)

    This paper explores the views of student teachers who were provided vicarious learning opportunities during an educational excursion, and how that learning enabled them to develop their teacher professional identity. This qualitative research study, using a social-constructivist lens, highlights how vicarious learning ...

  16. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  17. The effect of disgust and fear modeling on children's disgust and fear for animals.

    Science.gov (United States)

    Askew, Chris; Cakır, Kübra; Põldsam, Liine; Reynolds, Gemma

    2014-08-01

    Disgust is a protective emotion associated with certain types of animal fears. Given that a primary function of disgust is to protect against harm, increasing children's disgust-related beliefs for animals may affect how threatening they think animals are and their avoidance of them. One way that children's disgust beliefs for animals might change is via vicarious learning: by observing others responding to the animal with disgust. In Experiment 1, children (ages 7-10 years) were presented with images of novel animals together with adult faces expressing disgust. Children's fear beliefs and avoidance preferences increased for these disgust-paired animals compared with unpaired control animals. Experiment 2 used the same procedure and compared disgust vicarious learning with vicarious learning with fear faces. Children's fear beliefs and avoidance preferences for animals again increased as a result of disgust vicarious learning, and animals seen with disgust or fear faces were also rated more disgusting than control animals. The relationship between increased fear beliefs and avoidance preferences for animals was mediated by disgust for the animals. The experiments demonstrate that children can learn to believe that animals are disgusting and threatening after observing an adult responding with disgust toward them. The findings also suggest a bidirectional relationship between fear and disgust with fear-related vicarious learning leading to increased disgust for animals and disgust-related vicarious learning leading to increased fear and avoidance. (c) 2014 APA, all rights reserved.

  19. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    Science.gov (United States)

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  20. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  1. AUTHOR’S DIGITAL VIDEO: CREATION AND USE FOR LEARNING

    Directory of Open Access Journals (Sweden)

    Igor V. Riatshentcev

    2014-01-01

    Full Text Available The article considers the functionality of software to construct the author’s video for its use in distance learning and its audiovisual implementation in the open educational space. 

  2. Users Requirements in Audiovisual Search: A Quantitative Approach

    NARCIS (Netherlands)

    Nadeem, Danish; Ordelman, Roeland J.F.; Aly, Robin; Verbruggen, Erwin; Aalberg, Trond; Papatheodorou, Christos; Dobreva, Milena; Tsakonas, Giannis; Farrugia, Charles J.

    2013-01-01

    This paper reports on the results of a quantitative analysis of user requirements for audiovisual search that allows requirements to be categorised and compared across user groups. The categorisation provides clear directions with respect to the prioritisation of system features

  3. When Library and Archival Science Methods Converge and Diverge: KAUST’s Multi-Disciplinary Approach to the Management of its Audiovisual Heritage

    KAUST Repository

    Kenosi, Lekoko

    2015-07-16

    Libraries and Archives have long recognized the important role played by audiovisual records in the development of an informed global citizen, and the King Abdullah University of Science and Technology (KAUST) is no exception. Lying on the banks of the Red Sea, KAUST has a state-of-the-art library housing professional library and archives teams committed to the processing of digital audiovisual records created within and outside the University. This commitment, however, sometimes obscures the fundamental divergences unique to the two disciplines on the acquisition, cataloguing, access and long-term preservation of audiovisual records. This dichotomy is not unique to KAUST; it recurs in many settings that employ Librarians and Archivists to manage their audiovisual collections. Using the KAUST audiovisual collections as a case study, the authors of this paper will take the reader through the journey of managing KAUST’s digital audiovisual collection. Several theoretical and methodological areas of convergence and divergence will be highlighted, along with suggestions on the way forward for the IFLA and ICA working committees on the management of audiovisual records.

  4. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
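The chapter's actual diarization and association algorithms are not reproduced here, but the core idea of determining "who spoke when" by clustering per-window features can be shown with a deliberately toy sketch. The synthetic 1-D features, two-speaker setup, and plain 2-means below are all illustrative assumptions, not the chapter's method:

```python
import numpy as np

# Toy "diarization": cluster per-window audio features into speakers.
# Synthetic 1-D features: speaker A ~ N(0,1) for 100 windows,
# then speaker B ~ N(5,1) for 100 windows.
rng = np.random.default_rng(1)
features = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])

# Minimal 2-means (Lloyd's algorithm), initialized at the feature extremes
centers = np.array([features.min(), features.max()])
for _ in range(20):
    labels = np.abs(features[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([features[labels == k].mean() for k in (0, 1)])

# Windows 0..99 should land almost entirely in the low cluster,
# and windows 100..199 in the high cluster.
```

Real systems cluster high-dimensional acoustic features (and, as in this chapter, fuse them with visual activity), but the assignment-then-update loop is the same shape.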

  5. Vicarious Versus Traditional Learning in Biology: A Case of Sexually ...

    African Journals Online (AJOL)

    The purpose of this study was to compare learning about sexually transmitted infections in Biology through the observational method with the traditional classroom lecture method ... The study found that the observational method was more effective and was preferred by students compared with the traditional lecture method ...

  6. Does audiovisual distraction reduce dental anxiety in children under local anesthesia? A systematic review and meta-analysis.

    Science.gov (United States)

    Zhang, Cai; Qin, Dan; Shen, Lu; Ji, Ping; Wang, Jinhua

    2018-03-02

    To perform a systematic review and meta-analysis on the effects of audiovisual distraction on reducing dental anxiety in children during dental treatment under local anesthesia. The authors identified eligible reports published through August 2017 by searching PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials. Clinical trials that reported the effects of audiovisual distraction on children's physiological measures, self-reports, and behavior rating scales during dental treatment met the minimum inclusion requirements. The authors extracted data and performed a meta-analysis of appropriate articles. Nine eligible trials were included and qualitatively analyzed; some of these trials were also quantitatively analyzed. Among the physiological measures, heart rate or pulse rate was significantly lower (p=0.01) in children subjected to audiovisual distraction during dental treatment under local anesthesia than in those who were not; a significant difference in oxygen saturation was not observed. The majority of the studies using self-reports and behavior rating scales suggested that audiovisual distraction was beneficial in reducing anxiety perception and improving children's cooperation during dental treatment. The audiovisual distraction approach effectively reduces dental anxiety among children. Therefore, we suggest the use of audiovisual distraction when children need dental treatment under local anesthesia. This article is protected by copyright. All rights reserved.
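    The pooled estimate behind a meta-analysis of this kind is typically an inverse-variance weighted average of per-study effect sizes. A minimal fixed-effect sketch, with hypothetical heart-rate mean differences (the numbers are illustrative, not taken from the review):

```python
import math

def fixed_effect_pool(effects, std_errs):
    """Inverse-variance (fixed-effect) pooling of per-study effect sizes."""
    weights = [1.0 / se ** 2 for se in std_errs]           # w_i = 1 / SE_i^2
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))              # SE of pooled effect
    return pooled, pooled_se

# Hypothetical mean differences in heart rate (bpm, distraction minus control)
# and their standard errors for three studies.
effects = [-4.2, -6.1, -3.0]
std_errs = [1.5, 2.0, 1.2]
pooled, se = fixed_effect_pool(effects, std_errs)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)              # 95% confidence interval
print(round(pooled, 2), [round(x, 2) for x in ci])
```

    A random-effects model would additionally estimate between-study heterogeneity, but the weighted-average structure is the same.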

  7. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    Science.gov (United States)

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education, and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  8. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Science.gov (United States)

    2010-07-01

    ... obtain copies of USIA audiovisual records transferred to the National Archives of the United States? 1256... United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.98 Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

  9. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    This paper attempts to demonstrate the significance of the seven standards of textuality, with special application to audiovisual English-Arabic translation. Ample, thoroughly analysed examples are provided to aid decision-making in audiovisual English-Arabic translation. A text is meaningful if and only if it carries meaning and knowledge to its audience and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT): the latter should also carry knowledge that can be easily accessed by the TL audience and processed with the least energy and time, i.e., achieving the utmost level of efficiency. Communication occurs only when a text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text is achieved when all aspects of cohesive devices are well accounted for pragmatically. This, combined with a good amount of psycholinguistic insight, provides a text with optimal communicative value. Non-text is devoid of such components and is ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound) and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product.

  10. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  11. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  12. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  13. Summarizing Audiovisual Contents of a Video Program

    Science.gov (United States)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries, and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of a given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting the spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite-graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both audio and visual contents of the original video without sacrificing either of them.
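    The bipartite-alignment idea can be illustrated as a maximum-weight matching between audio sentences and visual segments. A toy sketch using brute-force search over assignments (the paper's actual graph algorithm and scoring function are not reproduced here; the scores are hypothetical):

```python
from itertools import permutations

def best_alignment(score):
    """Brute-force maximum-weight bipartite matching: pair each audio
    sentence (row) with a distinct visual segment (column).
    Assumes len(score) <= len(score[0])."""
    n_audio, n_visual = len(score), len(score[0])
    best, best_total = None, float("-inf")
    for perm in permutations(range(n_visual), n_audio):
        total = sum(score[i][j] for i, j in enumerate(perm))
        if total > best_total:
            best, best_total = list(enumerate(perm)), total
    return best, best_total

# Hypothetical alignment scores, e.g. strength of face/voice co-occurrence
# between spoken sentence i and visual segment j.
score = [
    [0.9, 0.1, 0.2],   # sentence 0 matches segment 0 best
    [0.2, 0.8, 0.3],   # sentence 1 matches segment 1 best
]
pairs, total = best_alignment(score)
print(pairs, total)
```

    For realistic problem sizes a polynomial-time matching algorithm (e.g. the Hungarian method) replaces the factorial-time brute force, but the objective is the same.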

  14. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements on the Public Interest AGENCY: U.S... infringing audiovisual components and products containing the same, imported by Funai Corporation, Inc. of...

  15. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Science.gov (United States)

    2010-07-01

    ... transfer of USIA audiovisual records to the National Archives of the United States? 1256.96 Section 1256.96... Information Agency Audiovisual Materials in the National Archives of the United States § 1256.96 What provisions apply to the transfer of USIA audiovisual records to the National Archives of the United States...

  16. Vicarious Radiometric Calibration of a Multispectral Camera on Board an Unmanned Aerial System

    Directory of Open Access Journals (Sweden)

    Susana Del Pozo

    2014-02-01

    Combinations of unmanned aerial platforms and multispectral sensors are considered low-cost tools for detailed spatial and temporal studies addressing spectral signatures, opening a broad range of applications in remote sensing. A key step in this process is knowledge of the multispectral sensor's calibration parameters, in order to identify the physical variables collected by the sensor. This paper discusses the radiometric calibration process by means of a vicarious method applied to a high-spatial-resolution unmanned flight, using low-cost artificial and natural covers as control and check surfaces, respectively.
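    Vicarious calibration of this kind is often carried out with an "empirical line" fit: a linear mapping from the sensor's raw digital numbers (DN) to surface reflectance, estimated from control targets of known reflectance and verified on check surfaces. A minimal least-squares sketch (the DN and reflectance values are hypothetical):

```python
def empirical_line(dn, reflectance):
    """Least-squares fit of reflectance = gain * DN + offset
    from control targets of known reflectance."""
    n = len(dn)
    mx = sum(dn) / n
    my = sum(reflectance) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(dn, reflectance)) / \
           sum((x - mx) ** 2 for x in dn)
    offset = my - gain * mx
    return gain, offset

# Hypothetical control targets: dark, grey, and bright artificial covers
# with lab-measured reflectances.
dn = [50, 120, 200]
refl = [0.05, 0.25, 0.48]
gain, offset = empirical_line(dn, refl)

# Apply the fitted line to a check-surface reading (DN = 150).
predicted = gain * 150 + offset
print(round(gain, 5), round(offset, 4), round(predicted, 3))
```

    In practice the fit is done per spectral band, and the check surfaces provide an independent estimate of the residual calibration error.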

  17. Exposure to audiovisual programs as sources of authentic language ...

    African Journals Online (AJOL)

    Exposure to audiovisual programs as sources of authentic language input and second ... Southern African Linguistics and Applied Language Studies ... The findings of the present research contribute more insights on the type and amount of ...

  18. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  19. Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability

    NARCIS (Netherlands)

    Francisco, A.A.; Groen, M.A.; Jesse, A.; McQueen, J.M.

    2017-01-01

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a

  20. Ethnography and Communication: The Audiovisual Ethnographic Archive Project of the Universidad de Chile

    Directory of Open Access Journals (Sweden)

    Mauricio Pineda Pertier

    2012-06-01

    This article considers audiovisual ethnography as a communication process, taking as its case of analysis the Audiovisual Ethnographic Archive of the Universidad de Chile and its experience in developing audiovisual ethnographies over the past eight years. Beyond its use as a data-recording technique, the construction and dissemination of messages with social content based on those data records constitute a complex praxis of communication production that leads us to critically review the traditional conceptualization of communication. This work discusses these models, setting forth alternatives from an applied ethno-political perspective in local development contexts.