WorldWideScience

Sample records for vicarious audiovisual learning

  1. Vicarious audiovisual learning in perfusion education.

    Science.gov (United States)

    Rath, Thomas E; Holt, David W

    2010-12-01

    Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low fidelity form of simulation instruction: vicarious audiovisual learning. Two low fidelity modes of instruction were compared: description with text, and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean scores on test #1 were significantly higher for video learners (n = 18; 88.89%) than for text learners (n = 19; 74.74%). Vicarious audiovisual learning modules may be an efficacious, low cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we teach perfusion in the future, as simulation technology becomes more prevalent.

  2. Vicarious Acquisition of Learned Helplessness

    Science.gov (United States)

    DeVellis, Robert F.; and others

    1978-01-01

    Reports a study conducted to determine whether individuals who observed others experiencing noncontingency would develop learned helplessness vicariously. Subjects were 75 female college undergraduates. (MP)

  3. Computer Support for Vicarious Learning.

    Science.gov (United States)

    Monthienvichienchai, Rachada; Sasse, M. Angela

    This paper investigates how computer support for vicarious learning can be implemented by taking a principled approach to selecting and combining different media to capture educational dialogues. The main goal is to create vicarious learning materials of appropriate pedagogic content and production quality, and at the same time minimize the…

  4. Still to Learn from Vicarious Learning

    Science.gov (United States)

    Mayes, J. T.

    2015-01-01

    The term "vicarious learning" was introduced in the 1960s by Bandura, who demonstrated how learning can occur through observing the behaviour of others. Such social learning is effective without the need for the observer to experience feedback directly. More than twenty years later a series of studies on vicarious learning was undertaken…

  5. Vicarious learning: a review of the literature.

    Science.gov (United States)

    Roberts, Debbie

    2010-01-01

    Experiential learning theory stresses the primacy of personal experience and the literature suggests that direct clinical experience is required in order for learning to take place. However, raw or first-hand experience may not be the only mechanism by which students engage in experiential learning. There is a growing body of literature within higher education which suggests that students are able to use another's experience to learn: vicarious learning. This literature review aims to outline vicarious learning within a nursing context. Many of the studies regarding vicarious learning are situated within higher education in general; however, within the United States these relate more specifically to nursing students. The literature indicates the increasing global interest in this area. This paper reveals that whilst the literature offers a number of examples illustrating how vicarious learning takes place, opinion on the role of the lecturer is divided and requires further exploration and clarification. The implications for nurse education are discussed.

  6. Vicarious learning from human models in monkeys.

    Science.gov (United States)

    Falcone, Rossella; Brunamonti, Emiliano; Genovesio, Aldo

    2012-01-01

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

  7. Vicarious learning from human models in monkeys.

    Directory of Open Access Journals (Sweden)

    Rossella Falcone

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

  8. A comparison of positive vicarious learning and verbal information for reducing vicariously learned fear.

    Science.gov (United States)

    Reynolds, Gemma; Wasely, David; Dunne, Güler; Askew, Chris

    2017-10-19

    Research with children has demonstrated that both positive vicarious learning (modelling) and positive verbal information can reduce children's acquired fear responses for a particular stimulus. However, this fear reduction appears to be more effective when the intervention pathway matches the initial fear learning pathway. That is, positive verbal information is a more effective intervention than positive modelling when fear is originally acquired via negative verbal information. Research has yet to explore whether fear reduction pathways are also important for fears acquired via vicarious learning. To test this, an experiment compared the effectiveness of positive verbal information and positive vicarious learning interventions for reducing vicariously acquired fears in children (7-9 years). Both vicarious and informational fear reduction interventions were found to be equally effective at reducing vicariously acquired fears, suggesting that acquisition and intervention pathways do not need to match for successful fear reduction. This has significant implications for parents and those working with children because it suggests that providing children with positive information or positive vicarious learning immediately after a negative modelling event may prevent more serious fears developing.

  9. Vicarious Learning from Human Models in Monkeys

    OpenAIRE

    Falcone, Rossella; Brunamonti, Emiliano; Genovesio, Aldo

    2012-01-01

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was app...

  10. Vicarious learning through capturing task-directed discussions

    Directory of Open Access Journals (Sweden)

    F. Dineen

    1999-12-01

    The research programme on vicarious learning, part of which we report in this paper, has been aimed at exploring the idea that learning can be facilitated by providing learners with access to the experiences of other learners. We use Bandura's term vicarious learning to describe this (Bandura, 1986), and we believe it to be a paradigm that offers particular promise when seen as an innovative way of exploiting recent technical advances in multimedia and distance learning technologies. It offers the prospect of a real alternative to the building of intelligent tutors (which directly address the problem of allowing learners access to dialogue, but which have proved largely intractable in practice) or to the direct support of live dialogues (which do not offer a solution to the problem of providing 'live' tutors, unless they are between peer learners). In the research reported here our main objectives were to develop techniques to facilitate learners' access to, especially, dialogues and discussions which have arisen when other learners were faced with similar issues or problems in understanding the material. This required us to investigate means of indexing and retrieving appropriate dialogues and build on these to create an advanced prototype system for use in educational settings.

  11. Neural signals of vicarious extinction learning.

    Science.gov (United States)

    Golkar, Armita; Haaker, Jan; Selbing, Ida; Olsson, Andreas

    2016-10-01

    Social transmission of both threat and safety is ubiquitous, but little is known about the neural circuitry underlying vicarious safety learning. This is surprising given that these processes are critical to flexibly adapt to a changeable environment. To address how the expression of previously learned fears can be modified by the transmission of social information, two conditioned stimuli (CS+s) were paired with shock and a third was not. During extinction, we held constant the amount of direct, non-reinforced, exposure to the CSs (i.e. direct extinction), and critically varied whether another individual, acting as a demonstrator, experienced safety (CS+ vic safety) or aversive reinforcement (CS+ vic reinf). During extinction, ventromedial prefrontal cortex (vmPFC) responses to the CS+ vic reinf increased but decreased to the CS+ vic safety. This pattern of vmPFC activity was reversed during a subsequent fear reinstatement test, suggesting a temporal shift in the involvement of the vmPFC. Moreover, only the CS+ vic reinf association recovered. Our data suggest that vicarious extinction prevents the return of conditioned fear responses, and that this efficacy is reflected by diminished vmPFC involvement during extinction learning. The present findings may have important implications for understanding how social information influences the persistence of fear memories in individuals suffering from emotional disorders. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  12. Vicarious reinforcement learning signals when instructing others.

    Science.gov (United States)

    Apps, Matthew A J; Lesage, Elise; Ramnani, Narender

    2015-02-18

    Reinforcement learning (RL) theory posits that learning is driven by discrepancies between the predicted and actual outcomes of actions (prediction errors [PEs]). In social environments, learning is often guided by similar RL mechanisms. For example, teachers monitor the actions of students and provide feedback to them. This feedback evokes PEs in students that guide their learning. We report the first study that investigates the neural mechanisms that underpin RL signals in the brain of a teacher. Neurons in the anterior cingulate cortex (ACC) signal PEs when learning from the outcomes of one's own actions but also signal information when outcomes are received by others. Does a teacher's ACC signal PEs when monitoring a student's learning? Using fMRI, we studied brain activity in human subjects (teachers) as they taught a confederate (student) action-outcome associations by providing positive or negative feedback. We examined activity time-locked to the students' responses, when teachers infer student predictions and know actual outcomes. We fitted a RL-based computational model to the behavior of the student to characterize their learning, and examined whether a teacher's ACC signals when a student's predictions are wrong. In line with our hypothesis, activity in the teacher's ACC covaried with the PE values in the model. Additionally, activity in the teacher's insula and ventromedial prefrontal cortex covaried with the predicted value according to the student. Our findings highlight that the ACC signals PEs vicariously for others' erroneous predictions, when monitoring and instructing their learning. These results suggest that RL mechanisms, processed vicariously, may underpin and facilitate teaching behaviors. Copyright © 2015 Apps et al.
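
    The prediction-error (PE) account summarized above can be made concrete with a short sketch. The Python snippet below is illustrative only, not the authors' fitted model: the function name, the learning rate alpha, and the outcome coding (1.0 for positive feedback) are assumptions, chosen to show how a PE is the difference between actual and predicted outcomes and how it drives the value update that an observing teacher could track vicariously.

```python
# Minimal sketch (not the authors' exact model): a Rescorla-Wagner-style
# update illustrating the prediction-error (PE) idea described above.
# The learning rate `alpha` and the outcome coding are assumptions.

def rescorla_wagner_update(value, outcome, alpha=0.3):
    """Return the PE and the updated value estimate for one trial."""
    prediction_error = outcome - value            # PE: actual minus predicted outcome
    new_value = value + alpha * prediction_error  # learning is driven by the PE
    return prediction_error, new_value

# Example: a "student" repeatedly receives positive feedback (outcome = 1.0)
# for an action whose value estimate starts at 0. A "teacher" who tracks the
# same value estimate can compute the student's PE vicariously on each trial.
value = 0.0
for trial in range(5):
    pe, value = rescorla_wagner_update(value, outcome=1.0)
    print(f"trial {trial + 1}: PE = {pe:.3f}, value = {value:.3f}")
```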

  13. A comparison of positive vicarious learning and verbal information for reducing vicariously learned fear

    OpenAIRE

    Reynolds, Gemma; Wasely, David; Dunne, Guler; Askew, Chris

    2017-01-01

    Research with children has demonstrated that both positive vicarious learning (modelling) and positive verbal information can reduce children’s acquired fear responses for a particular stimulus. However, this fear reduction appears to be more effective when the intervention pathway matches the initial fear learning pathway. That is, positive verbal information is a more effective intervention than positive modelling when fear is originally acquired via negative verbal information. Research ha...

  14. The vicarious learning pathway to fear 40 years on.

    Science.gov (United States)

    Askew, Chris; Field, Andy P

    2008-10-01

    Forty years on from the initial idea that fears could be learnt vicariously through observing other people's responses to a situation or stimulus, this review looks at the evidence for this theory as an explanatory model of clinical fear. First, we review early experimental evidence that fears can be learnt vicariously before turning to the evidence from both primate and human research that clinical fears can be acquired in this way. Finally, we review recent evidence from research on non-anxious children. Throughout the review we highlight problems and areas for future research. We conclude by exploring the likely underlying mechanisms in the vicarious learning of fear and the resulting clinical implications.

  15. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  16. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...
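
    The alternating "coding" and "learning" steps named in this abstract can be illustrated with a brief sketch. The Python snippet below is a simplified, single-modality illustration under stated assumptions, not the paper's algorithm: it runs a greedy, matching-pursuit-style coding pass over a synthetic 1-D signal and then applies a crude gradient-like update to the kernels; in the bimodal case each kernel would carry paired, synchronous audio and video components.

```python
# Minimal sketch of alternating sparse coding and kernel learning
# (an assumption for illustration, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)                # synthetic 1-D "signal"
kernels = rng.standard_normal((3, 16))           # 3 shiftable kernels of length 16
kernels /= np.linalg.norm(kernels, axis=1, keepdims=True)

for iteration in range(10):
    # Coding step: greedily pick (kernel, shift) pairs that best match the residual.
    residual = signal.copy()
    atoms = []                                   # (kernel index, shift, coefficient)
    for _ in range(8):                           # sparsity level: 8 atoms
        best = None
        for k, kern in enumerate(kernels):
            corr = np.correlate(residual, kern, mode="valid")
            shift = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[shift]) > abs(best[2]):
                best = (k, shift, corr[shift])
        k, shift, coef = best
        residual[shift:shift + kernels.shape[1]] -= coef * kernels[k]
        atoms.append(best)

    # Learning step: nudge each used kernel toward the residual it failed to explain.
    for k, shift, coef in atoms:
        patch = residual[shift:shift + kernels.shape[1]]
        kernels[k] += 0.01 * coef * patch        # gradient-like dictionary update
        kernels[k] /= np.linalg.norm(kernels[k]) # keep kernels unit-norm

    print(f"iteration {iteration + 1}: residual energy = {np.sum(residual ** 2):.2f}")
```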

  17. Comparing Learning from Productive Failure and Vicarious Failure

    Science.gov (United States)

    Kapur, Manu

    2014-01-01

    A total of 136 eighth-grade math students from 2 Singapore schools learned from either productive failure (PF) or vicarious failure (VF). PF students "generated" solutions to a complex problem targeting the concept of variance that they had not learned yet before receiving instruction on the targeted concept. VF students…

  18. Types of vicarious learning experienced by pre-dialysis patients

    Directory of Open Access Journals (Sweden)

    Kate McCarthy

    2015-04-01

    Objective: Haemodialysis and peritoneal dialysis renal replacement treatment options are in clinical equipoise, although the cost of haemodialysis to the National Health Service is £16,411/patient/year greater than peritoneal dialysis. Treatment decision-making takes place during the pre-dialysis year when estimated glomerular filtration rate drops to between 15 and 30 mL/min/1.73 m². Renal disease can be familial, and the majority of patients have considerable health service experience when they approach these treatment decisions. Factors affecting patient treatment decisions are currently unknown. The objective of this article is to explore data from a wider study in specific relation to the types of vicarious learning experiences reported by pre-dialysis patients. Methods: A qualitative study utilised unstructured interviews and grounded theory analysis during the participant's pre-dialysis year. The interview cohort comprised 20 pre-dialysis participants between 24 and 80 years of age. Grounded theory design entailed thematic sampling and analysis, scrutinised by secondary coding and checked with participants. Participants were recruited from routine renal clinics at two local hospitals when their estimated glomerular filtration rate was between 15 and 30 mL/min/1.73 m². Results: Vicarious learning that contributed to treatment decision-making fell into three main categories: planned vicarious learning, unplanned vicarious learning and historical vicarious experiences. Conclusion: Exploration and acknowledgement of service users' prior vicarious learning, by healthcare professionals, is important in understanding its potential influences on individuals' treatment decision-making. This will enable healthcare professionals to challenge heuristic decisions based on limited information and to encourage analytic thought processes.

  19. Types of vicarious learning experienced by pre-dialysis patients.

    Science.gov (United States)

    McCarthy, Kate; Sturt, Jackie; Adams, Ann

    2015-01-01

    Haemodialysis and peritoneal dialysis renal replacement treatment options are in clinical equipoise, although the cost of haemodialysis to the National Health Service is £16,411/patient/year greater than peritoneal dialysis. Treatment decision-making takes place during the pre-dialysis year when estimated glomerular filtration rate drops to between 15 and 30 mL/min/1.73 m². Renal disease can be familial, and the majority of patients have considerable health service experience when they approach these treatment decisions. Factors affecting patient treatment decisions are currently unknown. The objective of this article is to explore data from a wider study in specific relation to the types of vicarious learning experiences reported by pre-dialysis patients. A qualitative study utilised unstructured interviews and grounded theory analysis during the participant's pre-dialysis year. The interview cohort comprised 20 pre-dialysis participants between 24 and 80 years of age. Grounded theory design entailed thematic sampling and analysis, scrutinised by secondary coding and checked with participants. Participants were recruited from routine renal clinics at two local hospitals when their estimated glomerular filtration rate was between 15 and 30 mL/min/1.73 m². Vicarious learning that contributed to treatment decision-making fell into three main categories: planned vicarious learning, unplanned vicarious learning and historical vicarious experiences. Exploration and acknowledgement of service users' prior vicarious learning, by healthcare professionals, is important in understanding its potential influences on individuals' treatment decision-making. This will enable healthcare professionals to challenge heuristic decisions based on limited information and to encourage analytic thought processes.

  20. Learning to fear a second-order stimulus following vicarious learning

    OpenAIRE

    Reynolds, G; Field, AP; Askew, C

    2015-01-01

    Vicarious fear learning refers to the acquisition of fear via observation of the fearful responses of others. The present study aims to extend current knowledge by exploring whether second-order vicarious fear learning can be demonstrated in children. That is, whether vicariously learnt fear responses for one stimulus can be elicited in a second stimulus associated with that initial stimulus. Results demonstrated that children’s (5–11 years) fear responses for marsupials and caterpillars incr...

  1. Vicarious extinction learning during reconsolidation neutralizes fear memory

    NARCIS (Netherlands)

    Golkar, A.; Tjaden, C.; Kindt, M.

    Background: Previous studies have suggested that fear memories can be updated when recalled, a process referred to as reconsolidation. Given the beneficial effects of model-based safety learning (i.e. vicarious extinction) in preventing the recovery of short-term fear memory, we examined whether

  2. Promoting Vicarious Learning of Physics Using Deep Questions with Explanations

    Science.gov (United States)

    Craig, Scotty D.; Gholson, Barry; Brittingham, Joshua K.; Williams, Joah L.; Shubeck, Keith T.

    2012-01-01

    Two experiments explored the role of vicarious "self" explanations in facilitating student learning gains during computer-presented instruction. In Exp. 1, college students with low or high knowledge on Newton's laws were tested in four conditions: (a) monologue (M), (b) questions (Q), (c) explanation (E), and (d) question + explanation (Q + E).…

  3. FACTORS INFLUENCING VICARIOUS LEARNING MECHANISM EFFECTIVENESS WITHIN ORGANIZATIONS

    OpenAIRE

    JOHN R. VOIT; COLIN G. DRURY

    2013-01-01

    As organizations become larger, it becomes increasingly difficult to share lessons-learned across their disconnected units, allowing individuals to learn vicariously from each other's experiences. This lesson-learned information is often unsolicited by the recipient group or individual and requires an individual or group to react to the information to yield benefits for the organization. Data were collected using 39 interviews and 582 survey responses that proved the effects of information usefu...

  4. Vicarious learning revisited: a contemporary behavior analytic interpretation.

    Science.gov (United States)

    Masia, C L; Chase, P N

    1997-03-01

    Beginning in the 1960s, social learning theorists argued that behavioral learning principles could not account for behavior acquired through observation. Such a viewpoint is still widely held today. This rejection of behavioral principles in explaining vicarious learning was based on three phenomena: (1) imitation that occurred without direct reinforcement of the observer's behavior; (2) imitation that occurred after a long delay following modeling; and (3) a greater probability of imitation of the model's reinforced behavior than of the model's nonreinforced or punished behavior. These observations convinced social learning theorists that cognitive variables were required to explain behavior. Such a viewpoint has progressed aggressively, as evidenced by the change in name from social learning theory to social cognitive theory, and has been accompanied by the inclusion of information-processing theory. Many criticisms of operant theory, however, have ignored the full range of behavioral concepts and principles that have been derived to account for complex behavior. This paper will discuss some problems with the social learning theory explanation of vicarious learning and provide an interpretation of vicarious learning from a contemporary behavior analytic viewpoint.

  5. Scanning and vicarious learning from adverse events in health care

    Directory of Open Access Journals (Sweden)

    2001-01-01

    Studies have shown that serious adverse clinical events occur in approximately 3%-10% of acute care hospital admissions, and one third of these adverse events result in permanent disability or death. These findings have led to calls for national medical error reporting systems and for greater organizational learning by hospitals. But do hospitals and hospital personnel pay enough attention to such risk information that they might learn from each other's failures or adverse events? This paper gives an overview of the importance of scanning and vicarious learning from adverse events. In it I propose that health care organizations' attention and information focus, organizational affinity, and absorptive capacity may each influence scanning and vicarious learning outcomes. Implications for future research are discussed.

  6. Vicarious Fear Learning Depends on Empathic Appraisals and Trait Empathy.

    Science.gov (United States)

    Olsson, Andreas; McMahon, Kibby; Papenberg, Goran; Zaki, Jamil; Bolger, Niall; Ochsner, Kevin N

    2016-01-01

    Empathy and vicarious learning of fear are increasingly understood as separate phenomena, but the interaction between the two remains poorly understood. We investigated how social (vicarious) fear learning is affected by empathic appraisals by asking participants to either enhance or decrease their empathic responses to another individual (the demonstrator), who received electric shocks paired with a predictive conditioned stimulus. A third group of participants received no appraisal instructions and responded naturally to the demonstrator. During a later test, participants who had enhanced their empathy evinced the strongest vicarious fear learning as measured by skin conductance responses to the conditioned stimulus in the absence of the demonstrator. Moreover, this effect was augmented in observers high in trait empathy. Our results suggest that a demonstrator's expression can serve as a "social" unconditioned stimulus (US), similar to a personally experienced US in Pavlovian fear conditioning, and that learning from a social US depends on both empathic appraisals and the observers' stable traits. © The Author(s) 2015.

  7. Vicarious learning and the development of fears in childhood.

    Science.gov (United States)

    Askew, Chris; Field, Andy P

    2007-11-01

    Vicarious learning has long been assumed to be an indirect pathway to fear; however, there is only retrospective evidence that children acquire fears in this way. In two experiments, children (aged 7-9 years) were exposed to pictures of novel animals paired with pictures of either scared, happy or no facial expressions to see the impact on their fear cognitions and avoidance behavior about the animals. In Experiment 1, directly (self-report) and indirectly measured (affective priming) fear attitudes towards the animals changed congruent with the facial expressions with which these were paired. The indirectly measured fear beliefs persisted up to 3 months. Experiment 2 showed that children took significantly longer to approach a box they believed to contain an animal they had previously seen paired with scared faces. These results support theories of fear acquisition that suppose that vicarious learning affects cognitive and behavioral fear emotion, and suggest possibilities for interventions to weaken fear acquired in this way.

  8. Vicarious extinction learning during reconsolidation neutralizes fear memory.

    Science.gov (United States)

    Golkar, Armita; Tjaden, Cathelijn; Kindt, Merel

    2017-05-01

    Previous studies have suggested that fear memories can be updated when recalled, a process referred to as reconsolidation. Given the beneficial effects of model-based safety learning (i.e. vicarious extinction) in preventing the recovery of short-term fear memory, we examined whether consolidated long-term fear memories could be updated with safety learning accomplished through vicarious extinction learning initiated within the reconsolidation time-window. We assessed this in a final sample of 19 participants that underwent a three-day within-subject fear-conditioning design, using fear-potentiated startle as our primary index of fear learning. On day 1, two fear-relevant stimuli (reinforced CSs) were paired with shock (US) and a third stimulus served as a control (CS). On day 2, one of the two previously reinforced stimuli (the reminded CS) was presented once in order to reactivate the fear memory 10 min before vicarious extinction training was initiated for all CSs. The recovery of the fear memory was tested 24 h later. Vicarious extinction training conducted within the reconsolidation time window specifically prevented the recovery of the reactivated fear memory (p = 0.03), while leaving fear-potentiated startle responses to the non-reactivated cue intact (p = 0.62). These findings are relevant to both basic and clinical research, suggesting that a safe, non-invasive model-based exposure technique has the potential to enhance the efficiency and durability of anxiolytic therapies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Effect of vicarious fear learning on children's heart rate responses and attentional bias for novel animals

    OpenAIRE

    Reynolds, G; Field, AP; Askew, C

    2014-01-01

    Research with children has shown that vicarious learning can result in changes to 2 of Lang's (1968) 3 anxiety response systems: subjective report and behavioral avoidance. The current study extended this research by exploring the effect of vicarious learning on physiological responses (Lang's final response system) and attentional bias. The study used Askew and Field's (2007) vicarious learning procedure and demonstrated fear-related increases in children's cognitive, behavioral, and physiol...

  10. Enabling the Development of Student Teacher Professional Identity through Vicarious Learning during an Educational Excursion

    Science.gov (United States)

    Steenekamp, Karen; van der Merwe, Martyn; Mehmedova, Aygul Salieva

    2018-01-01

    This paper explores the views of student teachers who were provided vicarious learning opportunities during an educational excursion, and how the learning enabled them to develop their teacher professional identity. This qualitative research study, using a social-constructivist lens highlights how vicarious learning influenced student teachers'…

  11. Effect of vicarious fear learning on children's heart rate responses and attentional bias for novel animals.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2014-10-01

    Research with children has shown that vicarious learning can result in changes to 2 of Lang's (1968) 3 anxiety response systems: subjective report and behavioral avoidance. The current study extended this research by exploring the effect of vicarious learning on physiological responses (Lang's final response system) and attentional bias. The study used Askew and Field's (2007) vicarious learning procedure and demonstrated fear-related increases in children's cognitive, behavioral, and physiological responses. Cognitive and behavioral changes were retested 1 week and 1 month later, and remained elevated. In addition, a visual search task demonstrated that fear-related vicarious learning creates an attentional bias for novel animals, which is moderated by increases in fear beliefs during learning. The findings demonstrate that vicarious learning leads to lasting changes in all 3 of Lang's anxiety response systems and is sufficient to create attentional bias to threat in children. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  12. Teaching Parents about Responsive Feeding through a Vicarious Learning Video: A Pilot Randomized Controlled Trial

    Science.gov (United States)

    Ledoux, Tracey; Robinson, Jessica; Baranowski, Tom; O'Connor, Daniel P.

    2018-01-01

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using community-based participatory research methods.…

  13. Teaching parents about responsive feeding through a vicarious learning video: A pilot randomized controlled trial

    Science.gov (United States)

    The American Academy of Pediatrics and World Health Organization recommend responsive feeding (RF) to promote healthy eating behaviors in early childhood. This project developed and tested a vicarious learning video to teach parents RF practices. A RF vicarious learning video was developed using com...

  14. Promoting Constructive Activities that Support Vicarious Learning during Computer-Based Instruction

    Science.gov (United States)

    Gholson, Barry; Craig, Scotty D.

    2006-01-01

    This article explores several ways computer-based instruction can be designed to support constructive activities and promote deep-level comprehension during vicarious learning. Vicarious learning, discussed in the first section, refers to knowledge acquisition under conditions in which the learner is not the addressee and does not physically…

  15. Effects of Competition on Students' Self-Efficacy in Vicarious Learning

    Science.gov (United States)

    Chan, Joanne C. Y.; Lam, Shui-fong

    2008-01-01

    Background: Vicarious learning is one of the fundamental sources of self-efficacy that is frequently employed in educational settings. However, little research has investigated the effects of competition on students' writing self-efficacy when they engage in vicarious learning. Aim: This study compared the effects of competitive and…

  16. Examining the Effect of Small Group Discussions and Question Prompts on Vicarious Learning Outcomes

    Science.gov (United States)

    Lee, Yekyung; Ertmer, Peggy A.

    2006-01-01

    This study investigated the effect of group discussions and question prompts on students' vicarious learning experiences. Vicarious experiences were delivered to 65 preservice teachers via VisionQuest, a Web site that provided examples of successful technology integration. A 2x2 factorial research design employed group discussions and question…

  17. Other people as means to a safe end: vicarious extinction blocks the return of learned fear.

    Science.gov (United States)

    Golkar, Armita; Selbing, Ida; Flygare, Oskar; Ohman, Arne; Olsson, Andreas

    2013-11-01

    Information about what is dangerous and safe in the environment is often transferred from other individuals through social forms of learning, such as observation. Past research has focused on the observational, or vicarious, acquisition of fears, but little is known about how social information can promote safety learning. To address this issue, we studied the effects of vicarious-extinction learning on the recovery of conditioned fear. Compared with a standard extinction procedure, vicarious extinction promoted better extinction and effectively blocked the return of previously learned fear. We confirmed that these effects could not be attributed to the presence of a learning model per se but were specifically driven by the model's experience of safety. Our results confirm that vicarious and direct emotional learning share important characteristics but that social-safety information promotes superior down-regulation of learned fear. These findings have implications for emotional learning, social-affective processes, and clinical practice.

  18. Learning to fear a second-order stimulus following vicarious learning.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2017-04-01

    Vicarious fear learning refers to the acquisition of fear via observation of the fearful responses of others. The present study aims to extend current knowledge by exploring whether second-order vicarious fear learning can be demonstrated in children. That is, whether vicariously learnt fear responses for one stimulus can be elicited in a second stimulus associated with that initial stimulus. Results demonstrated that children's (5-11 years) fear responses for marsupials and caterpillars increased when they were seen with fearful faces compared to no faces. Additionally, the results indicated a second-order effect in which fear-related learning occurred for other animals seen together with the fear-paired animal, even though the animals were never observed with fearful faces themselves. Overall, the findings indicate that for children in this age group vicariously learnt fear-related responses for one stimulus can subsequently be observed for a second stimulus without it being experienced in a fear-related vicarious learning event. These findings may help to explain why some individuals do not recall involvement of a traumatic learning episode in the development of their fear of a specific stimulus.

  19. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  20. Inhibition of vicariously learned fear in children using positive modeling and prior exposure.

    Science.gov (United States)

    Askew, Chris; Reynolds, Gemma; Fielding-Smith, Sarah; Field, Andy P

    2016-02-01

    One of the challenges to conditioning models of fear acquisition is to explain how different individuals can experience similar learning events and only some of them subsequently develop fear. Understanding factors moderating the impact of learning events on fear acquisition is key to understanding the etiology and prevention of fear in childhood. This study investigates these moderators in the context of vicarious (observational) learning. Two experiments tested predictions that the acquisition or inhibition of fear via vicarious learning is driven by associative learning mechanisms similar to direct conditioning. In Experiment 1, 3 groups of children aged 7 to 9 years received 1 of 3 inhibitive information interventions - psychoeducation, factual information, or no information (control) - prior to taking part in a vicarious fear learning procedure. In Experiment 2, 3 groups of children aged 7 to 10 years received 1 of 3 observational learning interventions - positive modeling (immunization), observational familiarity (latent inhibition), or no prevention (control) - before vicarious fear learning. Results indicated that observationally delivered manipulations inhibited vicarious fear learning, while preventions presented via written information did not. These findings confirm that vicarious learning shares some of the characteristics of direct conditioning and can explain why not all individuals will develop fear following a vicarious learning event. They also suggest that the modality of inhibitive learning is important and should match the fear learning pathway for increased chances of inhibition. Finally, the results demonstrate that positive modeling is likely to be a particularly effective method for preventing fear-related observational learning in children. (c) 2016 APA, all rights reserved.

  1. Stimulus fear relevance and the speed, magnitude, and robustness of vicariously learned fear.

    Science.gov (United States)

    Dunne, Güler; Reynolds, Gemma; Askew, Chris

    2017-08-01

    Superior learning for fear-relevant stimuli is typically indicated in the laboratory by faster acquisition of fear responses, greater learned fear, and enhanced resistance to extinction. Three experiments investigated the speed, magnitude, and robustness of UK children's (6-10 years; N = 290; 122 boys, 168 girls) vicariously learned fear responses for three types of stimuli. In two experiments, children were presented with pictures of novel animals (Australian marsupials) and flowers (fear-irrelevant stimuli) alone (control) or together with faces expressing fear or happiness. To determine learning speed the number of stimulus-face pairings seen by children was varied (1, 10, or 30 trials). Robustness of learning was examined via repeated extinction procedures over 3 weeks. A third experiment compared the magnitude and robustness of vicarious fear learning for snakes and marsupials. Significant increases in fear responses were found for snakes, marsupials and flowers. There was no indication that vicarious learning for marsupials was faster than for flowers. Moreover, vicariously learned fear was neither greater nor more robust for snakes compared to marsupials, or for marsupials compared to flowers. These findings suggest that for this age group stimulus fear relevance may have little influence on vicarious fear learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Hybrid E-Learning Tool TransLearning: Video Storytelling to Foster Vicarious Learning within Multi-Stakeholder Collaboration Networks

    Science.gov (United States)

    van der Meij, Marjoleine G.; Kupper, Frank; Beers, Pieter J.; Broerse, Jacqueline E. W.

    2016-01-01

    E-learning and storytelling approaches can support informal vicarious learning within geographically widely distributed multi-stakeholder collaboration networks. This case study evaluates hybrid e-learning and video-storytelling approach "TransLearning" by investigation into how its storytelling e-tool supported informal vicarious…

  3. The Deep-Level-Reasoning-Question Effect: The Role of Dialogue and Deep-Level-Reasoning Questions during Vicarious Learning

    Science.gov (United States)

    Craig, Scotty D.; Sullins, Jeremiah; Witherspoon, Amy; Gholson, Barry

    2006-01-01

    We investigated the impact of dialogue and deep-level-reasoning questions on vicarious learning in 2 studies with undergraduates. In Experiment 1, participants learned material by interacting with AutoTutor or by viewing 1 of 4 vicarious learning conditions: a noninteractive recorded version of the AutoTutor dialogues, a dialogue with a…

  4. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  5. Vicarious learning during simulations: is it more effective than hands-on training?

    Science.gov (United States)

    Stegmann, Karsten; Pilz, Florian; Siebeck, Matthias; Fischer, Frank

    2012-10-01

    Doctor-patient communication skills are often fostered by using simulations with standardised patients (SPs). The efficiency of such experiences is greater if student observers learn at least as much from the simulation as do students who actually interact with the patient. This study aimed to investigate whether the type of simulation-based learning (learning by doing versus vicarious learning) and the order in which these activities are carried out (learning by doing → vicarious learning versus vicarious learning → learning by doing) have any effect on the acquisition of knowledge on effective doctor-patient communication strategies. In addition, we wished to examine the extent to which an observation script and a feedback formulation script affect knowledge acquisition in this domain. The sample consisted of 200 undergraduate medical students (126 female, 74 male). They participated in two separate simulation sessions, each of which was 30 minutes long and was followed by a collaborative peer feedback phase. Half of the students first performed (learning by doing) and then observed (vicarious learning) the simulation, and the other half participated in the reverse order. Knowledge of doctor-patient communication was measured before, between and after the simulations. Vicarious learning led to greater knowledge of doctor-patient communication scores than learning by doing. The order in which vicarious learning was experienced had no influence. The inclusion of an observation script also enabled significantly greater learning in students to whom this script was given compared with students who were not supported in this way, but the presence of a feedback script had no effect. Students appear to learn at least as much, if not more, about doctor-patient communication by observing their peers interact with SPs as they do from interacting with SPs themselves. Instructional support for observing simulations in the form of observation scripts facilitates both…

  6. Effect of Vicarious Fear Learning on Children’s Heart Rate Responses and Attentional Bias for Novel Animals

    Science.gov (United States)

    2014-01-01

    Research with children has shown that vicarious learning can result in changes to 2 of Lang’s (1968) 3 anxiety response systems: subjective report and behavioral avoidance. The current study extended this research by exploring the effect of vicarious learning on physiological responses (Lang’s final response system) and attentional bias. The study used Askew and Field’s (2007) vicarious learning procedure and demonstrated fear-related increases in children’s cognitive, behavioral, and physiological responses. Cognitive and behavioral changes were retested 1 week and 1 month later, and remained elevated. In addition, a visual search task demonstrated that fear-related vicarious learning creates an attentional bias for novel animals, which is moderated by increases in fear beliefs during learning. The findings demonstrate that vicarious learning leads to lasting changes in all 3 of Lang’s anxiety response systems and is sufficient to create attentional bias to threat in children. PMID:25151521

  7. Vicarious Learning and Reduction of Fear in Children via Adult and Child Models.

    Science.gov (United States)

    Dunne, Güler; Askew, Chris

    2017-06-01

    Children can learn to fear stimuli vicariously, by observing adults' or peers' responses to them. Given that much of school-age children's time is typically spent with their peers, it is important to establish whether fear learning from peers is as effective or robust as learning from adults, and also whether peers can be successful positive models for reducing fear. During a vicarious fear learning procedure, children (6 to 10 years; N = 60) were shown images of novel animals together with images of adult or peer faces expressing fear. Later they saw their fear-paired animal again together with positive emotional adult or peer faces. Children's fear beliefs and avoidance for the animals increased following vicarious fear learning and decreased following positive vicarious counterconditioning. There was little evidence of differences in learning from adults and peers, demonstrating that for this age group peer models are effective models for both fear acquisition and reduction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Vicarious Learning in PBL Variants for Learning Electronics

    Science.gov (United States)

    Podges, Martin; Kommers, Piet

    2017-01-01

    Three different groups in a class of first-year tertiary engineering students had to solve a problem based on a project by applying the distinctive problem-based learning (PBL) approach. Each group's project (PBL project) was then studied by the other two groups after successful completion and demonstration. Each group then had to study the…

  9. Differential influence of social versus isolate housing on vicarious fear learning in adolescent mice.

    Science.gov (United States)

    Panksepp, Jules B; Lahvis, Garet P

    2016-04-01

    Laboratory rodents can adopt the pain or fear of nearby conspecifics. This phenotype conceptually lies within the domain of empathy, a bio-psycho-social process through which individuals come to share each other's emotion. Using a model of cue-conditioned fear, we show here that the expression of vicarious fear varies with respect to whether mice are raised socially or in solitude during adolescence. The impact of the adolescent housing environment was selective: (a) vicarious fear was more influenced than directly acquired fear, (b) "long-term" (24-h postconditioning) vicarious fear memories were stronger than "short-term" (15-min postconditioning) memories in socially reared mice whereas the opposite was true for isolate mice, and (c) females were more fearful than males. Housing differences during adolescence did not alter the general mobility of mice or their vocal response to receiving the unconditioned stimulus. Previous work with this mouse model underscored a genetic influence on vicarious fear learning, and the present study complements these findings by elucidating an interaction between the adolescent social environment and vicarious experience. Collectively, these findings are relevant to developing models of empathy amenable to mechanistic exploitation in the laboratory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Memory and learning with rapid audiovisual sequences

    Science.gov (United States)

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  11. Memory and learning with rapid audiovisual sequences.

    Science.gov (United States)

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  12. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  13. Stimulus fear-relevance and the vicarious learning pathway to childhood fears

    OpenAIRE

    Askew, C.; Dunne, G.; Ozdil, A.; Reynolds, G.; Field, A.P.

    2013-01-01

    Enhanced fear learning for fear-relevant stimuli has been demonstrated in procedures with adults in the laboratory. Three experiments investigated the effect of stimulus fear-relevance on vicarious fear learning in children (aged 6-11 years). Pictures of stimuli with different levels of fear-relevance (flowers, caterpillars, snakes, worms, and Australian marsupials) were presented alone or together with scared faces. In line with previous studies, children's fear beliefs and avoidance prefere...

  14. Vicarious learning and unlearning of fear in childhood via mother and stranger models.

    Science.gov (United States)

    Dunne, Güler; Askew, Chris

    2013-10-01

    Evidence shows that anxiety runs in families. One reason may be that children are particularly susceptible to learning fear from their parents. The current study compared children's fear beliefs and avoidance preferences for animals following positive or fearful modeling by mothers and strangers in vicarious learning and unlearning procedures. Children aged 6 to 10 years (N = 60) were exposed to pictures of novel animals either alone (control) or together with pictures of their mother or a stranger expressing fear or happiness. During unlearning (counterconditioning), children saw each animal again with their mother or a stranger expressing the opposite facial expression. Following vicarious learning, children's fear beliefs increased for animals seen with scared faces and this effect was the same whether fear was modeled by mothers or strangers. Fear beliefs and avoidance preferences decreased following positive counterconditioning and increased following fear counterconditioning. Again, learning was the same whether the model was the child's mother or a stranger. These findings indicate that children in this age group can vicariously learn and unlearn fear-related cognitions from both strangers and mothers. This has implications for our understanding of fear acquisition and the development of early interventions to prevent and reverse childhood fears and phobias.

  15. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audiovisual mass media news in language learning. Two important issues in the selection and preparation of TV news for language learning are the content of the news and its linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  16. Vicariously learned helplessness: the role of perceived dominance and prestige of a model.

    Science.gov (United States)

    Chambers, Sheridan; Hammonds, Frank

    2014-01-01

    Prior research has examined the relationship between various model characteristics (e.g., age, competence, similarity) and the likelihood that the observers will experience vicariously learned helplessness. However, no research in this area has investigated dominance as a relevant model characteristic. This study explored whether the vicarious acquisition of learned helplessness could be mediated by the perceived dominance of a model. Participants observed a model attempting to solve anagrams. Across participant groups, the model displayed either dominant or nondominant characteristics and was either successful or unsuccessful at solving the anagrams. The characteristics displayed by the model significantly affected observers' ratings of his dominance and prestige. After viewing the model, participants attempted to solve 40 anagrams. When the dominant model was successful, observers solved significantly more anagrams than when he was unsuccessful. This effect was not found when the model was nondominant.

  17. Learning from the Pros: Influence of Web-Based Expert Commentary on Vicarious Learning about Financial Markets

    Science.gov (United States)

    Ford, Matthew W.; Kent, Daniel W.; Devoto, Steven

    2007-01-01

    Web-based financial commentary, in which experts routinely express market-related thought processes, is proposed as a means for college students to learn vicariously about financial markets. Undergraduate business school students from a regional university were exposed to expert market commentary from a single financial Web site for a 6-week…

  18. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    Science.gov (United States)

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight-channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  19. Vicarious neural processing of outcomes during observational learning.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  20. Vicarious neural processing of outcomes during observational learning.

    Science.gov (United States)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on the ability to map the actions of others onto our own, process outcomes, and combine these sources of information. Here, we combined newly developed experimental tasks and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms that govern such observational learning. Results show that the neural systems involved in individual trial-and-error learning and in action observation and execution both participate in observational learning. In addition, we identified brain areas that specifically activate for others' incorrect outcomes during learning in the posterior medial frontal cortex (pMFC), the anterior insula and the posterior superior temporal sulcus (pSTS).

  1. Vicarious Neural Processing of Outcomes during Observational Learning

    NARCIS (Netherlands)

    Monfardini, Elisabetta; Gazzola, Valeria; Boussaoud, Driss; Brovelli, Andrea; Keysers, Christian; Wicker, Bruno

    2013-01-01

    Learning what behaviour is appropriate in a specific context by observing the actions of others and their outcomes is a key constituent of human cognition, because it saves time and energy and reduces exposure to potentially dangerous situations. Observational learning of associative rules relies on

  2. Macaque monkeys can learn token values from human models through vicarious reward.

    Science.gov (United States)

    Bevacqua, Sara; Cerasti, Erika; Falcone, Rossella; Cervelloni, Milena; Brunamonti, Emiliano; Ferraina, Stefano; Genovesio, Aldo

    2013-01-01

    Monkeys can learn the symbolic meaning of tokens and exchange them to get a reward. Monkeys can also learn the symbolic value of a token by observing conspecifics, but it is not clear whether they can learn passively by observing other actors, e.g., humans. To answer this question, we tested two monkeys in a token exchange paradigm in three experiments. Monkeys learned token values through observation of human models exchanging them. After a phase of object familiarization, we used different sets of tokens. One token of each set was rewarded with a bit of apple; the other tokens had zero value (neutral tokens). Each token was presented in only one set. During the observation phase, monkeys watched the human model exchange tokens and consume the rewards (vicarious reward). In the test phase, the monkeys were asked to exchange one of the tokens for a food reward. Sets of three tokens were used in the first experiment and sets of two tokens in the second and third experiments. In the first and second experiments, the valuable token was presented with different probabilities during the observation phase, and the monkeys exchanged the valuable token more frequently than any of the neutral tokens. The third experiment examined the effect of unequal probabilities. Our results support the view that monkeys can learn from non-conspecific actors through vicarious reward, even in a symbolic task such as token exchange.

  3. Vicarious Versus Traditional Learning in Biology: A Case of Sexually ...

    African Journals Online (AJOL)

    The purpose of this study was to compare learning about sexually transmitted infections in Biology through observation versus the traditional classroom lecture method ... The study found that the observational method was more effective and preferred by students compared with the traditional lecture method ...

  4. Stimulus fear-relevance and the vicarious learning pathway to childhood fears.

    Science.gov (United States)

    Askew, Chris; Dunne, Güler; Özdil, Zehra; Reynolds, Gemma; Field, Andy P

    2013-10-01

    Enhanced fear learning for fear-relevant stimuli has been demonstrated in procedures with adults in the laboratory. Three experiments investigated the effect of stimulus fear-relevance on vicarious fear learning in children (aged 6-11 years). Pictures of stimuli with different levels of fear-relevance (flowers, caterpillars, snakes, worms, and Australian marsupials) were presented alone or together with scared faces. In line with previous studies, children's fear beliefs and avoidance preferences increased for stimuli they had seen with scared faces. However, in contrast to evidence with adults, learning was mostly similar for all stimulus types irrespective of fear-relevance. The results support a proposal that stimulus preparedness is bypassed when children observationally learn threat-related information from adults.

  5. Audiovisual perceptual learning with multiple speakers.

    Science.gov (United States)

    Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J

    2016-05-01

    One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants with an audiovisual continuum between /aba/ and /ada/. During familiarization, the "B-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "B-face" than with an image of the "D-face." This was not the case in the control condition, in which the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

  6. Vicarious shame.

    Science.gov (United States)

    Welten, Stephanie C M; Zeelenberg, Marcel; Breugelmans, Seger M

    2012-01-01

    We examined an account of vicarious shame that explains how people can experience a self-conscious emotion for the behaviour of another person. Two divergent processes have been put forward to explain how another's behaviour links to the self. The group-based emotion account explains vicarious shame in terms of an in-group member threatening one's social identity by behaving shamefully. The empathy account explains vicarious shame in terms of empathic perspective taking; people imagine themselves in another's shameful behaviour. In three studies using autobiographical recall and experimental inductions, we revealed that both processes can explain why vicarious shame arises in different situations, what variation can be observed in the experience of vicarious shame, and how all vicarious shame can be related to a threat to the self. Results are integrated in a functional account of shame.

  7. Impact of Vicarious Learning Experiences and Goal Setting on Preservice Teachers' Self-Efficacy for Technology Integration: A Pilot Study.

    Science.gov (United States)

    Wang, Ling; Ertmer, Peggy A.

    This pilot study was designed to explore how vicarious learning experiences and goal setting influence preservice teachers' self-efficacy for integrating technology into the classroom. Twenty undergraduate students who were enrolled in an introductory educational technology course at a large midwestern university participated and were assigned…

  8. Spontaneous eye movements and trait empathy predict vicarious learning of fear.

    Science.gov (United States)

    Kleberg, Johan L; Selbing, Ida; Lundqvist, Daniel; Hofvander, Björn; Olsson, Andreas

    2015-12-01

    Learning to predict dangerous outcomes is important to survival. In humans, this kind of learning is often transmitted through the observation of others' emotional responses. We analyzed eye movements during an observational/vicarious fear learning procedure, in which healthy participants (N=33) watched another individual ('learning model') receiving aversive treatment (shocks) paired with a predictive conditioned stimulus (CS+), but not a control stimulus (CS-). Participants' gaze pattern towards the model differentiated as a function of whether the CS was predictive or not of a shock to the model. Consistent with our hypothesis that the face of a conspecific in distress can act as an unconditioned stimulus (US), we found that the total fixation time at a learning model's face increased when the CS+ was shown. Furthermore, we found that the total fixation time at the CS+ during learning predicted participants' conditioned responses (CRs) at a later test in the absence of the model. We also demonstrated that trait empathy was associated with stronger CRs, and that autistic traits were positively related to autonomic reactions to watching the model receiving the aversive treatment. Our results have implications for both healthy and dysfunctional socio-emotional learning. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Movement Sonification: Audiovisual benefits on motor learning

    Directory of Open Access Journals (Sweden)

    Weber Andreas

    2011-12-01

    Full Text Available Processes of motor control and learning in sports as well as in motor rehabilitation are based on perceptual functions and emergent motor representations. Here a new method of movement sonification is described which is designed to tune the auditory system more comprehensively into motor perception and thereby enhance motor learning. Usually silent features of the cyclic movement pattern "indoor rowing" are sonified in real time to make them additionally available to the auditory system when executing the movement. Via real-time sonification, movement perception can be enhanced in terms of temporal precision and multi-channel integration. But besides the contribution of a single perceptual channel to motor perception and motor representation, mechanisms of multisensory integration can also be addressed if movement sonification is configured adequately: multimodal motor representations, consisting of at least visual, auditory and proprioceptive components, can be shaped subtly, resulting in more precise motor control and enhanced motor learning.
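
    As a rough illustration of the parameter-mapping idea behind movement sonification (the mapping and sampling rate below are assumptions for illustration, not the published system), a normalized movement feature can be converted into a tone-frequency track in real time:

        import math

        def sonify(samples, f_min=220.0, f_max=880.0):
            """Map a normalized movement parameter (e.g., rowing-handle velocity
            scaled to 0..1) onto tone frequencies; an audio backend would render them."""
            freqs = []
            for v in samples:
                v = min(max(v, 0.0), 1.0)                   # clamp to the expected range
                freqs.append(f_min * (f_max / f_min) ** v)  # exponential pitch mapping
            return freqs

        # One simulated rowing cycle sampled at 10 Hz.
        cycle = [0.5 * (1 - math.cos(2 * math.pi * t / 10)) for t in range(10)]
        print([round(f, 1) for f in sonify(cycle)])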

  10. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates changes in neuronal response across four repetitions of audio-visual learning. Obtains EEG data from the prefrontal lobe (Fp1, Fp2) of 20 subjects at the 8th grade level. Concludes that habituation of the neuronal response shows up in repetitive audio-visual learning and that brain hemisphericity can be changed by…

  11. Dissociable brain systems mediate vicarious learning of stimulus-response and action-outcome contingencies.

    Science.gov (United States)

    Liljeholm, Mimi; Molloy, Ciara J; O'Doherty, John P

    2012-07-18

    Two distinct strategies have been suggested to support action selection in humans and other animals on the basis of experiential learning: a goal-directed strategy that generates decisions based on the value and causal antecedents of action outcomes, and a habitual strategy that relies on the automatic elicitation of actions by environmental stimuli. In the present study, we investigated whether a similar dichotomy exists for actions that are acquired vicariously, through observation of other individuals rather than through direct experience, and assessed whether these strategies are mediated by distinct brain regions. We scanned participants with functional magnetic resonance imaging while they performed an observational learning task designed to encourage either goal-directed encoding of the consequences of observed actions, or a mapping of observed actions to conditional discriminative cues. Activity in different parts of the action observation network discriminated between the two conditions during observational learning and correlated with the degree of insensitivity to outcome devaluation in subsequent performance. Our findings suggest that, in striking parallel to experiential learning, neural systems mediating the observational acquisition of actions may be dissociated into distinct components: a goal-directed, outcome-sensitive component and a less flexible stimulus-response component.

  12. Academic e-learning experience in the enhancement of open access audiovisual and media education

    OpenAIRE

    Pacholak, Anna; Sidor, Dorota

    2015-01-01

    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM) and is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal for a comprehensive and open programme for the audiovisual (me...

  13. Vicarious learning of children's social-anxiety-related fear beliefs and emotional Stroop bias.

    Science.gov (United States)

    Askew, Chris; Hagel, Anna; Morgan, Julie

    2015-08-01

    Models of social anxiety suggest that negative social experiences contribute to the development of social anxiety, and this is supported by self-report research. However, there is relatively little experimental evidence for the effects of learning experiences on social cognitions. The current study examined the effect of observing a social performance situation with a negative outcome on children's (8 to 11 years old) fear-related beliefs and cognitive processing. Two groups of children were each shown 1 of 2 animated films of a person trying to score in basketball while being observed by others; in 1 film, the outcome was negative, and in the other, it was neutral. Children's fear-related beliefs about performing in front of others were measured before and after the film and children were asked to complete an emotional Stroop task. Results showed that social fear beliefs increased for children who saw the negative social performance film. In addition, these children showed an emotional Stroop bias for social-anxiety-related words compared to children who saw the neutral film. The findings have implications for our understanding of social anxiety disorder and suggest that vicarious learning experiences in childhood may contribute to the development of social anxiety. (c) 2015 APA, all rights reserved).

  14. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times, and participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair) or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased relative to a fixed action-outcome delay. This suggests that participants learn action-based predictions of audiovisual outcomes and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
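
    One way such a window of simultaneity can be quantified is sketched below (hypothetical data and a Gaussian-shaped response curve are assumed; this is not the authors' analysis pipeline): fit the proportion of "simultaneous" responses across stimulus onset asynchronies and measure the range over which the fitted curve exceeds 50%.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical data: stimulus onset asynchronies (ms, negative = audio first)
        # and the proportion of "simultaneous" responses at each SOA.
        soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
        p_simul = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])

        def bell(soa, amp, mu, sigma):
            """Bell-shaped simultaneity curve centred on the point of subjective simultaneity."""
            return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

        (amp, mu, sigma), _ = curve_fit(bell, soas, p_simul, p0=[1.0, 0.0, 100.0])

        # Window of simultaneity: range of SOAs where the fitted curve exceeds 0.5.
        half_width = sigma * np.sqrt(2 * np.log(amp / 0.5))
        print(f"PSS = {mu:.1f} ms, window width = {2 * half_width:.1f} ms")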

  15. Effects of MK-801 on vicarious trial-and-error and reversal of olfactory discrimination learning in weanling rats.

    Science.gov (United States)

    Griesbach, G S; Hu, D; Amsel, A

    1998-12-01

    The effects of dizocilpine maleate (MK-801) on vicarious trial-and-error (VTE), and on simultaneous olfactory discrimination learning and its reversal, were observed in weanling rats. The term VTE was used by Tolman (The determiners of behavior at a choice point. Psychol. Rev. 1938;46:318-336), who described it as conflict-like behavior at a choice-point in simultaneous discrimination learning. It takes the form of head movements from one stimulus to the other, and has recently been proposed by Amsel (Hippocampal function in the rat: cognitive mapping or vicarious trial-and-error? Hippocampus, 1993;3:251-256) as related to hippocampal, nonspatial function during this learning. Weanling male rats received systemic MK-801 either 30 min before the onset of olfactory discrimination training and its reversal, or only before its reversal. The MK-801-treated animals needed significantly more sessions to acquire the discrimination and showed significantly fewer VTEs in the acquisition phase of learning. Impaired reversal learning was shown only when MK-801 was administered during the reversal-learning phase, itself, and not when it was administered throughout both phases.

  16. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  17. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.

  18. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

    2010-06-01

    Full Text Available An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order

  19. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Science.gov (United States)

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be
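
    For readers unfamiliar with how temporal order discrimination thresholds are typically estimated, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical TOJ data (the data and the 75% threshold criterion are illustrative assumptions, not values from this study):

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Hypothetical TOJ data: SOA (ms, positive = visual first) and the
        # proportion of "visual first" responses at each SOA.
        soas = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
        p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

        def psychometric(soa, pse, sigma):
            """Cumulative Gaussian: probability of judging the visual event as first."""
            return norm.cdf(soa, loc=pse, scale=sigma)

        (pse, sigma), _ = curve_fit(psychometric, soas, p_visual_first, p0=[0.0, 50.0])

        # A common threshold definition: the SOA change needed to move from 50% to 75%.
        jnd = sigma * norm.ppf(0.75)
        print(f"PSE = {pse:.1f} ms, temporal order threshold (JND) = {jnd:.1f} ms")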

  20. Vicarious trial-and-error behavior and hippocampal cytochrome oxidase activity during Y-maze discrimination learning in the rat.

    Science.gov (United States)

    Hu, Dan; Xu, Xiaojuan; Gonzalez-Lima, Francisco

    2006-03-01

    The present study investigated whether more vicarious trial-and-error (VTE) behavior, defined by head movement from one stimulus to another at a choice point during simultaneous discriminations, led to better visual discrimination learning in a Y-maze, and whether VTE behavior was a function of the hippocampus by measuring regional brain cytochrome oxidase (C.O.) activity, an index of neuronal metabolic activity. The results showed that the more VTEs a rat made, the better the rat learned the visual discrimination. Furthermore, both learning and VTE behavior during learning were correlated to C.O. activity in the hippocampus, suggesting that the hippocampus plays a role in VTE behavior during discrimination learning.

  1. Text-to-audiovisual speech synthesizer for children with learning disabilities.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun

    2013-01-01

    Learning disabilities affect the ability of children to learn, despite their having normal intelligence. Assistive tools can greatly increase the functional capabilities of children with learning disorders affecting writing, reading, or listening. In this article, we describe a text-to-audiovisual synthesizer that can serve as an assistive tool for such children. The system automatically converts an input text to audiovisual speech, providing synchronization of the head, eye, and lip movements of the three-dimensional face model with appropriate facial expressions and the word flow of the text. The proposed system can enhance speech perception and help children with learning deficits improve their chances of success.
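
    The core of such a text-to-audiovisual pipeline is a mapping from phonemes to visemes (mouth shapes) with timing that an animated face model can consume alongside the synthesized audio. The sketch below is a simplified illustration under assumed mappings and durations; it is not the system described in the article.

        # Hypothetical phoneme-to-viseme mapping; real systems use much larger tables.
        PHONEME_TO_VISEME = {
            "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
            "f": "lower_lip_to_teeth", "v": "lower_lip_to_teeth",
            "a": "open_jaw", "i": "spread_lips", "u": "rounded_lips",
        }

        def text_to_viseme_track(phonemes, ms_per_phoneme=80):
            """Convert a phoneme sequence into (start_ms, viseme) keyframes for a 3-D face model."""
            track = []
            for i, ph in enumerate(phonemes):
                viseme = PHONEME_TO_VISEME.get(ph, "neutral")
                track.append((i * ms_per_phoneme, viseme))
            return track

        print(text_to_viseme_track(["m", "a", "m", "a"]))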

  2. Development of vicarious trial-and-error behavior in odor discrimination learning in the rat: relation to hippocampal function?

    Science.gov (United States)

    Hu, D; Griesbach, G; Amsel, A

    1997-06-01

    Previous work from our laboratory has suggested that hippocampal electrolytic lesions result in a deficit in simultaneous, black-white discrimination learning and reduce the frequency of vicarious trial-and-error (VTE) at a choice-point. VTE is a term Tolman used to describe the rat's conflict-like behavior, moving its head from one stimulus to the other at a choice point, and has been proposed as a major nonspatial feature of hippocampal function in both visual and olfactory discrimination learning. Simultaneous odor discrimination and VTE behavior were examined at three different ages. The results were that 16-day-old pups made fewer VTEs and learned much more slowly than 30- and 60-day-olds, a finding in accord with levels of hippocampal maturity in the rat.

  3. Online Dissection Audio-Visual Resources for Human Anatomy: Undergraduate Medical Students' Usage and Learning Outcomes

    Science.gov (United States)

    Choi-Lundberg, Derek L.; Cuellar, William A.; Williams, Anne-Marie M.

    2016-01-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection…

  4. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    Science.gov (United States)

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response to the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  5. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  6. Independent Interactive Inquiry-Based Learning Modules Using Audio-Visual Instruction In Statistics

    OpenAIRE

    McDaniel, Scott N.; Green, Lisa

    2012-01-01

    Simulations can make complex ideas easier for students to visualize and understand. It has been shown that guidance in the use of these simulations enhances students’ learning. This paper describes the implementation and evaluation of the Independent Interactive Inquiry-based (I3) Learning Modules, which use existing open-source Java applets, combined with audio-visual instruction. Students are guided to discover and visualize important concepts in post-calculus and algebra-based courses in p...

  7. Career Coaches as a Source of Vicarious Learning for Racial and Ethnic Minority PhD Students in the Biomedical Sciences: A Qualitative Study.

    Science.gov (United States)

    Williams, Simon N; Thakore, Bhoomi K; McGee, Richard

    2016-01-01

    Many recent mentoring initiatives have sought to help improve the proportion of underrepresented racial and ethnic minorities (URMs) in academic positions across the biomedical sciences. However, the intractable nature of the problem of underrepresentation suggests that many young scientists may require supplemental career development beyond what many mentors are able to offer. As an adjunct to traditional scientific mentoring, we created a novel academic career "coaching" intervention for PhD students in the biomedical sciences. Our aim was to determine whether and how academic career coaches can provide effective career-development-related learning experiences for URM PhD students in the biomedical sciences. We focus specifically on vicarious learning experiences, in which individuals learn indirectly through the experiences of others. The intervention is being tested as part of a longitudinal randomized controlled trial (RCT). Here, we describe a nested qualitative study, using a framework approach to analyze data from a total of 48 semi-structured interviews with 24 URM PhD students (2 interviews per participant: 1 at baseline and 1 at 12-month follow-up; 16 female, 8 male; 11 Black, 12 Hispanic, 1 Native American). We explored the role of the coach as a source of vicarious learning in relation to the students' goal of becoming future biomedical science faculty. Coaches were resources through which most students in the study were able to learn vicariously about how to pursue, and succeed within, an academic career. Coaches were particularly useful in instances where students' research mentors were unable to provide such vicarious learning opportunities, for example because the mentor was too busy to have career-related discussions with a student, or because they have, or value, a different type of academic career than the type the student hopes to achieve. Coaching can be an important way to address the lack of structured career development that students receive in their home training

  8. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    Science.gov (United States)

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high-fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine whether viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing students' perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final-year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent-group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t = 2.38) were found in relation to the transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, showed some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to the transferability of knowledge, which is vital to quality educational outcomes. Copyright © 2017. Published by
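
    The group comparison named above (two-tailed, independent-group t-tests on survey scores) can be reproduced in a few lines; the scores below are invented purely for illustration and are not the study's data.

        import numpy as np
        from scipy.stats import ttest_ind

        # Hypothetical satisfaction scores (1-5 Likert) for students who saw the
        # audio-visual narrative pre-brief versus those who did not.
        narrative_group = np.array([4, 5, 4, 4, 5, 3, 4, 5, 4, 4])
        control_group = np.array([3, 4, 3, 4, 3, 3, 4, 3, 4, 3])

        t_stat, p_value = ttest_ind(narrative_group, control_group)  # two-tailed by default
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")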

  9. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  10. Left Prefrontal Activity Reflects the Ability of Vicarious Fear Learning: A Functional Near-Infrared Spectroscopy Study

    Directory of Open Access Journals (Sweden)

    Qingguo Ma

    2013-01-01

    Full Text Available Fear can be acquired indirectly via social observation. However, it remains unclear which cortical substrate activities are involved in vicarious fear transmission. The present study aimed to examine empathy-related processes during fear learning by proxy and the activation of the prefrontal cortex, using functional near-infrared spectroscopy. We simultaneously measured participants' hemodynamic responses and skin conductance responses while they were exposed to a movie. In this movie, a demonstrator (i.e., another human being) was receiving classical fear conditioning. A neutral colored square paired with shocks (CSshock) and another colored square paired with no shocks (CSno-shock) were randomly presented in front of the demonstrator. Results showed that increased concentration of oxygenated hemoglobin in the left prefrontal cortex was observed when participants watched the demonstrator seeing the CSshock compared with the CSno-shock. In addition, enhanced skin conductance responses reflecting the demonstrator's aversive experience during learning of the object-fear association were observed. The present study suggests that the left prefrontal cortex, which may reflect speculation about others' mental states, is associated with social fear transmission.

  11. Left prefrontal activity reflects the ability of vicarious fear learning: a functional near-infrared spectroscopy study.

    Science.gov (United States)

    Ma, Qingguo; Huang, Yujing; Wang, Lei

    2013-01-01

    Fear can be acquired indirectly via social observation. However, it remains unclear which cortical substrate activities are involved in vicarious fear transmission. The present study aimed to examine empathy-related processes during fear learning by proxy and the activation of the prefrontal cortex, using functional near-infrared spectroscopy. We simultaneously measured participants' hemodynamic responses and skin conductance responses while they were exposed to a movie. In this movie, a demonstrator (i.e., another human being) was receiving classical fear conditioning. A neutral colored square paired with shocks (CS(shock)) and another colored square paired with no shocks (CS(no-shock)) were randomly presented in front of the demonstrator. Results showed that increased concentration of oxygenated hemoglobin in the left prefrontal cortex was observed when participants watched the demonstrator seeing the CS(shock) compared with the CS(no-shock). In addition, enhanced skin conductance responses reflecting the demonstrator's aversive experience during learning of the object-fear association were observed. The present study suggests that the left prefrontal cortex, which may reflect speculation about others' mental states, is associated with social fear transmission.

  12. Concurrent Unimodal Learning Enhances Multisensory Responses of Bi-Directional Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied as a non-holonomic robotic agent that must orient itself towards a moving audio-visual target. The circuit continuously learns the best...
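
    A toy version of the moment-by-moment, modality-specific weight update described above might look like the following (a simplified error-driven rule under assumed noise levels; the published model uses a learning neural circuit rather than this explicit update):

        import random

        def fuse_and_update(audio_bearing, visual_bearing, w_audio, w_visual,
                            true_bearing, lr=0.05):
            """One step of a toy crossmodal integrator: orient toward the weighted
            average bearing, then nudge each modality-specific weight according to
            how well that modality alone predicted the target."""
            fused = (w_audio * audio_bearing + w_visual * visual_bearing) / (w_audio + w_visual)
            w_audio = max(w_audio + lr * (1.0 - abs(audio_bearing - true_bearing)), 1e-3)
            w_visual = max(w_visual + lr * (1.0 - abs(visual_bearing - true_bearing)), 1e-3)
            return fused, w_audio, w_visual

        w_a, w_v = 1.0, 1.0
        for _ in range(100):
            target = random.uniform(-1.0, 1.0)
            audio = target + random.gauss(0, 0.3)    # noisier auditory bearing estimate
            visual = target + random.gauss(0, 0.1)   # more reliable visual bearing estimate
            _, w_a, w_v = fuse_and_update(audio, visual, w_a, w_v, target)

        print(f"learned weights: audio = {w_a:.2f}, visual = {w_v:.2f}")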

  13. Concern for Others Leads to Vicarious Optimism.

    Science.gov (United States)

    Kappes, Andreas; Faber, Nadira S; Kahane, Guy; Savulescu, Julian; Crockett, Molly J

    2018-03-01

    An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people's futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.

  14. Learning cardiopulmonary resuscitation theory with face-to-face versus audiovisual instruction for secondary school students: a randomized controlled trial.

    Science.gov (United States)

    Cerezo Espinosa, Cristina; Nieto Caballero, Sergio; Juguera Rodríguez, Laura; Castejón-Mochón, José Francisco; Segura Melgarejo, Francisca; Sánchez Martínez, Carmen María; López López, Carmen Amalia; Pardo Ríos, Manuel

    2018-02-01

    To compare secondary students' learning of basic life support (BLS) theory and the use of an automatic external defibrillator (AED) through face-to-face classroom instruction versus educational video instruction. A total of 2225 secondary students from 15 schools were randomly assigned to one of the following 5 instructional groups: 1) face-to-face instruction with no audiovisual support, 2) face-to-face instruction with audiovisual support, 3) audiovisual instruction without face-to-face instruction, 4) audiovisual instruction with face-to-face instruction, and 5) a control group that received no instruction. The students took a test of BLS and AED theory before instruction, immediately after instruction, and 2 months later. The median (interquartile range) scores overall were 2.33 (2.17) at baseline and 5.33 (4.66) immediately after instruction. No differences between face-to-face and audiovisual instruction for learning BLS and AED theory were found in secondary school students either immediately after instruction or 2 months later.

  15. Sex differences in vicarious trial-and-error behavior during radial arm maze learning.

    Science.gov (United States)

    Bimonte, H A; Denenberg, V H

    2000-02-01

    We investigated sex differences in VTE behavior in rats during radial arm maze learning. Females made more VTEs than males, although there were no sex differences in learning. Further, VTEs and errors were positively correlated during the latter testing sessions in females, but not in males. This sex difference may be a reflection of differences between the sexes in conflict behavior or cognitive strategy while solving the maze.
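
    The positive correlation between VTEs and errors reported above is a simple bivariate correlation over per-session counts; the study does not specify the exact statistic, so the sketch below uses a Pearson correlation with invented numbers purely for illustration.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical per-session counts for one group of rats: vicarious
        # trial-and-error (VTE) head movements and arm-entry errors.
        vte_counts = np.array([12, 9, 15, 7, 11, 14, 6, 10])
        errors = np.array([5, 4, 7, 2, 5, 6, 2, 4])

        r, p = pearsonr(vte_counts, errors)
        print(f"r = {r:.2f}, p = {p:.3f}")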

  16. Observing tutorial dialogues collaboratively: insights about human tutoring effectiveness from vicarious learning.

    Science.gov (United States)

    Chi, Michelene T H; Roy, Marguerite; Hausmann, Robert G M

    2008-03-01

    The goals of this study are to evaluate a relatively novel learning environment, as well as to seek greater understanding of why human tutoring is so effective. This alternative learning environment consists of pairs of students collaboratively observing a videotape of another student being tutored. Comparing this collaborative observing environment to four other instructional methods (one-on-one human tutoring, observing tutoring individually, collaborating without observing, and studying alone), the results showed that students learned to solve physics problems just as effectively from observing tutoring collaboratively as the tutees who were being tutored individually. We explain the effectiveness of this learning environment by postulating that such a situation encourages learners to become active and constructive observers through interactions with a peer. In essence, collaboratively observing combines the benefit of tutoring with the benefit of collaborating. The learning outcomes of the tutees and the collaborative observers, along with the tutoring dialogues, were used to further evaluate three hypotheses explaining why human tutoring is an effective learning method. Detailed analyses of the protocols at several grain sizes suggest that tutoring is effective when tutees are independently or jointly constructing knowledge with the tutor, but not when the tutor independently conveys knowledge. 2008 Cognitive Science Society, Inc.

  17. Learning Vicariously: Students' Reflections of the Leadership Lessons Portrayed in "The Office"

    Science.gov (United States)

    Wimmer, Gaea; Meyers, Courtney; Porter, Haley; Shaw, Martin

    2012-01-01

    Leadership educators are encouraged to identify and apply new ways to teach leadership. This paper provides the qualitative results of post-secondary students' reflections of learning leadership concepts after watching several episodes of the television show, "The Office." Students used reflective journaling to record their reactions and…

  18. Using Audiovisual TV Interviews to Create Visible Authors that Reduce the Learning Gap between Native and Non-Native Language Speakers

    Science.gov (United States)

    Inglese, Terry; Mayer, Richard E.; Rigotti, Francesca

    2007-01-01

    Can archives of audiovisual TV interviews be used to make authors more visible to students, and thereby reduce the learning gap between native and non-native language speakers in college classes? We examined students in a college course who learned about one scholar's ideas through watching an audiovisual TV interview (i.e., visible author format)…

  19. Vicarious birth experiences and childbirth fear: does it matter how young Canadian women learn about birth?

    Science.gov (United States)

    Stoll, Kathrin; Hall, Wendy

    2013-01-01

    In our secondary analysis of a cross-sectional survey, we explored predictors of childbirth fear for young women (n = 2,676). Young women whose attitudes toward pregnancy and birth were shaped by the media were 1.5 times more likely to report childbirth fear. Three factors that were associated with reduced fear of birth were women's confidence in reproductive knowledge, witnessing a birth, and learning about pregnancy and birth through friends. Offering age-appropriate birth education during primary and secondary education, as an alternative to mass-mediated information about birth, can be evaluated as an approach to reduce young women's childbirth fear.

  20. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    Science.gov (United States)

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  1. Learning Vicariously: Tourism, Orientalism and the Making of an Architectural Photography Collection of Egypt

    Directory of Open Access Journals (Sweden)

    Elvan Cobb

    2017-01-01

    Full Text Available Andrew Dickson White, the first president of Cornell University in the United States, referred to architecture as his 'pet extravagance.' Leveraging his influential position as president, White was instrumental in the establishment of the architecture department in 1871. One of his noteworthy contributions to this newly founded department was the initiation of an architectural photography collection that was a direct result of his travels around the world as a diplomat, a scholar and, eventually, as a tourist. This architectural photography collection formed the core of the architectural history education at the school well into the 20th century. At that time, photographs provided one of the only ways for students to learn about the architecture of distant places. White’s selection of architectural subjects, however, was shaped not through deep scholarly inquiry, but rather by the nascent tourist industry. This paper examines White's Egyptian collection, acquired during his voyage to Egypt in 1889. His trip to Egypt, in his own words “marked a new epoch in [his] thinking.” Encountering the 'east' for the first time, White's photography collection both bolstered and challenged the prescribed ways of viewing Egypt and Egyptian architecture, thus having a direct influence on how Cornell students perceived the historic built environment of the ‘east’.

  2. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. © 2013 American Association of Anatomists.

  3. Online dissection audio-visual resources for human anatomy: Undergraduate medical students' usage and learning outcomes.

    Science.gov (United States)

    Choi-Lundberg, Derek L; Cuellar, William A; Williams, Anne-Marie M

    2016-11-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection sessions, representing at most 58% ± 20 of assigned dissectors. Approximately 50% of students accessed all available DAVR by the end of the semester, while 10% accessed none. Ninety percent of survey respondents (response rate 58%) generally agreed that DAVR improved their preparation for and learning from dissection when used. Of several learning resources, only DAVR usage had a significant positive correlation (P = 0.002) with feeling prepared for dissection. Results on cadaveric anatomy practical examination questions in year 2 (Y2) and year 3 (Y3) cohorts were 3.9% (P …) … learning outcomes of more students. Anat Sci Educ 9: 545-554. © 2016 American Association of Anatomists.

  4. Vicarious Reinforcement in Rhesus Macaques (Macaca mulatta)

    Directory of Open Access Journals (Sweden)

    Steve W. C. Chang

    2011-03-01

    Full Text Available What happens to others profoundly influences our own behavior. Such other-regarding outcomes can drive observational learning, as well as motivate cooperation, charity, empathy, and even spite. Vicarious reinforcement may serve as one of the critical mechanisms mediating the influence of other-regarding outcomes on behavior and decision-making in groups. Here we show that rhesus macaques spontaneously derive vicarious reinforcement from observing rewards given to another monkey, and that this reinforcement can motivate them to subsequently deliver or withhold rewards from the other animal. We exploited Pavlovian and instrumental conditioning to associate rewards to self (M1) and/or rewards to another monkey (M2) with visual cues. M1s made more errors in the instrumental trials when cues predicted reward to M2 compared to when cues predicted reward to M1, but made even more errors when cues predicted reward to no one. In subsequent preference tests between pairs of conditioned cues, M1s preferred cues paired with reward to M2 over cues paired with reward to no one. By contrast, M1s preferred cues paired with reward to self over cues paired with reward to both monkeys simultaneously. Rates of attention to M2 strongly predicted the strength and valence of vicarious reinforcement. These patterns of behavior, which were absent in nonsocial control trials, are consistent with vicarious reinforcement based upon sensitivity to observed, or counterfactual, outcomes with respect to another individual. Vicarious reward may play a critical role in shaping cooperation and competition, as well as motivating observational learning and group coordination in rhesus macaques, much as it does in humans. We propose that vicarious reinforcement signals mediate these behaviors via homologous neural circuits involved in reinforcement learning and decision-making.

  5. Vicarious reinforcement in rhesus macaques (macaca mulatta).

    Science.gov (United States)

    Chang, Steve W C; Winecoff, Amy A; Platt, Michael L

    2011-01-01

    What happens to others profoundly influences our own behavior. Such other-regarding outcomes can drive observational learning, as well as motivate cooperation, charity, empathy, and even spite. Vicarious reinforcement may serve as one of the critical mechanisms mediating the influence of other-regarding outcomes on behavior and decision-making in groups. Here we show that rhesus macaques spontaneously derive vicarious reinforcement from observing rewards given to another monkey, and that this reinforcement can motivate them to subsequently deliver or withhold rewards from the other animal. We exploited Pavlovian and instrumental conditioning to associate rewards to self (M1) and/or rewards to another monkey (M2) with visual cues. M1s made more errors in the instrumental trials when cues predicted reward to M2 compared to when cues predicted reward to M1, but made even more errors when cues predicted reward to no one. In subsequent preference tests between pairs of conditioned cues, M1s preferred cues paired with reward to M2 over cues paired with reward to no one. By contrast, M1s preferred cues paired with reward to self over cues paired with reward to both monkeys simultaneously. Rates of attention to M2 strongly predicted the strength and valence of vicarious reinforcement. These patterns of behavior, which were absent in non-social control trials, are consistent with vicarious reinforcement based upon sensitivity to observed, or counterfactual, outcomes with respect to another individual. Vicarious reward may play a critical role in shaping cooperation and competition, as well as motivating observational learning and group coordination in rhesus macaques, much as it does in humans. We propose that vicarious reinforcement signals mediate these behaviors via homologous neural circuits involved in reinforcement learning and decision-making.

  6. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  7. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    Science.gov (United States)

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience.
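
    When only summary statistics are reported, as with the semester means above, a two-sample comparison can still be computed from the means, dispersions, and group sizes. The sketch below assumes the reported ±0.56 and ±0.68 are standard errors and that roughly 140 students contributed to each semester's exam; both assumptions are illustrative and may not match the study's actual analysis.

```python
# Sketch: two-sample t-test from summary statistics only.
# Assumes "mean ± value" means mean ± SEM and n = 140 per group (illustrative).
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n1 = n2 = 140
mean_winter, sem_winter = 90.17, 0.56
mean_fall, sem_fall = 82.10, 0.68

# Convert SEM back to SD: SD = SEM * sqrt(n).
sd_winter = sem_winter * sqrt(n1)
sd_fall = sem_fall * sqrt(n2)

t, p = ttest_ind_from_stats(mean_winter, sd_winter, n1,
                            mean_fall, sd_fall, n2,
                            equal_var=False)
print(f"winter vs fall laboratory exams: t = {t:.2f}, p = {p:.3g}")
```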

  8. [Learning to use semiautomatic external defibrillators through audiovisual materials for schoolchildren].

    Science.gov (United States)

    Jorge-Soto, Cristina; Abelairas-Gómez, Cristian; Barcala-Furelos, Roberto; Gregorio-García, Carolina; Prieto-Saborit, José Antonio; Rodríguez-Núñez, Antonio

    2016-01-01

    To assess the ability of schoolchildren to use a semiautomatic external defibrillator (SAED) to provide an effective shock, and their retention of the skill 1 month after a training exercise supported by audiovisual materials. Quasi-experimental controlled study in 205 initially untrained schoolchildren aged 6 to 16 years. SAEDs were used to apply shocks to manikins. The students took a baseline skill test (T0) and were then randomized to an experimental or control group in the first phase (T1). The experimental group watched a training video, and both groups were then retested. The children were tested in simulations again 1 month later (T2). A total of 196 students completed all 3 phases. Ninety-six (95.0%) of the secondary school students and 54 (56.8%) of the primary schoolchildren were able to explain what a SAED is. Twenty of the secondary school students (19.8%) and 8 of the primary schoolchildren (8.4%) said they knew how to use one. At T0, 78 participants (39.8%) were able to simulate an effective shock. At T1, 36 controls (34.9%) and 56 experimental-group children (60.2%) achieved an effective shock (P …). Audiovisual instruction improves students' skill in managing a SAED and helps them retain what they learned for later use.

  9. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    Science.gov (United States)

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices, in LLI children after training.
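
    The P2 finding above rests on a standard ERP contrast: epochs for standard and deviant tone pairs are averaged separately, the averages are subtracted, and amplitude is quantified in a late time window (here, after 250 ms following the second stimulus). The sketch below runs that pipeline on simulated single-channel epochs; the sampling rate, window limits, and effect size are assumptions, not the study's parameters.

```python
# Sketch: deviant-minus-standard ERP difference and its mean amplitude in a
# late window. Single simulated channel; all parameters are illustrative.
import numpy as np

fs = 250                                   # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.6, 1 / fs)       # seconds relative to the second tone's onset
n_trials = 200

rng = np.random.default_rng(1)
standard = rng.normal(0.0, 5.0, size=(n_trials, times.size))  # microvolts
deviant = rng.normal(0.0, 5.0, size=(n_trials, times.size))

# Inject a toy positive deflection around 300 ms on deviant trials ("P2-like" effect).
deviant += 2.0 * np.exp(-((times - 0.30) ** 2) / (2 * 0.03 ** 2))

erp_standard = standard.mean(axis=0)       # trial-averaged ERPs
erp_deviant = deviant.mean(axis=0)
difference = erp_deviant - erp_standard

window = (times >= 0.25) & (times <= 0.35) # 250-350 ms after the second stimulus
print(f"mean difference amplitude, 250-350 ms: {difference[window].mean():.2f} uV")
```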

  10. Concern for others leads to vicarious optimism

    OpenAIRE

    Kappes, A.; Faber, N. S.; Kahane, G.; Savulescu, J.; Crockett, M. J.

    2018-01-01

    An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood...

  11. Virtual and Real Classroom in Learning Audiovisual Communication and Education

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2010-10-01

    Full Text Available The mixed teaching-learning model seeks to use information and communication technologies (ICT) to guarantee training better aligned with the European Higher Education Area (EHEA). The following research objectives were formulated: 1) to find out how teacher-training students value the WebCT virtual classroom as a support for face-to-face teaching, and 2) to identify the advantages of students' use of WebCT and ICT in the case study "Values and counter-values transmitted by television series watched by children and adolescents". The research was carried out with a sample of 205 students of the University of La Rioja enrolled in the course "Technologies Applied to Education". Qualitative and quantitative content analysis was used for the objective, systematic and quantitative description of the manifest content of the documents. The results show that the communication, content and assessment tools are valued favorably by the students. It is concluded that WebCT and ICT support the methodological innovation of the EHEA, which is based on student-centered learning. The students demonstrate their audiovisual competence in the areas of value analysis and expression through audiovisual documents in multimedia formats, and they bring a new, innovative and creative sense to the educational use of television series.

  12. The Impact of Audiovisual Feedback on the Learning Outcomes of a Remote and Virtual Laboratory Class

    Science.gov (United States)

    Lindsay, E.; Good, M.

    2009-01-01

    Remote and virtual laboratory classes are an increasingly prevalent alternative to traditional hands-on laboratory experiences. One of the key issues with these modes of access is the provision of adequate audiovisual (AV) feedback to the user, which can be a complicated and resource-intensive challenge. This paper reports on a comparison of two…

  13. Audiovisual Script Writing.

    Science.gov (United States)

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  14. The Role of Addressee Backchannels and Conversational Grounding in Vicarious Word Learning in Four-Year-Olds

    Science.gov (United States)

    Tolins, Jackson; Namiranian, Neda; Akhtar, Nameera; Fox Tree, Jean E.

    2017-01-01

    Children successfully learn words through overhearing others engaged in verbal interactions. The current studies investigated the degree to which four-year-old overhearers are influenced by the response behaviors of addressees and by the interactional pattern of the speakers and addressees. It was found that while addressee responses on their own…

  15. Improving the Quality of Science Learning through the Problem-Based Learning (PBL) Model Using Audiovisual Media

    Directory of Open Access Journals (Sweden)

    Endang Eka Wulandari, Sri Hartati

    2016-11-01

    Full Text Available The aim of this study was to improve the quality of science learning for fourth-grade students through the PBL model supported by audiovisual media. The study used a classroom action research design carried out over three cycles. Data were analyzed using quantitative and qualitative descriptive techniques. The results showed that (1) teacher skills scored 18 in cycle I and 22 in cycle II, increasing to 25 in cycle III; (2) student activity scored 16.8 in cycle I and 22 in cycle II, increasing to 24.4 in cycle III; (3) student response was 71% in cycle I and 78% in cycle II, increasing to 92% in cycle III; and (4) classical mastery of learning outcomes was 60% in cycle I and 73% in cycle II, increasing to 94% in cycle III. The study concludes that the PBL model supported by audiovisual media can improve the quality of science learning, as indicated by improvements in teacher skills, student activity, student responses and student learning outcomes.

  16. Burnout, vicarious traumatization and its prevention.

    Science.gov (United States)

    Pross, Christian

    2006-01-01

    Previous studies on burnout and vicarious traumatization are reviewed and summarized with a list of signs and symptoms. From the author's own observations, two histories of caregivers working with torture survivors are described which exemplify the risk, implications and consequences of secondary trauma. Contributing factors in the social and political framework in which caregivers operate are analyzed and possible means of prevention suggested, particularly focussing on the conflict of roles when providing evaluations on trauma victims for health and immigration authorities. Caregivers working with victims of violence carry a high risk of suffering from burnout and vicarious traumatization unless preventive factors are considered, such as: self-care, solid professional training in psychotherapy, therapeutic self-awareness, regular self-examination by collegial and external supervision, limiting caseload, continuing professional education and learning about new concepts in trauma, occasional research sabbaticals, keeping a balance between empathy and a proper professional distance to clients, and protecting oneself against being misled by clients with fictitious PTSD. An institutional setting should be provided in which the roles of therapists and evaluators are separated. Important factors for burnout and vicarious traumatization are the lack of social recognition for caregivers and the financial and legal outsider status of many centers. Therefore, politicians and social insurance carriers should be urged to integrate facilities for traumatized refugees into the general health care system, and centers should work on more alliances with the medical mainstream and academic medicine.

  17. Lecture Hall and Learning Design: A Survey of Variables, Parameters, Criteria and Interrelationships for Audio-Visual Presentation Systems and Audience Reception.

    Science.gov (United States)

    Justin, J. Karl

    Variables and parameters affecting architectural planning and audiovisual systems selection for lecture halls and other learning spaces are surveyed. Interrelationships of factors are discussed, including--(1) design requirements for modern educational techniques as differentiated from cinema, theater or auditorium design, (2) general hall…

  18. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    … in a manner that allowed the subjective audiovisual evaluation of loudspeakers under controlled conditions. Additionally, unimodal audio and visual evaluations were used as a baseline for comparison. The same procedure was applied in the investigation of the validity of less than optimal stimuli presentations...

  19. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  20. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    Science.gov (United States)

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  1. Information about the model's unconditioned stimulus and response in vicarious classical conditioning.

    Science.gov (United States)

    Hygge, S

    1976-06-01

    Four groups with 16 observers each participated in a differential, vicarious conditioning experiment with skin conductance responses as the dependent variable. The information available to the observer about the model's unconditioned stimulus and response was varied in a 2 X 2 factorial design. Results clearly showed that information about the model's unconditioned stimulus (a high or low dB level) was not necessary for vicarious instigation, but that information about the unconditioned response (a high or low emotional aversiveness) was necessary. Data for conditioning of responses showed almost identical patterns to those for vicarious instigation. To explain the results, a distinction between factors necessary for the development and elicitation of vicariously instigated responses was introduced, and the effectiveness of information about the model's response on the elicitation of vicariously instigated responses was considered in terms of an expansion of Bandura's social learning theory.

  2. Thomas Vicary, barber-surgeon.

    Science.gov (United States)

    Thomas, Duncan P

    2006-05-01

    An Act of Parliament in 1540 uniting the barbers and surgeons to form the Barber-Surgeons' Company represented an important foundation stone towards better surgery in England. Thomas Vicary, who played a pivotal role in promoting this union, was a leading surgeon in London in the middle of the 16th century. While Vicary made no direct contribution to surgical knowledge, he should be remembered primarily as one who contributed much towards the early organization and teaching of surgery and to the consequent benefits that flowed from this improvement.

  3. Active, Passive, and Vicarious Desensitization

    Science.gov (United States)

    Denney, Douglas R.

    1974-01-01

    Two variations of desensitization therapy for reducing test anxiety were studied, active desensitization in which the client describes his visualizations of the scenes and vicarious desensitization in which the client merely observes the desensitization treatment of another test anxious client. The relaxation treatment which emphasized application…

  4. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds.

    Science.gov (United States)

    Jesse, Alexandra; Johnson, Elizabeth K

    2016-05-01

    Analyses of caregiver-child communication suggest that an adult tends to highlight objects in a child's visual scene by moving them in a manner that is temporally aligned with the adult's speech productions. Here, we used the looking-while-listening paradigm to examine whether 25-month-olds use audiovisual temporal alignment to disambiguate and learn novel word-referent mappings in a difficult word-learning task. Videos of two equally interesting and animated novel objects were simultaneously presented to children, but the movement of only one of the objects was aligned with an accompanying object-labeling audio track. No social cues (e.g., pointing, eye gaze, touch) were available to the children because the speaker was edited out of the videos. Immediately afterward, toddlers were presented with still images of the two objects and asked to look at one or the other. Toddlers looked reliably longer to the labeled object, demonstrating their acquisition of the novel word-referent mapping. A control condition showed that children's performance was not solely due to the single unambiguous labeling that had occurred at experiment onset. We conclude that the temporal link between a speaker's utterances and the motion they imposed on the referent object helps toddlers to deduce a speaker's intended reference in a difficult word-learning scenario. In combination with our previous work, these findings suggest that intersensory redundancy is a source of information used by language users of all ages. That is, intersensory redundancy is not just a word-learning tool used by young infants. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Reductions in Children's Vicariously Learnt Avoidance and Heart Rate Responses Using Positive Modeling.

    Science.gov (United States)

    Reynolds, Gemma; Field, Andy P; Askew, Chris

    2016-03-23

    Recent research has indicated that vicarious learning can lead to increases in children's fear beliefs and avoidance preferences for stimuli and that these fear responses can subsequently be reversed using positive modeling (counterconditioning). The current study investigated children's vicariously acquired avoidance behavior, physiological responses (heart rate), and attentional bias for stimuli and whether these could also be reduced via counterconditioning. Ninety-six (49 boys, 47 girls) 7- to 11-year-olds received vicarious fear learning for novel stimuli and were then randomly assigned to a counterconditioning, extinction, or control group. Fear beliefs and avoidance preferences were measured pre- and post-learning, whereas avoidance behavior, heart rate, and attentional bias were all measured post-learning. Control group children showed increases in fear beliefs and avoidance preferences for animals seen in vicarious fear learning trials. In addition, significantly greater avoidance behavior, heart rate responding, and attentional bias were observed for these animals compared to a control animal. In contrast, vicariously acquired avoidance preferences of children in the counterconditioning group were significantly reduced post-positive modeling, and these children also did not show the heightened heart rate responding to fear-paired animals. Children in the extinction group demonstrated comparable responses to the control group; thus the extinction procedure showed no effect on any fear measures. The findings suggest that counterconditioning with positive modelling can be used as an effective early intervention to reduce the behavioral and physiological effects of vicarious fear learning in childhood.

  6. Tracing Trajectories of Audio-Visual Learning in the Infant Brain

    Science.gov (United States)

    Kersey, Alyssa J.; Emberson, Lauren L.

    2017-01-01

    Although infants begin learning about their environment before they are born, little is known about how the infant brain changes during learning. Here, we take the initial steps in documenting how the neural responses in the brain change as infants learn to associate audio and visual stimuli. Using functional near-infrared spectroscopy (fNIRS) to…

  7. Enhanced Multisensory Integration and Motor Reactivation after Active Motor Learning of Audiovisual Associations

    Science.gov (United States)

    Butler, Andrew J.; James, Thomas W.; James, Karin Harman

    2011-01-01

    Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent…

  8. Virtual Attendance: Analysis of an Audiovisual over IP System for Distance Learning in the Spanish Open University (UNED)

    Directory of Open Access Journals (Sweden)

    Esteban Vázquez-Cano

    2013-07-01

    Full Text Available This article analyzes a system of virtual attendance, called “AVIP” (AudioVisual over Internet Protocol), at the Spanish Open University (UNED). UNED, the largest open university in Europe, is the pioneer of distance education in Spain. It currently has more than 300,000 students, 1,300 teachers, and 6,000 tutors in Spain and around the world. Like other universities, UNED is redefining many of its academic processes to meet the new requirements of the European Higher Education Area (EHEA). Since its inception more than 30 years ago, the methodology chosen by UNED has been blended learning. Today, the university combines face-to-face tutorial sessions with new methodological proposals mediated by ICT. Through a quantitative methodology, the perceptions of students and tutors of the new model of virtual tutoring, called AVIP Classrooms, were analyzed. The results show that the new model greatly improves tutors' orientation and teaching methodology. However, it requires training and new approaches to provide a more collaborative and participatory environment for students.

  9. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    Science.gov (United States)

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real-word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from a continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.
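
    The reliability figure quoted above (r = 0.67) is a test-retest correlation of each participant's learning index across the two sessions. A minimal sketch, assuming a per-participant index has already been derived from the reaction time data for each session (the values below are simulated placeholders):

```python
# Sketch: test-retest reliability of a per-participant learning index.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_participants = 42                                           # sample size from the abstract

true_ability = rng.normal(0, 1, n_participants)
session1 = true_ability + rng.normal(0, 0.7, n_participants)  # index from session 1
session2 = true_ability + rng.normal(0, 0.7, n_participants)  # same index, >= 3 days later

r, p = pearsonr(session1, session2)
print(f"test-retest reliability: r = {r:.2f} (p = {p:.3g})")
```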

  10. Researching embodied learning by using videographic participation for data collection and audiovisual narratives for dissemination - illustrated by the encounter between two acrobats

    DEFF Research Database (Denmark)

    Degerbøl, Stine; Svendler Nielsen, Charlotte

    2015-01-01

    … to qualitative research and presents a case from contemporary circus education examining embodied learning, whereas the particular focus in this article is methodology and the development of a dissemination strategy for empirical material generated through videographic participation. Drawing on contributions … concerned with the senses from the field of sport sciences and from the field of visual anthropology and sensory ethnography, the article concludes that using videographic participation and creating audiovisual narratives might be a good option to capture the multisensuous dimensions of a learning situation…

  11. The Picmonic® Learning System: enhancing memory retention of medical sciences, using an audiovisual mnemonic Web-based learning platform

    Directory of Open Access Journals (Sweden)

    Yang A

    2014-05-01

    Full Text Available Adeel Yang,1,* Hersh Goel,1,* Matthew Bryan,2 Ron Robertson,1 Jane Lim,1 Shehran Islam,1 Mark R Speicher2 1College of Medicine, The University of Arizona, Tucson, AZ, USA; 2Arizona College of Osteopathic Medicine, Midwestern University, Glendale, AZ, USA *These authors contributed equally to this work Background: Medical students are required to retain vast amounts of medical knowledge on the path to becoming physicians. To address this challenge, multimedia Web-based learning resources have been developed to supplement traditional text-based materials. The Picmonic® Learning System (PLS; Picmonic, Phoenix, AZ, USA) is a novel multimedia Web-based learning platform that delivers audiovisual mnemonics designed to improve memory retention of medical sciences. Methods: A single-center, randomized, subject-blinded, controlled study was conducted to compare the PLS with traditional text-based material for retention of medical science topics. Subjects were randomly assigned to use two different types of study materials covering several diseases. Subjects randomly assigned to the PLS group were given audiovisual mnemonics along with text-based materials, whereas subjects in the control group were given the same text-based materials with key terms highlighted. The primary endpoints were the differences in performance on immediate, 1 week, and 1 month delayed free-recall and paired-matching tests. The secondary endpoints were the difference in performance on a 1 week delayed multiple-choice test and self-reported satisfaction with the study materials. Differences were calculated using unpaired two-tailed t-tests. Results: PLS group subjects demonstrated improvements of 65%, 161%, and 208% compared with control group subjects on free-recall tests conducted immediately, 1 week, and 1 month after study of materials, respectively. The results of performance on paired-matching tests showed an improvement of up to 331% for PLS group subjects. PLS group subjects also performed 55% greater than control group subjects on a 1 week delayed multiple-choice test requiring higher-order thinking.

  12. The role of empathy in experiencing vicarious anxiety.

    Science.gov (United States)

    Shu, Jocelyn; Hassell, Samuel; Weber, Jochen; Ochsner, Kevin N; Mobbs, Dean

    2017-08-01

    With depictions of others facing threats common in the media, the experience of vicarious anxiety may be prevalent in the general population. However, the phenomenon of vicarious anxiety-the experience of anxiety in response to observing others expressing anxiety-and the interpersonal mechanisms underlying it have not been fully investigated in prior research. In 4 studies, we investigate the role of empathy in experiencing vicarious anxiety, using film clips depicting target victims facing threats. In Studies 1 and 2, trait emotional empathy was associated with greater self-reported anxiety when observing target victims, and with perceiving greater anxiety to be experienced by the targets. Study 3 extended these findings by demonstrating that trait empathic concern-the tendency to feel concern and compassion for others-was associated with experiencing vicarious anxiety, whereas trait personal distress-the tendency to experience distress in stressful situations-was not. Study 4 manipulated state empathy to establish a causal relationship between empathy and experience of vicarious anxiety. Participants who took an empathic perspective when observing target victims, as compared to those who took an objective perspective using reappraisal-based strategies, reported experiencing greater anxiety, risk-aversion, and sleep disruption the following night. These results highlight the impact of one's social environment on experiencing anxiety, particularly for those who are highly empathic. In addition, these findings have implications for extending basic models of anxiety to incorporate interpersonal processes, understanding the role of empathy in social learning, and potential applications for therapeutic contexts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. The Picmonic(®) Learning System: enhancing memory retention of medical sciences, using an audiovisual mnemonic Web-based learning platform.

    Science.gov (United States)

    Yang, Adeel; Goel, Hersh; Bryan, Matthew; Robertson, Ron; Lim, Jane; Islam, Shehran; Speicher, Mark R

    2014-01-01

    Medical students are required to retain vast amounts of medical knowledge on the path to becoming physicians. To address this challenge, multimedia Web-based learning resources have been developed to supplement traditional text-based materials. The Picmonic(®) Learning System (PLS; Picmonic, Phoenix, AZ, USA) is a novel multimedia Web-based learning platform that delivers audiovisual mnemonics designed to improve memory retention of medical sciences. A single-center, randomized, subject-blinded, controlled study was conducted to compare the PLS with traditional text-based material for retention of medical science topics. Subjects were randomly assigned to use two different types of study materials covering several diseases. Subjects randomly assigned to the PLS group were given audiovisual mnemonics along with text-based materials, whereas subjects in the control group were given the same text-based materials with key terms highlighted. The primary endpoints were the differences in performance on immediate, 1 week, and 1 month delayed free-recall and paired-matching tests. The secondary endpoints were the difference in performance on a 1 week delayed multiple-choice test and self-reported satisfaction with the study materials. Differences were calculated using unpaired two-tailed t-tests. PLS group subjects demonstrated improvements of 65%, 161%, and 208% compared with control group subjects on free-recall tests conducted immediately, 1 week, and 1 month after study of materials, respectively. The results of performance on paired-matching tests showed an improvement of up to 331% for PLS group subjects. PLS group subjects also performed 55% greater than control group subjects on a 1 week delayed multiple-choice test requiring higher-order thinking. The differences in test performance between the PLS group subjects and the control group subjects were statistically significant (P<0.001), and the PLS group subjects reported higher overall satisfaction with the study materials.
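
    The percentage improvements reported in both Picmonic records are most naturally read as the PLS group's mean score expressed relative to the control group's mean, with group differences tested by unpaired two-tailed t-tests. The sketch below shows that calculation on made-up scores; the group sizes and score values are illustrative, not the trial's data.

```python
# Sketch: percent improvement of one group's mean over another's, plus an
# unpaired two-tailed t-test. Scores are simulated, not the study's data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
pls = rng.normal(12.0, 3.0, 30)       # free-recall scores, PLS group (illustrative)
control = rng.normal(7.0, 3.0, 30)    # free-recall scores, control group (illustrative)

improvement = (pls.mean() - control.mean()) / control.mean() * 100
t, p = ttest_ind(pls, control)        # unpaired and two-tailed by default

print(f"improvement over control: {improvement:.0f}%")
print(f"t = {t:.2f}, p = {p:.3g}")
```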

  14. Audiovisual History for an Audiovisual Society

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    Full Text Available This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media have progressively gained greater prominence. We analyze specific cases of films and historical documentaries, and we assess the difficulties historians face in understanding the keys of audiovisual language and filmmakers face in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  15. Group Vicarious Desensitization of Test Anxiety.

    Science.gov (United States)

    Altmaier, Elizabeth Mitchell; Woodward, Margaret

    1981-01-01

    Studied test-anxious college students (N=43) who received either vicarious desensitization, study skills training, or both treatments; there was also a no-treatment control condition. Self-report measures indicated that vicarious desensitization resulted in lower test and trait anxiety than study skills training alone or no treatment. (Author)

  16. Vicarious resilience and vicarious traumatisation: Experiences of working with refugees and asylum seekers in South Australia.

    Science.gov (United States)

    Puvimanasinghe, Teresa; Denson, Linley A; Augoustinos, Martha; Somasundaram, Daya

    2015-12-01

    The negative psychological impacts of working with traumatised people are well documented and include vicarious traumatisation (VT): the cumulative effect of identifying with clients' trauma stories that negatively impacts on service providers' memory, emotions, thoughts, and worldviews. More recently, the concept of vicarious resilience (VR) has also been identified: the strength, growth, and empowerment experienced by trauma workers as a consequence of their work. VR includes service providers' awareness and appreciation of their clients' capacity to grow, maintaining hope for change, as well as learning from and reassessing personal problems in the light of clients' stories of perseverance, strength, and growth. This study aimed at exploring the experiences of mental health, physical healthcare, and settlement workers caring for refugees and asylum seekers in South Australia. Using a qualitative method (data-based thematic analysis) to collect and analyse 26 semi-structured face-to-face interviews, we identified four prominent and recurring themes emanating from the data: VT, VR, work satisfaction, and cultural flexibility. These findings, among the first to describe both VT and VR in Australians working with refugee people, have important implications for policy, service quality, service providers' wellbeing, and refugee clients' lives. © The Author(s) 2015.

  17. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Full Text Available Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before bistable decision? We argue that this is not the case, as the interference had slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  18. Behavioural and neurobiological foundations of vicarious processing

    OpenAIRE

    Lockwood, P. L.

    2015-01-01

    Empathy can be broadly defined as the ability to vicariously experience and to understand the affect of other people. This thesis will argue that such a capacity for vicarious processing is fundamental for successful social-cognitive ability and behaviour. To this end, four outstanding research questions regarding the behavioural and neural basis of empathy are addressed: 1) can empathy be dissected into different components and do these components differentially explain individual differences...

  19. Vicarious liability and criminal prosecutions for regulatory offences.

    Science.gov (United States)

    Freckelton, Ian

    2006-08-01

    The parameters of vicarious liability of corporations for the conduct of their employees, especially in the context of provisions that criminalise breaches of regulatory provisions, are complex. The decision of Bell J in ABC Developmental Learning Centres Pty Ltd v Wallace [2006] VSC 171 raises starkly the potential unfairness of an approach which converts criminal liability of corporations too readily into absolute liability, irrespective of the absence of any form of proven culpability. The author queries whether fault should not be brought back in some form to constitute a determinant of criminal liability for corporations.

  20. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of using audiovisual media texts in a range of social sciences and humanities courses in the university curriculum.

  1. Exploring the Deep-Level Reasoning Questions Effect during Vicarious Learning among Eighth to Eleventh Graders in the Domains of Computer Literacy and Newtonian Physics

    Science.gov (United States)

    Gholson, Barry; Witherspoon, Amy; Morgan, Brent; Brittingham, Joshua K.; Coles, Robert; Graesser, Arthur C.; Sullins, Jeremiah; Craig, Scotty D.

    2009-01-01

    This paper tested the deep-level reasoning questions effect in the domains of computer literacy between eighth and tenth graders and Newtonian physics for ninth and eleventh graders. This effect claims that learning is facilitated when the materials are organized around questions that invite deep-reasoning. The literature indicates that vicarious…

  2. Campaign for vicarious calibration of SumbandilaSat in Argentina

    CSIR Research Space (South Africa)

    Vhengani, LM

    2011-07-01

    Full Text Available … assessment, they are also calibrated post-launch. Various post-launch techniques exist, including cross-sensor, solar, lunar and vicarious calibration. Vicarious calibration relies on in-situ measurements of surface reflectance and atmospheric transmittance...
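
    In the reflectance-based flavor of vicarious calibration hinted at above, the in-situ surface reflectance and atmospheric transmittance measurements are used to predict the radiance the sensor should see over the calibration target, and that prediction is compared with the recorded digital numbers to derive a calibration gain. The sketch below is a deliberately simplified version of that idea (Lambertian target, path radiance neglected); every numeric value is a placeholder, not a SumbandilaSat measurement.

```python
# Sketch: simplified reflectance-based vicarious calibration of one band.
# Neglects path radiance and adjacency effects; all inputs are placeholders.
import math

rho = 0.35                     # measured surface reflectance of the target
e_sun = 1858.0                 # exo-atmospheric solar irradiance for the band, W m-2 um-1 (assumed)
theta_s = math.radians(30.0)   # solar zenith angle at overpass
t_down, t_up = 0.85, 0.90      # downward / upward atmospheric transmittance (from in-situ data)
d_au = 1.0                     # Earth-Sun distance in astronomical units

# Predicted top-of-atmosphere radiance over the target (Lambertian approximation).
l_toa = rho * e_sun * math.cos(theta_s) * t_down * t_up / (math.pi * d_au ** 2)

dn = 612.0                     # mean digital number recorded over the target (placeholder)
gain = l_toa / dn              # radiometric calibration coefficient (radiance per DN)

print(f"predicted TOA radiance: {l_toa:.1f} W m-2 sr-1 um-1")
print(f"calibration gain: {gain:.4f} W m-2 sr-1 um-1 per DN")
```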

  3. Music evokes vicarious emotions in listeners.

    Science.gov (United States)

    Kawakami, Ai; Furukawa, Kiyoshi; Okanoya, Kazuo

    2014-01-01

    Why do we listen to sad music? We seek to answer this question using a psychological approach. It is possible to distinguish perceived emotions from those that are experienced. Therefore, we hypothesized that, although sad music is perceived as sad, listeners actually feel (experience) pleasant emotions concurrent with sadness. This hypothesis was supported, which led us to question whether sadness in the context of art is truly an unpleasant emotion. While experiencing sadness may be unpleasant, it may also be somewhat pleasant when experienced in the context of art, for example, when listening to sad music. We consider musically evoked emotion vicarious, as we are not threatened when we experience it, in the way that we can be during the course of experiencing emotion in daily life. When we listen to sad music, we experience vicarious sadness. In this review, we propose two sides to sadness by suggesting vicarious emotion.

  4. Late Cretaceous vicariance in Gondwanan amphibians.

    Directory of Open Access Journals (Sweden)

    Ines Van Bocxlaer

    Full Text Available Overseas dispersals are often invoked when Southern Hemisphere terrestrial and freshwater organism phylogenies do not fit the sequence or timing of Gondwana fragmentation. We used dispersal-vicariance analyses and molecular timetrees to show that two species-rich frog groups, Microhylidae and Natatanura, display congruent patterns of spatial and temporal diversification among Gondwanan plates in the Late Cretaceous, long after the presumed major tectonic break-up events. Because amphibians are notoriously salt-intolerant, these analogies are best explained by simultaneous vicariance, rather than by oceanic dispersal. Hence our results imply Late Cretaceous connections between most adjacent Gondwanan landmasses, an essential concept for biogeographic and palaeomap reconstructions.

  5. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available, everywhere and at any time, through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how it can be used in specific (social, pedagogical, etc.) contexts and what its potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical perspective.

  6. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task.

  7. Web-based audiovisual phonetic table program application as e-learning of pronunciation practice in undergraduate degree program

    Directory of Open Access Journals (Sweden)

    Retnomurti Ayu Bandu

    2018-01-01

    Full Text Available Verbal skills such as English pronunciation require effective e-learning support, because when pronunciation practice is delivered without any learning media, inaccuracies in pronunciation, spelling and repetition commonly occur in the spoken language. This study therefore aims to develop e-learning for the Pronunciation Practice class at Indraprasta PGRI University. The research follows a Research and Development approach: it analyses needs, develops a syllabus and teaching materials, creates and develops the e-learning, and tries out and revises the media. The classroom module is accordingly developed into a versatile web-based module in the form of a Phonetic Table Program. The program was trialled in pronunciation practice classes to identify details that the researchers might otherwise not detect. The use of technology thus becomes a necessity in helping students achieve the learning objectives: the communication process in learning attracts more student interest and makes the English sound system easier to understand, since the program is equipped with practice buttons presented by non-native speakers. Non-native speakers were chosen on the grounds that they adapt quickly and can help other students who are less fluent in English.

  8. Vicarious traumatization and coping in medical students: a pilot study.

    Science.gov (United States)

    Al-Mateen, Cheryl S; Linker, Julie A; Damle, Neha; Hupe, Jessica; Helfer, Tamara; Jessick, Veronica

    2015-02-01

    This study explored the impact of traumatic experiences on medical students during their clerkships. Medical students completed an anonymous online survey inquiring about traumatic experiences on required clerkships during their third year of medical school, including any symptoms they may have experienced as well as coping strategies they may have used. Twenty-six percent of students reported experiencing vicarious traumatization (VT) during their third year of medical school. The experience of VT in medical students is relevant to medical educators, given that the resulting symptoms may impact student performance and learning as well as ongoing well-being. Fifty percent of the students who experienced VT in this study did so on the psychiatry clerkship. It is important for psychiatrists to recognize that this is a potential risk for students in order to increase the likelihood that appropriate supports are provided.

  9. Vicarious resilience in sexual assault and domestic violence advocates.

    Science.gov (United States)

    Frey, Lisa L; Beesley, Denise; Abbott, Deah; Kendrick, Elizabeth

    2017-01-01

    There is little research related to sexual assault and domestic violence advocates' experiences, with the bulk of the literature focused on stressors and systemic barriers that negatively impact efforts to assist survivors. However, advocates participating in these studies have also emphasized the positive impact they experience consequent to their work. This study explores the positive impact. Vicarious resilience, personal trauma experiences, peer relational quality, and perceived organizational support in advocates (n = 222) are examined. Also, overlap among the conceptual components of vicarious resilience is explored. The first set of multiple regressions showed that personal trauma experiences and peer relational health predicted compassion satisfaction and vicarious posttraumatic growth, with organizational support predicting only compassion satisfaction. The second set of multiple regressions showed that (a) there was significant shared variance between vicarious posttraumatic growth and compassion satisfaction; (b) after accounting for vicarious posttraumatic growth, organizational support accounted for significant variance in compassion satisfaction; and (c) after accounting for compassion satisfaction, peer relational health accounted for significant variance in vicarious posttraumatic growth. Results suggest that it may be more meaningful to conceptualize advocates' personal growth related to their work through the lens of a multidimensional construct such as vicarious resilience. Organizational strategies promoting vicarious resilience (e.g., shared organizational power, training components) are offered, and the value to trauma-informed care of fostering advocates' vicarious resilience is discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols. Speech and facial expression are two of them, and both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction, so 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected owing to the synchronization when speech and video are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend of emotion recognition in the future.
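    As a rough illustration of the dimension-reduction step described above, the sketch below selects reduced speech and facial feature sets before fusion and classification. It uses a mutual-information filter from scikit-learn as a stand-in for the paper's rough set-based reduction; all data are placeholders, and only the per-modality feature counts are taken from the abstract.

```python
# Hypothetical sketch: selecting reduced speech and facial feature sets before
# audiovisual emotion classification. The system described above uses rough
# set-based reduction; a mutual-information filter stands in for it here.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200
speech = rng.normal(size=(n_samples, 37))    # 37 candidate speech features (placeholder data)
facial = rng.normal(size=(n_samples, 33))    # 33 candidate facial features (placeholder data)
labels = rng.integers(0, 6, size=n_samples)  # e.g., six emotion classes

# Reduce each modality separately (13 speech + 10 facial, as in the abstract),
# then concatenate the selected features for audiovisual classification.
speech_sel = SelectKBest(mutual_info_classif, k=13).fit(speech, labels)
facial_sel = SelectKBest(mutual_info_classif, k=10).fit(facial, labels)
fused = np.hstack([speech_sel.transform(speech), facial_sel.transform(facial)])

clf = SVC(kernel="rbf").fit(fused, labels)
print("training accuracy (toy data):", round(clf.score(fused, labels), 3))
```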

  11. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  12. Audio-visual synchronization in reading while listening to texts: Effects on visual behavior and verbal learning

    OpenAIRE

    Gerbier , Emilie; Bailly , Gérard; Bosse , Marie-Line

    2018-01-01

    International audience; Reading while listening to texts (RWL) is a promising way to improve the learning benefits provided by a reading experience. In an exploratory study, we investigated the effect of synchronizing the highlighting of words (visual) with their auditory (speech) counterpart during a RWL task. Forty French children from 3rd to 5th grade read short stories in their native language while hearing the story spoken by a narrator. In the non-synchronized (S-) condition the text wa...

  13. Researching Embodied Learning by Using Videographic Participation for Data Collection and Audiovisual Narratives for Dissemination--Illustrated by the Encounter between Two Acrobats

    Science.gov (United States)

    Degerbøl, Stine; Nielsen, Charlotte Svendler

    2015-01-01

    The article concerns doing ethnography in education and it reflects upon using "videographic participation" for data collection and the concept of "audiovisual narratives" for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a…

  14. Just watching the game ain't enough: striatal fMRI reward responses to successes and failures in a video game during active and vicarious playing.

    Science.gov (United States)

    Kätsyri, Jari; Hari, Riitta; Ravaja, Niklas; Nummenmaa, Lauri

    2013-01-01

    Although the multimodal stimulation provided by modern audiovisual video games is pleasing by itself, the rewarding nature of video game playing depends critically also on the players' active engagement in the gameplay. The extent to which active engagement influences dopaminergic brain reward circuit responses remains unsettled. Here we show that striatal reward circuit responses elicited by successes (wins) and failures (losses) in a video game are stronger during active than vicarious gameplay. Eleven healthy males both played a competitive first-person tank shooter game (active playing) and watched a pre-recorded gameplay video (vicarious playing) while their hemodynamic brain activation was measured with 3-tesla functional magnetic resonance imaging (fMRI). Wins and losses were paired with symmetrical monetary rewards and punishments during active and vicarious playing so that the external reward context remained identical during both conditions. Brain activation was stronger in the orbitomedial prefrontal cortex (omPFC) during winning than losing, both during active and vicarious playing. In contrast, both wins and losses suppressed activations in the midbrain and striatum during active playing; however, the striatal suppression, particularly in the anterior putamen, was more pronounced during loss than win events. Sensorimotor confounds related to joystick movements did not account for the results. Self-ratings indicated losing to be more unpleasant during active than vicarious playing. Our findings demonstrate striatum to be selectively sensitive to self-acquired rewards, in contrast to frontal components of the reward circuit that process both self-acquired and passively received rewards. We propose that the striatal responses to repeated acquisition of rewards that are contingent on game related successes contribute to the motivational pull of video-game playing.

  15. Just watching the game ain’t enough: Striatal fMRI reward responses to successes and failures in a video game during active and vicarious playing

    Directory of Open Access Journals (Sweden)

    Jari eKätsyri

    2013-06-01

    Full Text Available Although the multimodal stimulation provided by modern audiovisual video games is pleasing by itself, the rewarding nature of video game playing depends critically also on the players’ active engagement in the gameplay. The extent to which active engagement influences dopaminergic brain reward circuit responses remains unsettled. Here we show that striatal reward circuit responses elicited by successes (wins) and failures (losses) in a video game are stronger during active than vicarious gameplay. Eleven healthy males both played a competitive first-person tank shooter game (active playing) and watched a pre-recorded gameplay video (vicarious playing) while their hemodynamic brain activation was measured with 3-tesla functional magnetic resonance imaging (fMRI). Wins and losses were paired with symmetrical monetary rewards and punishments during active and vicarious playing so that the external reward context remained identical during both conditions. Brain activation was stronger in the orbitomedial prefrontal cortex (omPFC) during winning than losing, both during active and vicarious playing conditions. In contrast, both wins and losses suppressed activations in the midbrain and striatum during active playing; however, the striatal suppression, particularly in the anterior putamen, was more pronounced during loss than win events. Sensorimotor confounds related to joystick movements did not account for the results. Self-ratings indicated losing to be more unpleasant during active than vicarious playing. Our findings demonstrate striatum to be selectively sensitive to self-acquired rewards, in contrast to frontal components of the reward circuit that process both self-acquired and passively received rewards. We propose that the striatal responses to repeated acquisition of rewards that are contingent on game related successes contribute to the motivational pull of video-game playing.

  16. Flexible goal imitation: Vicarious feedback influences stimulus-response binding by observation.

    Science.gov (United States)

    Giesen, Carina; Scherdin, Kerstin; Rothermund, Klaus

    2017-06-01

    This study investigated whether vicarious feedback influences binding processes between stimuli and observed responses. Two participants worked together in a shared color categorization task, taking the roles of actor and observer in turns. During a prime trial, participants saw a word while observing the other person executing a specific response. Automatic binding of words and observed responses into stimulus-response (S-R) episodes was assessed via word repetition effects in a subsequent probe trial in which either the same (compatible) or a different (incompatible) response had to be executed by the participants in response to the same or a different word. Results showed that vicarious prime feedback (i.e., the feedback that the other participant received for her or his response in the prime) modulated S-R retrieval effects: After positive vicarious prime feedback, typical S-R retrieval effects emerged (i.e., performance benefits for stimulus repetition probes with compatible responses, but performance costs for stimulus repetition probes with incompatible responses). Notably, however, S-R retrieval effects were reversed after negative vicarious prime feedback (meaning that stimulus repetition in the probe resulted in performance costs if prime and probe responses were compatible, and in performance benefits for incompatible responses). Findings are consistent with a flexible goal imitation account, according to which imitation is based on an interpretative and therefore feedback-sensitive reconstruction of action goals from observed movements. In concert with earlier findings, these data support the conclusion that transient S-R binding and retrieval processes are involved in social learning phenomena.

  17. Venezuela: Nueva Experiencia Audiovisual

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    Full Text Available In 1986, the Universidad Simón Bolívar (USB) created the Foundation for the Development of Audiovisual Art, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design, and photography of high artistic and technical quality, all without neglecting the theoretical and academic aspects of these disciplines.

  18. Vicarious retribution: the role of collective blame in intergroup aggression.

    Science.gov (United States)

    Lickel, Brian; Miller, Norman; Stenstrom, Douglas M; Denson, Thomas F; Schmader, Toni

    2006-01-01

    We provide a new framework for understanding 1 aspect of aggressive conflict between groups, which we refer to as vicarious retribution. Vicarious retribution occurs when a member of a group commits an act of aggression toward the members of an outgroup for an assault or provocation that had no personal consequences for him or her but which did harm a fellow ingroup member. Furthermore, retribution is often directed at outgroup members who, themselves, were not the direct causal agents in the original attack against the person's ingroup. Thus, retribution is vicarious in that neither the agent of retaliation nor the target of retribution were directly involved in the original event that precipitated the intergroup conflict. We describe how ingroup identification, outgroup entitativity, and other variables, such as group power, influence vicarious retribution. We conclude by considering a variety of conflict reduction strategies in light of this new theoretical framework.

  19. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and viewer are addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  20. Vicarious Calibration of Beijing-1 Multispectral Imagers

    Directory of Open Access Journals (Sweden)

    Zhengchao Chen

    2014-02-01

    Full Text Available For on-orbit calibration of the Beijing-1 multispectral imagers (Beijing-1/MS), a field calibration campaign was performed at the Dunhuang calibration site during September and October of 2008. Based on the in situ data and images from Beijing-1 and Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), three vicarious calibration methods (i.e., reflectance-based, irradiance-based, and cross-calibration) were used to calculate the top-of-atmosphere (TOA) radiance of Beijing-1. An analysis was then performed to determine or identify systematic and accidental errors, and the overall uncertainty was assessed for each individual method. The findings show that the reflectance-based method has an uncertainty of more than 10% if the aerosol optical depth (AOD) exceeds 0.2. The cross-calibration method is able to reach an error level within 7% if the images are selected carefully. The final calibration coefficients were derived from the irradiance-based data for 6 September 2008, with an uncertainty estimated to be less than 5%.
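    The reflectance-based approach described above ultimately reduces, for each band, to relating sensor digital numbers over the calibration site to a simulated top-of-atmosphere radiance. The sketch below shows that final step only, under the assumption of a zero-offset linear sensor model; the band names, digital numbers, and radiances are hypothetical, and the radiative-transfer simulation itself is not reproduced.

```python
# Minimal sketch of deriving reflectance-based calibration coefficients.
# Assumes TOA radiances have already been simulated from in situ surface
# reflectance and atmospheric measurements (e.g., with a radiative transfer
# code); values and band names below are placeholders, not the paper's data.
import numpy as np

# Mean digital numbers over the calibration site ROI, per band (hypothetical).
dn = {"band1": 412.0, "band2": 498.0, "band3": 561.0, "band4": 377.0}

# Simulated TOA radiance for the same overpass, W m^-2 sr^-1 um^-1 (hypothetical).
l_toa = {"band1": 88.4, "band2": 101.2, "band3": 95.7, "band4": 60.3}

# With a zero-offset linear model L = gain * DN, the gain is simply the ratio.
gains = {band: l_toa[band] / dn[band] for band in dn}
for band, gain in gains.items():
    print(f"{band}: gain = {gain:.4f} W m^-2 sr^-1 um^-1 per DN")
```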

  1. School Building Design and Audio-Visual Resources.

    Science.gov (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  2. Use of Audiovisual Media and Equipment by Medical Educationists ...

    African Journals Online (AJOL)

    The most frequently used audiovisual medium and equipment is transparency on Overhead projector (O. H. P.) while the medium and equipment that is barely used for teaching is computer graphics on multi-media projector. This study also suggests ways of improving teaching-learning processes in medical education, ...

  3. Vicarious experience affects patients' treatment preferences for depression.

    Directory of Open Access Journals (Sweden)

    Seth A Berkowitz

    Full Text Available Depression is common in primary care but often under-treated. Personal experiences with depression can affect adherence to therapy, but the effect of vicarious experience is unstudied. We sought to evaluate the association between a patient's vicarious experiences with depression (those of friends or family) and treatment preferences for depressive symptoms. We sampled 1054 English and/or Spanish speaking adult subjects from July through December 2008, randomly selected from the 2008 California Behavioral Risk Factor Survey System, regarding depressive symptoms and treatment preferences. We then constructed a unidimensional scale using item analysis that reflects attitudes about antidepressant pharmacotherapy. This became the dependent variable in linear regression analyses to examine the association between vicarious experiences and treatment preferences for depressive symptoms. Our sample was 68% female, 91% white, and 13% Hispanic. Age ranged from 18-94 years. Mean PHQ-9 score was 4.3; 14.5% of respondents had a PHQ-9 score >9.0, consistent with active depressive symptoms. Analyses controlling for current depression symptoms and socio-demographic factors found that in patients both with (coefficient 1.08, p = 0.03) and without (coefficient 0.77, p = 0.03) a personal history of depression, having a vicarious experience (of family and friend, respectively) with depression is associated with a more favorable attitude towards antidepressant medications. Patients with vicarious experiences of depression express more acceptance of pharmacotherapy. Conversely, patients lacking vicarious experiences of depression have more negative attitudes towards antidepressants. When discussing treatment with patients, clinicians should inquire about vicarious experiences of depression. This information may identify patients at greater risk for non-adherence and lead to more tailored patient-specific education about treatment.
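    For readers who want the shape of the analysis, the following sketch mirrors the kind of adjusted linear regression described above: the antidepressant-attitude scale regressed on a vicarious-experience indicator while controlling for current symptoms and demographics. The data frame, column names, and covariate set are assumptions for illustration, not the study's data or exact model.

```python
# Hypothetical sketch of the adjusted linear regression described above:
# attitude scale ~ vicarious experience + PHQ-9 + demographics.
# The data frame and column names are illustrative, not the study's data set.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1054
df = pd.DataFrame({
    "attitude_scale": rng.normal(0, 1, n),   # unidimensional attitude scale (dependent variable)
    "vicarious_exp": rng.integers(0, 2, n),  # friend/family history of depression (indicator)
    "phq9": rng.integers(0, 20, n),          # current depressive symptoms
    "age": rng.integers(18, 95, n),
    "female": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["vicarious_exp", "phq9", "age", "female"]])
model = sm.OLS(df["attitude_scale"], X).fit()
print(model.summary())  # the coefficient on vicarious_exp is the quantity of interest
```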

  4. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    Science.gov (United States)

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  5. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  6. Explaining Self and Vicarious Reactance: A Process Model Approach.

    Science.gov (United States)

    Sittenthaler, Sandra; Jonas, Eva; Traut-Mattausch, Eva

    2016-04-01

    Research shows that people experience a motivational state of agitation known as reactance when they perceive restrictions to their freedoms. However, research has yet to show whether people experience reactance if they merely observe the restriction of another person's freedom. In Study 1, we activated realistic vicarious reactance in the laboratory. In Study 2, we compared people's responses with their own and others' restrictions and found the same levels of experienced reactance and behavioral intentions as well as aggressive tendencies. We did, however, find differences in physiological arousal: Physiological arousal increased quickly after participants imagined their own freedom being restricted, but arousal in response to imagining a friend's freedom being threatened was weaker and delayed. In line with the physiological data, Study 3's results showed that self-restrictions aroused more emotional thoughts than vicarious restrictions, which induced more cognitive responses. Furthermore, in Study 4a, a cognitive task affected only the cognitive process behind vicarious reactance. In contrast, in Study 4b, an emotional task affected self-reactance but not vicarious reactance. We propose a process model explaining the emotional and cognitive processes of self- and vicarious reactance. © 2016 by the Society for Personality and Social Psychology, Inc.

  7. Functions of personal and vicarious life stories: Identity and empathy

    DEFF Research Database (Denmark)

    Lind, Majse; Thomsen, Dorthe Kirkegaard

    2018-01-01

    The present study investigates functions of personal and vicarious life stories focusing on identity and empathy. Two-hundred-and-forty Danish high school students completed two life story questionnaires: one for their personal life story and one for a close other’s life story. In both questionnaires, they identified up to 10 chapters and self-rated the chapters on valence and valence of causal connections. In addition, they completed measures of identity disturbance and empathy. More positive personal life stories were related to lower identity disturbance and higher empathy. Vicarious life stories showed a similar pattern with respect to identity but surprisingly were unrelated to empathy. In addition, we found positive correlations between personal and vicarious life stories for number of chapters, chapter valence, and valence of causal connections. The study indicates that both personal

  8. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
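    A minimal sketch in the spirit of the simulations described above is given below: a two-component Gaussian mixture is fit, without labels, to joint auditory-visual cue values and then queried with a mismatched audiovisual token. The cue dimensions and distributions are invented for illustration and are not the authors' model or stimuli.

```python
# Toy sketch in the spirit of the GMM simulations described above: two
# phonological categories defined over a joint (auditory cue, visual cue)
# space, learned without labels from distributional statistics alone.
# The cue distributions are invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Category A: short VOT-like auditory cue, mostly closed-lips visual cue.
cat_a = rng.multivariate_normal([10.0, 0.2], [[9.0, 0.05], [0.05, 0.01]], size=500)
# Category B: long VOT-like auditory cue, mostly open-lips visual cue.
cat_b = rng.multivariate_normal([60.0, 0.8], [[25.0, 0.05], [0.05, 0.01]], size=500)
cues = np.vstack([cat_a, cat_b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(cues)

# A "mismatched" audiovisual token: auditory cue near A, visual cue near B.
mcgurk_like = np.array([[15.0, 0.75]])
print("posterior over learned categories:", gmm.predict_proba(mcgurk_like).round(3))
```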

  9. Plantilla 1: El documento audiovisual: elementos importantes

    OpenAIRE

    Alemany, Dolores

    2011-01-01

    The concept of the audiovisual document and of audiovisual documentation, examining in depth the distinction between documentation of moving images, with possible incorporation of sound, and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works, and audiovisual heritage according to Félix del Valle.

  10. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    Science.gov (United States)

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    Science.gov (United States)

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  12. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed by JPEG compression codec and sound samples compressed by MPEG-1 Layer III. Images and sounds have various contents. It simulates a real situation when the subject listens to compressed music and watches compressed pictures without the access to original, i.e. uncompressed signals.
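    A small sketch of how ratings from such a simple rating test might be aggregated is shown below: mean opinion scores per combination of JPEG and MP3 quality level, with dispersion. The ratings table and quality levels are made up for illustration.

```python
# Illustrative aggregation of ratings from a simple rating test like the one
# described above: mean opinion score (MOS) per combination of video (JPEG)
# and audio (MPEG-1 Layer III) quality level. The ratings table is invented.
import pandas as pd

ratings = pd.DataFrame({
    "subject":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "jpeg_q":   [20, 60, 90, 20, 60, 90, 20, 60, 90],   # JPEG quality factor
    "mp3_kbps": [64, 128, 320, 64, 128, 320, 64, 128, 320],
    "rating":   [2, 3, 5, 1, 4, 5, 2, 4, 4],            # 1-5 opinion score
})

mos = ratings.groupby(["jpeg_q", "mp3_kbps"])["rating"].agg(["mean", "std", "count"])
print(mos)  # MOS plus dispersion per audiovisual condition
```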

  13. Vicarious motor activation during action perception: beyond correlational evidence

    Directory of Open Access Journals (Sweden)

    Alessio eAvenanti

    2013-05-01

    Full Text Available Neurophysiological and imaging studies have shown that seeing the actions of other individuals brings about the vicarious activation of motor regions involved in performing the same actions. While this suggests a simulative mechanism mediating the perception of others’ actions, one cannot use such evidence to make inferences about the functional significance of vicarious activations. Indeed, a central aim in social neuroscience is to comprehend how vicarious activations allow the understanding of other people’s behavior, and this requires the use of stimulation or lesion methods to establish causal links from brain activity to cognitive functions. In the present work we review studies investigating the effects of transient manipulations of brain activity or stable lesions in the motor system on individuals’ ability to perceive and understand the actions of others. We conclude there is now compelling evidence that neural activity in the motor system is critical for such cognitive ability. More research using causal methods, however, is needed in order to disclose the limits and the conditions under which vicarious activations are required to perceive and understand the actions of others as well as their emotions and somatic feelings.

  14. Using modeling and vicarious reinforcement to produce more positive attitudes toward mental health treatment.

    Science.gov (United States)

    Buckley, Gary I; Malouff, John M

    2005-05-01

    In this study, the authors evaluated the effectiveness of a video, developed for this study and using principles of cognitive learning theory, to produce positive attitudinal change toward mental health treatment. The participants were 35 men and 45 women who were randomly assigned to watch either an experimental video, which included 3 positive 1st-person accounts of psychotherapy or a control video that focused on the psychological construct of self. Pre-intervention, post-intervention, and 2-week follow-up levels of attitude toward mental health treatment were measured using the Attitude Toward Seeking Professional Help Scale (E. H. Fischer & J. L. Turner, 1970). The experimental video group showed a significantly greater increase in positive attitude than did the control group. These results support the effectiveness of using the vicarious reinforcement elements of cognitive learning theory as a basis for changing attitudes toward mental health treatment.

  15. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the

  16. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
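    To make the correlation measure concrete, the sketch below estimates a quadratic mutual information between a one-dimensional audio feature and a one-dimensional visual feature using Gaussian kernel density estimates and grid integration. It only illustrates the quantity involved; the paper's closed-form estimator with adaptive kernel bandwidths and its actual audiovisual features are not reproduced, and the synthetic features are assumptions.

```python
# Sketch of a quadratic mutual information (QMI) estimate between a 1-D audio
# feature and a 1-D visual feature, using Gaussian KDE and grid integration.
# Illustrates QMI = integral over (a, v) of (p(a, v) - p(a) p(v))^2 only; the
# paper's adaptive-bandwidth, closed-form estimator is not reproduced here.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
n = 400
audio = rng.normal(0.0, 1.0, n)                 # e.g., frame-wise audio energy (synthetic)
visual = 0.8 * audio + rng.normal(0.0, 0.6, n)  # correlated mouth-motion feature (synthetic)

joint_kde = gaussian_kde(np.vstack([audio, visual]))
a_kde, v_kde = gaussian_kde(audio), gaussian_kde(visual)

a_grid = np.linspace(audio.min(), audio.max(), 120)
v_grid = np.linspace(visual.min(), visual.max(), 120)
A, V = np.meshgrid(a_grid, v_grid, indexing="ij")

p_joint = joint_kde(np.vstack([A.ravel(), V.ravel()])).reshape(A.shape)
p_prod = np.outer(a_kde(a_grid), v_kde(v_grid))  # product of marginals on the same grid

da, dv = a_grid[1] - a_grid[0], v_grid[1] - v_grid[0]
qmi = np.sum((p_joint - p_prod) ** 2) * da * dv
print(f"QMI estimate: {qmi:.4f}")
```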

  17. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    Science.gov (United States)

    I Karipidis, Iliana; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. In correspondence, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short training session initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

  18. Game of Objects: vicarious causation and multi-modal media

    Directory of Open Access Journals (Sweden)

    Aaron Pedinotti

    2013-09-01

    Full Text Available This paper applies philosopher Graham Harman's object-oriented theory of "vicarious causation" to an analysis of the multi-modal media phenomenon known as "Game of Thrones." Examining the manner in which George R.R. Martin's best-selling series of fantasy novels has been adapted into a board game, a video game, and a hit HBO television series, it uses the changes entailed by these processes to trace the contours of vicariously generative relations. In the course of the resulting analysis, it provides new suggestions concerning the eidetic dimensions of Harman's causal model, particularly with regard to causation in linear networks and in differing types of game systems.

  19. The shifting roles of dispersal and vicariance in biogeography.

    OpenAIRE

    Zink, R M; Blackwell-Rago, R C; Ronquist, F

    2000-01-01

    Dispersal and vicariance are often contrasted as competing processes primarily responsible for spatial and temporal patterns of biotic diversity. Recent methods of biogeographical reconstruction recognize the potential of both processes, and the emerging question is about discovering their relative frequencies. Relatively few empirical studies, especially those employing molecular phylogenies that allow a temporal perspective, have attempted to estimate the relative roles of dispersal and vic...

  20. Copyright for audiovisual work and analysis of websites offering audiovisual works

    OpenAIRE

    Chrastecká, Nicolle

    2014-01-01

    This Bachelor's thesis deals with the matter of audiovisual piracy. It discusses the question of audiovisual piracy being caused not by the wrong interpretation of law but by the lack of competitiveness among websites with legal audiovisual content. This thesis questions the quality of legal interpretation in the matter of audiovisual piracy and focuses on its sufficiency. It analyses the responsibility of website providers, providers of the illegal content, the responsibility of illegal cont...

  1. Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.

    Science.gov (United States)

    Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P

    2018-02-01

    Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Net neutrality and audiovisual services

    OpenAIRE

    van Eijk, N.; Nikoltchev, S.

    2011-01-01

    Net neutrality is high on the European agenda. New regulations for the communication sector provide a legal framework for net neutrality and need to be implemented on both a European and a national level. The key element is not just about blocking or slowing down traffic across communication networks: the control over the distribution of audiovisual services constitutes a vital part of the problem. In this contribution, the phenomenon of net neutrality is described first. Next, the European a...

  3. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider the quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model the semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on the audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content where both the video and audio channels may be strongly degraded, and the audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.

  4. The efficacy of an audiovisual aid in teaching the Neo-Classical ...

    African Journals Online (AJOL)

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, ...

  5. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation
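    As a concrete illustration of how a temporal binding window can be summarised, the sketch below fits a simultaneity-judgment curve with separate widths for auditory-leading (negative SOA) and visual-leading (positive SOA) trials. The SOAs, response proportions, and curve form are hypothetical and are not the analysis used in the study above.

```python
# Illustrative fit of an audiovisual simultaneity-judgment curve, estimating
# the binding-window width separately for auditory-leading (negative SOA) and
# visual-leading (positive SOA) sides. SOAs and response proportions are made up.
import numpy as np
from scipy.optimize import curve_fit

def asym_gaussian(soa, amp, mu, sigma_a, sigma_v):
    # Separate widths for the auditory-leading (soa < mu) and
    # visual-leading (soa >= mu) sides of the curve.
    sigma = np.where(soa < mu, sigma_a, sigma_v)
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

soa_ms = np.array([-400, -300, -200, -100, -50, 0, 50, 100, 200, 300, 400])
p_sync = np.array([0.05, 0.15, 0.40, 0.75, 0.90, 0.95, 0.92, 0.85, 0.60, 0.35, 0.15])

(amp, mu, sig_a, sig_v), _ = curve_fit(
    asym_gaussian, soa_ms, p_sync, p0=[1.0, 20.0, 120.0, 180.0])

print(f"auditory-leading width ~ {sig_a:.0f} ms, visual-leading width ~ {sig_v:.0f} ms")
```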

  6. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we show that integrating the information from audio and video leads to an improved reliability of audiovisual approach in

  7. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  8. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively

  9. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

    Full Text Available This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, difficulties imposed by the verbal foundation of knowledge, the limited use within libraries, and ways toward a more consistent analysis of audiovisual media, supported by the semiotics of Charles Peirce.

  10. Types of vicarious learning experienced by pre-dialysis patients

    OpenAIRE

    McCarthy, Kate; Sturt, Jackie; Adams, Ann

    2015-01-01

    Objective: Haemodialysis and peritoneal dialysis renal replacement treatment options are in clinical equipoise, although the cost of haemodialysis to the National Health Service is £16,411/patient/year greater than peritoneal dialysis. Treatment decision-making takes place during the pre-dialysis year when estimated glomerular filtration rate drops to between 15 and 30 mL/min/1.73 m2. Renal disease can be familial, and the majority of patients have considerable health service experience when ...

  11. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Full Text Available Michel Foucault teaches that all systematic speech, including speech that claims to be "neutral" or "a disinterested, objective view of what happens", is in fact a mechanism for articulating knowledge and, in turn, for forming power. The appearance of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of statements by filmmakers, essays by academics, and predictions by media demiurges.

  12. Beyond Vicary's fantasies: The impact of subliminal priming and brand choice

    NARCIS (Netherlands)

    Karremans, J.C.T.M.; Stroebe, W.; Claus, J.

    2006-01-01

    With his claim to have increased sales of Coca Cola and popcorn in a movie theatre through subliminal messages flashed on the screen, James Vicary raised the possibility of subliminal advertising. Nobody has ever replicated Vicary's findings and his study was a hoax. This article reports two

  13. Vicarious Trauma: Predictors of Clinicians' Disrupted Cognitions about Self-Esteem and Self-Intimacy

    Science.gov (United States)

    Way, Ineke; VanDeusen, Karen; Cottrell, Tom

    2007-01-01

    This study examined vicarious trauma in clinicians who provide sexual abuse treatment (N = 383). A random sample of clinical members from the Association for the Treatment of Sexual Abusers and American Professional Society on the Abuse of Children were surveyed. Vicarious trauma was measured using the Trauma Stress Institute Belief Scale…

  14. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    29 CFR 2.13 (Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings), § 2.13 Audiovisual coverage prohibited: The Department shall not permit audiovisual coverage of the...

  15. My partner's stories: relationships between personal and vicarious life stories within romantic couples.

    Science.gov (United States)

    Panattoni, Katherine; Thomsen, Dorthe Kirkegaard

    2018-06-12

    In this paper, we examined relationships and differences between personal and vicarious life stories, i.e., the life stories one knows of others. Personal and vicarious life stories of both members of 51 young couples (102 participants), based on McAdams' Life Story Interview (2008), were collected. We found significant positive relationships between participants' personal and vicarious life stories on agency and communion themes and redemption sequences. We also found significant positive relationships between participants' vicarious life stories about their partners and those partners' personal life stories on agency and communion, but not redemption. Furthermore, these relationships were not explained by similarity between couples' two personal life stories, as no associations were found between couples' personal stories on agency, communion and redemption. These results suggest that the way we construct the vicarious life stories of close others may reflect how we construct our personal life stories.

  16. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    Full Text Available This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability of collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt from a larger investigation whose unit of analysis is Online Creation Communities whose central node of activity is in Catalan territory. For the 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis, through face-to-face interviews conducted in 8 of the cases studied. The research, whose conclusions we summarize in this article, leads us to conclude that the sustainability of a project depends largely on relationships of trust and interdependence between the different voluntary agents, on non-monetary contributions and rewards, and on freely usable resources and infrastructure. Altogether, this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern it.

  17. Risk of vicarious trauma in nursing research: a focused mapping review and synthesis.

    Science.gov (United States)

    Taylor, Julie; Bradbury-Jones, Caroline; Breckenridge, Jenna P; Jones, Christine; Herber, Oliver Rudolf

    2016-10-01

    To provide a snapshot of how vicarious trauma is considered within the published nursing research literature. Vicarious trauma (secondary traumatic stress) has been the focus of attention in nursing practice for many years. The areas most likely to invoke vicarious trauma in research have been suggested to be abuse/violence and death/dying. What is not known is how researchers account for the risks of vicarious trauma in research. Focused mapping review and synthesis. Empirical studies meeting criteria for abuse/violence or death/dying, published in relevant top Scopus-ranked nursing journals (n = 6) from January 2009 to December 2014. Relevant papers were scrutinised for the extent to which researchers discussed the risk of vicarious trauma. Aspects of the studies were mapped systematically to a pre-defined template, allowing patterns and gaps in authors' reporting to be determined. These were synthesised into a coherent profile of current reporting practices and, from this, a new conceptualisation seeking to anticipate and address the risk of vicarious trauma was developed. Two thousand five hundred and three papers were published during the review period, of which 104 met the inclusion criteria. Studies were distributed evenly by method (52 qualitative; 51 quantitative; one mixed methods) and by focus (54 abuse/violence; 50 death/dying). The majority of studies (98) were carried out in adult populations. Only two papers reported on vicarious trauma. The conceptualisation of vicarious trauma takes account of both the sensitivity of the substantive data collected and the closeness of those involved with the research. This might assist researchers in designing ethical and protective research and foreground the importance of managing risks of vicarious trauma. Vicarious trauma is not well considered in research into clinically important topics. Our proposed framework allows for consideration of these risks so that precautionary measures can be put in place to minimise harm to staff. © 2016

  18. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.
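    One ingredient mentioned above, temporal pooling of short sample-based quality scores, can be illustrated with a toy rule that weights recent and badly degraded segments more heavily. The sketch below is one plausible pooling rule for illustration only, not the model developed in the book; the per-segment scores are invented.

```python
# Toy sketch of temporal pooling: combining per-segment audiovisual quality
# scores (e.g., 10-second MOS estimates) into a single call-level score.
# The weighting (recency plus extra weight on the worst segments) is only one
# plausible pooling rule, shown for illustration.
import numpy as np

segment_mos = np.array([4.2, 4.0, 2.1, 2.3, 3.8, 4.1])  # hypothetical per-segment scores

# Recency weights: later segments count more (simple exponential ramp).
recency = np.exp(np.linspace(-1.0, 0.0, segment_mos.size))
# Degradation weights: low-quality segments count more than high-quality ones.
severity = (5.0 - segment_mos) + 1.0

weights = recency * severity
call_mos = np.sum(weights * segment_mos) / np.sum(weights)
# Pooled score (~3.16) sits below the plain mean (~3.42) because the two poor
# segments are weighted up, reflecting their stronger impact on perceived quality.
print(f"pooled call-level MOS: {call_mos:.2f}")
```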

  19. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    Science.gov (United States)

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  20. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    Science.gov (United States)

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  1. Testing Audiovisual Comprehension Tasks with Questions Embedded in Videos as Subtitles: A Pilot Multimethod Study

    Science.gov (United States)

    Núñez, Juan Carlos Casañ

    2017-01-01

    Listening, watching, reading and writing simultaneously in a foreign language is very complex. This paper is part of wider research which explores the use of audiovisual comprehension questions imprinted in the video image in the form of subtitles and synchronized with the relevant fragments for the purpose of language learning and testing.…

  2. Vocabulary Teaching in Foreign Language via Audiovisual Method Technique of Listening and Following Writing Scripts

    Science.gov (United States)

    Bozavli, Ebubekir

    2017-01-01

    The objective of this study is to compare the effects of conventional and audiovisual methods on learning efficiency and success of retention with regard to vocabulary teaching in a foreign language. The research sample consists of 21 undergraduate and 7 graduate students studying at the Department of French Language Teaching, Kazim Karabekir Faculty of…

  3. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  4. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception...... and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. Also, it is argued that the combination of insights from quantitative experimental studies and qualitative audiovisual film analysis may actually...... be combined into a more complex understanding of how audiovisual features interact in the minds of their audiences. This is demonstrated through a review of a series of experimental studies. Yet, it is also argued that textual analysis and concepts from within film and music studies can provide insights...

  5. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  6. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without assuming frame independence. The experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.
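
    The model described above is a deep generative architecture; as a much simpler point of reference, the sketch below shows plain feature-level (early) fusion of audio and visual descriptors with an off-the-shelf classifier on synthetic data. It is not the DDBN from the paper; the feature dimensions, labels, and scikit-learn pipeline are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Made-up per-utterance features: 13 audio dims (e.g. MFCC means) + 8 visual dims (e.g. lip shape).
    n_utterances, n_audio, n_visual, n_words = 600, 13, 8, 5
    labels = rng.integers(0, n_words, n_utterances)
    audio_feats = rng.normal(labels[:, None], 1.0, (n_utterances, n_audio))
    visual_feats = rng.normal(labels[:, None], 2.0, (n_utterances, n_visual))

    # Early (feature-level) fusion: concatenate the two modalities before classification.
    fused = np.hstack([audio_feats, visual_feats])

    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("fused word accuracy:", round(clf.score(X_te, y_te), 3))

    # Audio-only baseline for comparison.
    a_tr, a_te = X_tr[:, :n_audio], X_te[:, :n_audio]
    clf_a = LogisticRegression(max_iter=1000).fit(a_tr, y_tr)
    print("audio-only accuracy:", round(clf_a.score(a_te, y_te), 3))
    ```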

  7. Vicarious pain experiences while observing another in pain: an experimental approach

    Directory of Open Access Journals (Sweden)

    Sophie eVandenbroucke

    2013-06-01

    Objective: This study aimed at developing an experimental paradigm to assess vicarious pain experiences. We further explored the putative moderating role of observer characteristics such as hypervigilance for pain and dispositional empathy. Methods: Two experiments are reported using a similar procedure. Undergraduate students were selected based upon whether they reported vicarious pain in daily life, and categorized into a pain responder group or a comparison group. Participants were presented a series of videos showing hands being pricked whilst occasionally receiving pricking (electrocutaneous) stimuli themselves. In congruent trials, pricking and visual stimuli were applied to the same spatial location. In incongruent trials, pricking and visual stimuli were in opposite spatial locations. Participants were required to report on which location they felt a pricking sensation. Of primary interest was the effect of viewing another in pain upon vicarious pain errors, i.e., the number of trials in which an illusory sensation was reported. Furthermore, we explored the effect of individual differences in hypervigilance to pain, dispositional empathy and the rubber hand illusion (RHI) upon vicarious pain errors. Results: Results of both experiments indicated that the number of vicarious pain errors was overall low. In line with expectations, the number of vicarious pain errors was higher in the pain responder group than in the comparison group. Self-reported hypervigilance for pain lowered the probability of reporting vicarious pain errors in the pain responder group, but dispositional empathy and the RHI did not. Conclusion: Our paradigm allows measuring vicarious pain experiences in students. However, the prevalence of vicarious experiences of pain is low, and only a small percentage of participants display the phenomenon. It remains unknown, however, which variables affect its occurrence.

  8. Conflict between place and response navigation strategies: effects on vicarious trial and error (VTE) behaviors.

    Science.gov (United States)

    Schmidt, Brandy; Papale, Andrew; Redish, A David; Markus, Etan J

    2013-02-15

    Navigation can be accomplished through multiple decision-making strategies, using different information-processing computations. A well-studied dichotomy in these decision-making strategies compares hippocampal-dependent "place" and dorsal-lateral striatal-dependent "response" strategies. A place strategy depends on the ability to flexibly respond to environmental cues, while a response strategy depends on the ability to quickly recognize and react to situations with well-learned action-outcome relationships. When rats reach decision points, they sometimes pause and orient toward the potential routes of travel, a process termed vicarious trial and error (VTE). VTE co-occurs with neurophysiological information processing, including sweeps of representation ahead of the animal in the hippocampus and transient representations of reward in the ventral striatum and orbitofrontal cortex. To examine the relationship between VTE and the place/response strategy dichotomy, we analyzed data in which rats were cued to switch between place and response strategies on a plus maze. The configuration of the maze allowed for place and response strategies to work competitively or cooperatively. Animals showed increased VTE on trials entailing competition between navigational systems, linking VTE with deliberative decision-making. Even in a well-learned task, VTE was preferentially exhibited when a spatial selection was required, further linking VTE behavior with decision-making associated with hippocampal processing.

  9. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sound, and (iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  10. Vicarious revenge and the death of Osama bin Laden.

    Science.gov (United States)

    Gollwitzer, Mario; Skitka, Linda J; Wisneski, Daniel; Sjöström, Arne; Liberman, Peter; Nazir, Syed Javed; Bushman, Brad J

    2014-05-01

    Three hypotheses were derived from research on vicarious revenge and tested in the context of the assassination of Osama bin Laden in 2011. In line with the notion that revenge aims at delivering a message (the "message hypothesis"), Study 1 shows that Americans' vengeful desires in the aftermath of 9/11 predicted a sense of justice achieved after bin Laden's death, and that this effect was mediated by perceptions that his assassination sent a message to the perpetrators to not "mess" with the United States. In line with the "blood lust hypothesis," his assassination also sparked a desire to take further revenge and to continue the "war on terror." Finally, in line with the "intent hypothesis," Study 2 shows that Americans (but not Pakistanis or Germans) considered the fact that bin Laden was killed intentionally more satisfactory than the possibility of bin Laden being killed accidentally (e.g., in an airplane crash).

  11. Vicarious traumatization in the work with survivors of childhood trauma.

    Science.gov (United States)

    Crothers, D

    1995-04-01

    1. Persons working with victims of childhood trauma may experience traumatic countertransference and vicarious traumatization. After hearing a patient's trauma story, which is a necessary part of childhood trauma therapy, staff may experience post-traumatic stress disorder, imagery associated with the patient's story and the same disruptions in relationships as the patient. 2. During the first 6 months of working with survivors of childhood trauma, common behaviors of staff members were identified, including a lack of attention, poor work performance, medication errors, sick calls, treatment errors, irreverence, hypervigilance, and somatic complaints. 3. Staff working with victims of childhood trauma can obtain the necessary staff support through team support, in traumatic events, and in a leadership role.

  12. Hippocampus, delay discounting, and vicarious trial-and-error.

    Science.gov (United States)

    Bett, David; Murdoch, Lauren H; Wood, Emma R; Dudchenko, Paul A

    2015-05-01

    In decision-making, an immediate reward is usually preferred to a delayed reward, even if the latter is larger. We tested whether the hippocampus is necessary for this form of temporal discounting, and for vicarious trial-and-error at the decision point. Rats were trained on a recently developed, adjustable delay-discounting task (Papale et al. (2012) Cogn Affect Behav Neurosci 12:513-526), which featured a choice between a small, nearly immediate reward, and a larger, delayed reward. Rats then received either hippocampus or sham lesions. Animals with hippocampus lesions adjusted the delay for the larger reward to a level similar to that of sham-lesioned animals, suggesting a similar valuation capacity. However, the hippocampus lesion group spent significantly longer investigating the small and large rewards in the first part of the sessions, and were less sensitive to changes in the amount of reward in the large reward maze arm. Both sham- and hippocampus-lesioned rats showed a greater amount of vicarious trial-and-error on trials in which the delay was adjusted. In a nonadjusting version of the delay discounting task, animals with hippocampus lesions showed more variability in their preference for a larger reward that was delayed by 10 s compared with sham-lesioned animals. To verify the lesion behaviorally, rats were subsequently trained on a water maze task, and rats with hippocampus lesions were significantly impaired compared with sham-lesioned animals. The findings on the delay discounting tasks suggest that damage to the hippocampus may impair the detection of reward magnitude. © 2014 Wiley Periodicals, Inc.

  13. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    Science.gov (United States)

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was subsequently shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in the performance of both groups over the duration of the three MCQs. The intervention group significantly outperformed their control group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: This study appraises an intervention in medical education. Kirkpatrick Level 2b (modification of knowledge). Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Vicariance or long-distance dispersal: historical biogeography of the pantropical subfamily Chrysophylloideae (Sapotaceae)

    Czech Academy of Sciences Publication Activity Database

    Bartish, Igor; Antonelli, A.; Richardson, J. E.; Swenson, U.

    2011-01-01

    Vol. 38, No. 1 (2011), pp. 177-190. ISSN 0305-0270. Institutional research plan: CEZ:AV0Z60050516. Keywords: molecular dating; Neotropics; vicariance. Subject RIV: EF - Botanics. Impact factor: 4.544 (2011).

  15. In-Orbit Vicarious Calibration for Ocean Color and Aerosol Products

    National Research Council Canada - National Science Library

    Wang, Menghua

    2005-01-01

    It is well known that, to accurately retrieve the spectrum of the water-leaving radiance and derive the ocean color products from satellite sensors, a vicarious calibration procedure, which performs...

  16. Hysteresis in audiovisual synchrony perception.

    Directory of Open Access Journals (Sweden)

    Jean-Rémy Martin

    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested whether perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditions. Participants were asked to judge the synchrony of the last (test) stimulus of an AV sequence with either constant or gradually changing AV intervals (constant and dynamic conditions, respectively). The onset timing of the test stimulus could be cued or not (prospective vs. retrospective conditions, respectively). We observed hysteretic effects for AV synchrony judgments in the retrospective condition that were independent of the constant or dynamic nature of the adapted stimuli; these effects disappeared in the prospective condition. The present findings suggest that knowing when to estimate a stimulus property has a crucial impact on perceptual simultaneity judgments. Our results extend beyond AV timing perception, and have strong implications for the comparative study of hysteresis and adaptation phenomena.

  17. A promessa do audiovisual interativo (The Promise of Interactive Audiovisual Media)

    Directory of Open Access Journals (Sweden)

    João Baptista Winck

    The audiovisual production chain uses cultural capital, especially creativity, as its main source of resources, inaugurating what has come to be called the creative economy. This value chain manufactures inventiveness as its raw material, transforming ideas into objects of large-scale consumption. The television industry is embedded in a larger conglomerate of industries, such as fashion, the arts, music and so on. This gigantic technological park brings together activities that take creation as their value, production at scale as their means, and the growth of intellectual property as an end in itself. The industrialization of creativity is gradually altering the body of theory about labor relations, tools and, above all, the concept of goods as products of intelligence.

  18. Common and distinct neural correlates of personal and vicarious reward: A quantitative meta-analysis

    Science.gov (United States)

    Morelli, Sylvia A.; Sacchet, Matthew D.; Zaki, Jamil

    2015-01-01

    Individuals experience reward not only when directly receiving positive outcomes (e.g., food or money), but also when observing others receive such outcomes. This latter phenomenon, known as vicarious reward, is a perennial topic of interest among psychologists and economists. More recently, neuroscientists have begun exploring the neuroanatomy underlying vicarious reward. Here we present a quantitative whole-brain meta-analysis of this emerging literature. We identified 25 functional neuroimaging studies that included contrasts between vicarious reward and a neutral control, and subjected these contrasts to an activation likelihood estimate (ALE) meta-analysis. This analysis revealed a consistent pattern of activation across studies, spanning structures typically associated with the computation of value (especially ventromedial prefrontal cortex) and mentalizing (including dorsomedial prefrontal cortex and superior temporal sulcus). We further quantitatively compared this activation pattern to activation foci from a previous meta-analysis of personal reward. Conjunction analyses yielded overlapping VMPFC activity in response to personal and vicarious reward. Contrast analyses identified preferential engagement of the nucleus accumbens in response to personal as compared to vicarious reward, and in mentalizing-related structures in response to vicarious as compared to personal reward. These data shed light on the common and unique components of the reward that individuals experience directly and through their social connections. PMID:25554428
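
    The conjunction and contrast logic described above can be illustrated on toy statistical maps. The sketch below uses a minimum-statistic conjunction and simple exclusive masks on made-up voxel-wise z-maps; this is a simplification of the ALE-based, permutation-tested analyses the authors report, and all regions, thresholds, and data are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy voxel-wise z-maps for "personal reward" and "vicarious reward" (flattened brain of 10,000 voxels).
    z_personal = rng.normal(0, 1, 10_000)
    z_vicarious = rng.normal(0, 1, 10_000)
    z_personal[:200] += 4.0      # shared value-related region (VMPFC-like, invented)
    z_vicarious[:200] += 4.0
    z_personal[200:350] += 4.0   # personal-only region (accumbens-like, invented)
    z_vicarious[350:500] += 4.0  # vicarious-only region (mentalizing-like, invented)

    z_thresh = 3.1  # roughly p < .001, one-tailed

    # Minimum-statistic conjunction: a voxel counts only if it survives in BOTH maps.
    conjunction = np.minimum(z_personal, z_vicarious) > z_thresh

    # Crude "contrasts": voxels significant in one map but not the other
    # (a proper contrast analysis tests the difference directly).
    personal_only = (z_personal > z_thresh) & ~(z_vicarious > z_thresh)
    vicarious_only = (z_vicarious > z_thresh) & ~(z_personal > z_thresh)

    print("conjunction voxels:", conjunction.sum())
    print("personal > vicarious voxels:", personal_only.sum())
    print("vicarious > personal voxels:", vicarious_only.sum())
    ```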

  19. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages....

  20. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  1. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  3. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and requirements for extension to support audiovisual files.

  4. A Catalan code of best practices for the audiovisual sector

    OpenAIRE

    Teodoro, Emma; Casanovas, Pompeu

    2010-01-01

    In spite of a new general law regarding Audiovisual Communication, the regulatory framework of the audiovisual sector in Spain can still be defined as huge, dispersed and obsolete. The first part of this paper provides an overview of the major challenges of the Spanish audiovisual sector as a result of the convergence of platforms, services and operators, paying special attention to the audiovisual sector in Catalonia. In the second part, we will present an example of self-regulation through...

  5. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    Title 29 Labor, Part 2, Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.12 Audiovisual coverage permitted (2010-07-01): The following are the types of hearings where the Department...

  6. Hybrid e-learning tool TransLearning

    NARCIS (Netherlands)

    Meij, van der Marjoleine G.; Kupper, Frank; Beers, P.J.; Broerse, Jacqueline E.W.

    2016-01-01

    E-learning and storytelling approaches can support informal vicarious learning within geographically widely distributed multi-stakeholder collaboration networks. This case study evaluates the hybrid e-learning and video-storytelling approach ‘TransLearning’ by investigating how its storytelling

  7. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO (Audiovisual Resource for Teaching and Learning in the Classroom: Analysis and Proposal of a Training Model)

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    The use of audiovisual, graphic and digital resources that are currently being introduced into the education system is spreading across several countries of the region, such as Chile, Colombia, Mexico, Cuba, El Salvador, Uruguay and Venezuela. Subtopics related to media education are analysed and justified, starting from the initiative of Spain and Portugal, countries that have become international protagonists of some educational models in the university context. Owing to the expansion of, and focus on, computing and the information and communication networks of the Internet, the audiovisual medium as a technological instrument is gaining ground as a dynamic and integrative resource, with special characteristics that distinguish it from the rest of the media that make up the audiovisual ecosystem. As a result of this research, two lines of application are proposed: (A) a proposal for iconic and audiovisual language as a learning objective and/or curricular subject in university study plans, with workshops for the development of audiovisual documents, digital photography and audiovisual production; and (B) the use of audiovisual resources as an educational medium, which would entail a prior training process for the teaching community, with activities recommended for teachers and students respectively. Consequently, suggestions are presented for implementing both lines of academic action. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  8. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent work in the field of audiovisual speech and, more specifically, techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, the transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measures of correspondence between audio and visual speech. Finally, the use of a synchrony measure for biometric identity verification based on talking faces is evaluated on the BANCA database.
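
    A minimal correspondence measure of the kind reviewed above can be built from windowed correlation between an audio-energy track and a mouth-opening track sampled at the same frame rate. The sketch below is a toy illustration on synthetic signals; it is not one of the specific techniques from the review, and the signal names, window sizes and data are assumptions.

    ```python
    import numpy as np

    def av_synchrony_score(audio_energy, mouth_opening, win=25, hop=10):
        """Mean windowed Pearson correlation between an audio-energy track and a
        mouth-opening track sampled at the same rate (illustrative measure only)."""
        scores = []
        for start in range(0, min(len(audio_energy), len(mouth_opening)) - win + 1, hop):
            a = audio_energy[start:start + win]
            v = mouth_opening[start:start + win]
            if a.std() > 0 and v.std() > 0:
                scores.append(np.corrcoef(a, v)[0, 1])
        return float(np.mean(scores)) if scores else 0.0

    # Toy signals at 25 "frames" per second: a genuine talking face vs. a mismatched dub.
    rng = np.random.default_rng(3)
    t = np.arange(500)
    articulation = np.abs(np.sin(0.2 * t)) + 0.1 * rng.normal(size=t.size)      # mouth opening
    audio_energy = articulation + 0.2 * rng.normal(size=t.size)                 # follows the mouth
    other_articulation = np.abs(np.sin(0.13 * t + 0.7)) + 0.1 * rng.normal(size=t.size)
    mismatched_audio = other_articulation + 0.2 * rng.normal(size=t.size)       # audio from another clip

    print("matched pair score:   ", round(av_synchrony_score(audio_energy, articulation), 3))
    print("mismatched pair score:", round(av_synchrony_score(mismatched_audio, articulation), 3))
    ```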

  9. Effects of vicarious pain on self-pain perception: investigating the role of awareness

    Science.gov (United States)

    Terrighena, Esslin L; Lu, Ge; Yuen, Wai Ping; Lee, Tatia MC; Keuper, Kati

    2017-01-01

    The observation of pain in others may enhance or reduce self-pain, yet the boundary conditions and factors that determine the direction of such effects are poorly understood. The current study set out to show that visual stimulus awareness plays a crucial role in determining whether vicarious pain primarily activates behavioral defense systems that enhance pain sensitivity and stimulate withdrawal or appetitive systems that attenuate pain sensitivity and stimulate approach. We employed a mixed factorial design with the between-subject factors exposure time (subliminal vs optimal) and vicarious pain (pain vs no pain images), and the within-subject factor session (baseline vs trial) to investigate how visual awareness of vicarious pain images affects subsequent self-pain in the cold-pressor test. Self-pain tolerance, intensity and unpleasantness were evaluated in a sample of 77 healthy participants. Results revealed significant interactions of exposure time and vicarious pain in all three dependent measures. In the presence of visual awareness (optimal condition), vicarious pain compared to no-pain elicited overall enhanced self-pain sensitivity, indexed by reduced pain tolerance and enhanced ratings of pain intensity and unpleasantness. Conversely, in the absence of visual awareness (subliminal condition), vicarious pain evoked decreased self-pain intensity and unpleasantness while pain tolerance remained unaffected. These findings suggest that the activation of defense mechanisms by vicarious pain depends on relatively elaborate cognitive processes, while – strikingly – the appetitive system is activated in a highly automatic manner, independent of stimulus awareness. Such mechanisms may have evolved to facilitate empathic, protective approach responses toward suffering individuals, ensuring survival of the protective social group. PMID:28831270

  10. Search in audiovisual broadcast archives : doctoral abstract

    NARCIS (Netherlands)

    Huurnink, B.

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage shot by overseas services for the evening news, or a documentary maker might require

  11. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  12. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  13. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  14. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  15. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  16. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  17. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  18. Catching Audiovisual Interactions With a First-Person Fisherman Video Game.

    Science.gov (United States)

    Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2017-07-01

    The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.

  19. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using the method of constant stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
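
    The adaptation effect described above (the shift in the mean of an individually fitted psychometric function) can be computed, under simplifying assumptions, by fitting a scaled Gaussian to the proportion of "synchronous" responses across stimulus onset asynchronies and comparing the fitted centres before and after adaptation. The sketch below does this on made-up data; the functional form, SOAs, and parameter values are illustrative and not the authors'.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def synchrony_curve(soa, amp, mu, sigma):
        """Proportion of 'synchronous' responses as a scaled Gaussian over SOA (ms)."""
        return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

    def fit_mu(soas, p_sync):
        params, _ = curve_fit(synchrony_curve, soas, p_sync,
                              p0=[1.0, 0.0, 150.0], maxfev=10_000)
        return params[1]  # mu = centre of the synchrony window

    # Toy data: SOAs in ms (negative = sound leads), proportion judged synchronous.
    soas = np.array([-300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
    rng = np.random.default_rng(4)
    baseline = synchrony_curve(soas, 0.95, 20, 140) + rng.normal(0, 0.02, soas.size)
    post_adapt = synchrony_curve(soas, 0.95, 80, 150) + rng.normal(0, 0.02, soas.size)

    shift = fit_mu(soas, post_adapt) - fit_mu(soas, baseline)
    print(f"adaptation effect (shift of fitted mean): {shift:.1f} ms toward the adapted lag")
    ```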

  20. Specific Vicariance of Two Primeval Lowland Forest Lichen Indicators

    Science.gov (United States)

    Kubiak, Dariusz; Osyczka, Piotr

    2017-06-01

    To date, the lichens Chrysothrix candelaris and Varicellaria hemisphaerica have been classified as accurate primeval lowland forest indicators. Both inhabit particularly valuable remnants of oak-hornbeam forests in Europe, but tend toward a specific kind of vicariance on a local scale. The present study was undertaken to determine the habitat factors responsible for this phenomenon and to verify the indicative and conservation value of these lichens. The main spatial and climatic parameters that, along with forest structure, potentially affect their distribution patterns and abundance were analysed in four complexes with typical oak-hornbeam stands in NE Poland. Fifty plots of 400 m2 each were chosen for detailed examination of stand structure and of the epiphytic lichens directly associated with the indicators. The study showed that the localities of the two species barely overlap within the same forest community in a relatively small geographical area. The occurrence of Chrysothrix candelaris depends basically only on the microhabitat space provided by old oaks, and its role as an indicator of the ecological continuity of habitat is limited. Varicellaria hemisphaerica is not tree specific, but sufficiently high habitat moisture is essential for the species, and it requires forests with a high proportion of deciduous trees at a wide landscape scale. Local landscape-level habitat continuity is more important for this species than the current age of the forest stand. Regardless of their indicative value, localities of both lichens within oak-hornbeam forests deserve special protection status since they form unique assemblages of exclusive epiphytes, including those with high conservation value.

  1. Effects of Vicarious Experiences of Nature, Environmental Attitudes, and Outdoor Recreation Benefits on Support for Increased Funding Allocations

    Science.gov (United States)

    Kil, Namyun

    2016-01-01

    This study examined the effects of vicarious experiences of nature, environmental attitudes, and recreation benefits sought by participants on their support for funding of natural resources and alternative energy options. Using a national scenic trail user survey, results demonstrated that vicarious experiences of nature influenced environmental…

  2. Audiovisual semantic congruency during encoding enhances memory performance.

    Science.gov (United States)

    Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa

    2015-01-01

    Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.

  3. Testing audiovisual comprehension tasks with questions embedded in videos as subtitles: a pilot multimethod study

    OpenAIRE

    Casañ Núñez, Juan Carlos

    2017-01-01

    [EN] Listening, watching, reading and writing simultaneously in a foreign language is very complex. This paper is part of wider research which explores the use of audiovisual comprehension questions imprinted in the video image in the form of subtitles and synchronized with the relevant fragments for the purpose of language learning and testing. Compared to viewings where the comprehension activity is available only on paper, this innovative methodology may provide some benefits. Among them, ...

  4. Multi-sensory learning and learning to read.

    Science.gov (United States)

    Blomert, Leo; Froyen, Dries

    2010-09-01

    The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar

  5. Vicarious social defeat stress: Bridging the gap between physical and emotional stress.

    Science.gov (United States)

    Sial, Omar K; Warren, Brandon L; Alcantara, Lyonna F; Parise, Eric M; Bolaños-Guzmán, Carlos A

    2016-01-30

    Animal models capable of differentiating the neurobiological intricacies between physical and emotional stress are scarce. Current models rely primarily on physical stressors (e.g., chronic unpredictable or mild stress, social defeat, learned helplessness), and neglect the impact of psychological stress alone. This is surprising given extensive evidence that a traumatic event need not be directly experienced to produce enduring perturbations on an individual's health and psychological well-being. Post-traumatic stress disorder (PTSD), a highly debilitating neuropsychiatric disorder characterized by intense fear of trauma-related stimuli, often occurs in individuals that have only witnessed a traumatic event. By modifying the chronic social defeat stress (CSDS) paradigm to include a witness component (witnessing the social defeat of another mouse), we demonstrate a novel behavioral paradigm capable of inducing a robust behavioral syndrome reminiscent of PTSD in emotionally stressed adult mice. We describe the vicarious social defeat stress (VSDS) model that is capable of inducing a host of behavioral deficits that include social avoidance and other depressive- and anxiety-like phenotypes in adult male mice. VSDS exposure induces weight loss and a spike in serum corticosterone (CORT) levels. A month after stress, these mice retain the social avoidant phenotype and have an increased CORT response when exposed to subsequent stress. The VSDS is a novel paradigm capable of inducing emotional stress by isolating physical stress/confrontation in mice. The VSDS model can be used to study the short- and long-term neurobiological consequences of exposure to emotional stress in mice. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Observing the restriction of another person: Vicarious reactance and the role of self-construal and culture

    Directory of Open Access Journals (Sweden)

    Sandra eSittenthaler

    2015-08-01

    Psychological reactance occurs in response to threats posed to perceived behavioral freedoms. Research has shown that people can also experience vicarious reactance. They feel restricted in their own freedom even though they are not personally involved in the restriction but only witness the situation. The phenomenon of vicarious reactance is especially interesting when considered in a cross-cultural context because the culturally specific self-construal plays a crucial role in understanding people's response to self- and vicariously experienced restrictions. Previous studies and our pilot study (N = 197) could show that people with a collectivistic cultural background show higher vicarious reactance compared to people with an individualistic cultural background. But does it matter whether people experience the vicarious restriction for an in-group or an out-group member? Differentiating vicarious in-group and vicarious out-group restrictions, Study 1 (N = 159) suggests that people with a more interdependent self-construal show stronger vicarious reactance only with regard to in-group restrictions but not with regard to out-group restrictions. In contrast, participants with a more independent self-construal experience stronger reactance when being self-restricted compared to vicariously restricted. Study 2 (N = 180) replicates this pattern conceptually with regard to individualistic and collectivistic cultural background groups. Additionally, participants' behavioral intentions show the same pattern of results. Moreover, a mediation analysis demonstrates that cultural differences in behavioral intentions could be explained through people's self-construal differences. Thus, the present studies provide new insights and show consistent evidence for vicarious reactance depending on participants' culturally determined self-construal.

  7. Observing the restriction of another person: vicarious reactance and the role of self-construal and culture.

    Science.gov (United States)

    Sittenthaler, Sandra; Traut-Mattausch, Eva; Jonas, Eva

    2015-01-01

    Psychological reactance occurs in response to threats posed to perceived behavioral freedoms. Research has shown that people can also experience vicarious reactance. They feel restricted in their own freedom even though they are not personally involved in the restriction but only witness the situation. The phenomenon of vicarious reactance is especially interesting when considered in a cross-cultural context because the cultural specific self-construal plays a crucial role in understanding people's response to self- and vicariously experienced restrictions. Previous studies and our pilot study (N = 197) could show that people with a collectivistic cultural background show higher vicarious reactance compared to people with an individualistic cultural background. But does it matter whether people experience the vicarious restriction for an in-group or an out-group member? Differentiating vicarious-in-group and vicarious-out-group restrictions, Study 1 (N = 159) suggests that people with a more interdependent self-construal show stronger vicarious reactance only with regard to in-group restrictions but not with regard to out-group restrictions. In contrast, participants with a more independent self-construal experience stronger reactance when being self-restricted compared to vicariously-restricted. Study 2 (N = 180) replicates this pattern conceptually with regard to individualistic and collectivistic cultural background groups. Additionally, participants' behavioral intentions show the same pattern of results. Moreover a mediation analysis demonstrates that cultural differences in behavioral intentions could be explained through people's self-construal differences. Thus, the present studies provide new insights and show consistent evidence for vicarious reactance depending on participants' culturally determined self-construal.

  8. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    INTERNATIONAL TRADE COMMISSION [DN 2884]. The U.S. International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products Containing the Same, concerning certain audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  9. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  10. Testosterone and estrogen impact social evaluations and vicarious emotions: A double-blind placebo-controlled study.

    Science.gov (United States)

    Olsson, Andreas; Kopsida, Eleni; Sorjonen, Kimmo; Savic, Ivanka

    2016-06-01

    The abilities to "read" other people's intentions and emotions, and to learn from their experiences, are critical to survival. Previous studies have highlighted the role of sex hormones, notably testosterone and estrogen, in these processes. Yet it is unclear how acute administration of these hormones affects social cognition and emotion. In the present double-blind placebo-controlled study, we administered an acute exogenous dose of testosterone or estrogen to healthy female and male volunteers, respectively, with the aim of investigating the effects of these steroids on social-cognitive and emotional processes. Following hormonal and placebo treatment, participants made (a) facial dominance judgments, (b) mental state inferences (Reading the Mind in the Eyes Test), and (c) learned aversive associations through watching others' emotional responses (observational fear learning [OFL]). Our results showed that testosterone administration to females enhanced ratings of facial dominance but diminished their accuracy in inferring mental states. In men, estrogen administration resulted in an increase in emotional (vicarious) reactivity when watching a distressed other during the OFL task. Taken together, these results suggest that sex hormones affect social-cognitive and emotional functions at several levels, linking our results to neuropsychiatric disorders in which these functions are impaired. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features, namely smiling ratio, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skills training when using audio features versus audiovisual features. Results showed that the visual features were effective in improving users' social skills.
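
    The features named above reduce to simple per-session statistics over per-frame tracker output. Below is a minimal sketch of that aggregation step, assuming hypothetical per-frame smile flags and head-pose angles (yaw, pitch) from any off-the-shelf face tracker; it is an illustration, not the authors' implementation.

```python
import numpy as np

def session_features(smile_flags, yaw_deg, pitch_deg):
    """Aggregate per-frame estimates into session-level audiovisual features.

    smile_flags        : 0/1 per video frame (1 = smiling), from any smile detector
    yaw_deg, pitch_deg : per-frame head-pose angles in degrees, from any pose estimator
    Returns the smiling ratio plus mean/std of yaw and pitch.
    """
    smile = np.asarray(smile_flags, dtype=float)
    yaw = np.asarray(yaw_deg, dtype=float)
    pitch = np.asarray(pitch_deg, dtype=float)
    return np.array([
        smile.mean(),               # ratio of smiling frames
        yaw.mean(), yaw.std(),      # yaw statistics
        pitch.mean(), pitch.std(),  # pitch statistics
    ])

# toy usage with made-up per-frame values
print(session_features([0, 1, 1, 0, 1],
                       [2.0, 5.0, -1.0, 0.5, 3.0],
                       [-3.0, -2.5, 0.0, 1.0, 0.5]))
```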

  12. Alterations in audiovisual simultaneity perception in amblyopia

    OpenAIRE

    Richards, Michael D.; Goltz, Herbert C.; Wong, Agnes M. F.

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged...

  13. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  14. Vicarious Effort-Based Decision-Making in Autism Spectrum Disorders

    Science.gov (United States)

    Mosner, Maya G.; Kinard, Jessica L.; McWeeny, Sean; Shah, Jasmine S.; Markiewitz, Nathan D.; Damiano-Goodwin, Cara R.; Burchinal, Margaret R.; Rutherford, Helena J. V.; Greene, Rachel K.; Treadway, Michael T.; Dichter, Gabriel S.

    2017-01-01

    This study investigated vicarious effort-based decision-making in 50 adolescents with autism spectrum disorders (ASD) compared to 32 controls using the Effort Expenditure for Rewards Task. Participants made choices to win money for themselves or for another person. When choosing for themselves, the ASD group exhibited relatively similar patterns…

  15. Hydroxylation of nitro-(pentafluorosulfanyl)benzenes via vicarious nucleophilic substitution of hydrogen

    Czech Academy of Sciences Publication Activity Database

    Beier, Petr; Pastýříková, Tereza

    2011-01-01

    Vol. 52, No. 34 (2011), pp. 4392-4394 ISSN 0040-4039 R&D Projects: GA ČR GAP207/11/0344 Institutional research plan: CEZ:AV0Z40550506 Keywords: pentafluorosulfanyl group * vicarious nucleophilic substitution * hydroxylation Subject RIV: CC - Organic Chemistry Impact factor: 2.683, year: 2011

  16. Attitude change as a function of the observation of vicarious reinforcement and friendliness

    OpenAIRE

    Stocker-Kreichgauer, Gisela

    1982-01-01

    Attitude change as a function of the observation of vicarious reinforcement and friendliness : hostility in a debate / Lutz von Rosenstiel ; Gisela Stocker-Kreichgauer. - In: Group decision making / ed. by Gisela Stocker-Kreichgauer ... - London u.a. : Acad. Press, 1982. - pp. 241-255. - (European monographs in social psychology ; 25)

  17. Coping with Vicarious Trauma in the Aftermath of a Natural Disaster

    Science.gov (United States)

    Smith, Lauren E.; Bernal, Darren R.; Schwartz, Billie S.; Whitt, Courtney L.; Christman, Seth T.; Donnelly, Stephanie; Wheatley, Anna; Guillaume, Casta; Nicolas, Guerda; Kish, Jonathan; Kobetz, Erin

    2014-01-01

    This study documents the vicarious psychological impact of the 2010 earthquake in Haiti on Haitians living in the United States. The role of coping resources--family, religious, and community support--was explored. The results highlight the importance of family and community as coping strategies to manage such trauma.

  18. Vicarious Desensitization of Test Anxiety Through Observation of Video-taped Treatment

    Science.gov (United States)

    Mann, Jay

    1972-01-01

    Procedural variations were compared for a vicarious group treatment of test anxiety involving observation of videotapes depicting systematic desensitization of a model. The theoretical implications of the present study and the feasibility of using videotaped materials to treat test anxiety and other avoidance responses in school settings are…

  19. Vicarious Racism: A Qualitative Analysis of Experiences with Secondhand Racism in Graduate Education

    Science.gov (United States)

    Truong, Kimberly A.; Museus, Samuel D.; McGuire, Keon M.

    2016-01-01

    In this article, the authors examine the role of vicarious racism in the experiences of doctoral students of color. The researchers conducted semi-structured individual interviews with 26 doctoral students who self-reported experiencing racism and racial trauma during their doctoral studies. The analysis generated four themes that detail the…

  20. Effects of vicarious pain on self-pain perception: investigating the role of awareness

    Directory of Open Access Journals (Sweden)

    Terrighena EL

    2017-07-01

    Full Text Available Esslin L Terrighena,1,2 Ge Lu,1 Wai Ping Yuen,1 Tatia M C Lee,1–4 Kati Keuper1,2,5 1Department of Psychology, Laboratory of Neuropsychology, The University of Hong Kong, Hong Kong; 2Laboratory of Social Cognitive Affective Neuroscience, The University of Hong Kong, Hong Kong; 3The State Key Laboratory of Brain and Cognitive Sciences, Hong Kong; 4Institute of Clinical Neuropsychology, The University of Hong Kong, Hong Kong; 5Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany Abstract: The observation of pain in others may enhance or reduce self-pain, yet the boundary conditions and factors that determine the direction of such effects are poorly understood. The current study set out to show that visual stimulus awareness plays a crucial role in determining whether vicarious pain primarily activates behavioral defense systems that enhance pain sensitivity and stimulate withdrawal or appetitive systems that attenuate pain sensitivity and stimulate approach. We employed a mixed factorial design with the between-subject factors exposure time (subliminal vs optimal) and vicarious pain (pain vs no-pain images), and the within-subject factor session (baseline vs trial) to investigate how visual awareness of vicarious pain images affects subsequent self-pain in the cold-pressor test. Self-pain tolerance, intensity and unpleasantness were evaluated in a sample of 77 healthy participants. Results revealed significant interactions of exposure time and vicarious pain in all three dependent measures. In the presence of visual awareness (optimal condition), vicarious pain compared to no-pain elicited overall enhanced self-pain sensitivity, indexed by reduced pain tolerance and enhanced ratings of pain intensity and unpleasantness. Conversely, in the absence of visual awareness (subliminal condition), vicarious pain evoked decreased self-pain intensity and unpleasantness while pain tolerance remained unaffected. These

  1. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

    In the developed world, the cultural value of the audiovisual media gained legitimacy and widening acceptance after World War II, and this is what Africa still requires. There are a lot of problems in Africa, and because of this, activities such as the preservation of a historical record, especially in the audiovisual media, are seen as ...

  3. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  4. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  5. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  6. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  7. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  8. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  9. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear to which process this audiovisual interaction is related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  10. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  11. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  12. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  13. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  14. Audiovisual consumption and its social logics on the web

    OpenAIRE

    Rose Marie Santini; Juan C. Calvi

    2013-01-01

    This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved some data on the Internet global traffic of audiovisual files since 2008 to identify formats, modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as “Internet culture”.

  15. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  16. Audiovisual perception in amblyopia: A review and synthesis.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  17. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
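
    The correlation between mouth-opening area and the acoustic envelope described in this record can be approximated with standard signal-processing tools. Below is a minimal sketch, assuming a per-frame mouth-area trace and a mono audio waveform are already available; the 2-7 Hz band-pass follows the modulation range mentioned in the abstract, and none of this reproduces the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample

def bandpass(x, lo, hi, fs, order=4):
    # zero-phase band-pass filter between lo and hi Hz
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mouth_envelope_correlation(mouth_area, fps, audio, audio_fs, lo=2.0, hi=7.0):
    """Correlate mouth-opening area with the acoustic amplitude envelope.

    mouth_area : per-video-frame mouth opening area (any units), sampled at `fps`
    audio      : mono audio waveform sampled at `audio_fs`
    Both signals are band-limited to the 2-7 Hz range before correlating.
    """
    envelope = np.abs(hilbert(audio))               # acoustic amplitude envelope
    envelope = resample(envelope, len(mouth_area))  # bring envelope to video frame rate
    env_f = bandpass(envelope, lo, hi, fps)
    mouth_f = bandpass(np.asarray(mouth_area, float), lo, hi, fps)
    return np.corrcoef(mouth_f, env_f)[0, 1]

# toy usage: 10 s of synthetic, loosely coupled signals
fps, audio_fs, dur = 30, 16000, 10.0
t_v = np.arange(int(fps * dur)) / fps
t_a = np.arange(int(audio_fs * dur)) / audio_fs
mouth = 1 + 0.5 * np.sin(2 * np.pi * 4 * t_v)
audio = (1 + 0.5 * np.sin(2 * np.pi * 4 * t_a)) * np.sin(2 * np.pi * 200 * t_a)
print(mouth_envelope_correlation(mouth, fps, audio, audio_fs))
```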

  18. Feature Fusion Based Audio-Visual Speaker Identification Using Hidden Markov Model under Different Lighting Variations

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of the paper is to propose a feature fusion based Audio-Visual Speaker Identification (AVSI) system with varied conditions of illumination environments. Among the different fusion strategies, feature level fusion has been used for the proposed AVSI system, where a Hidden Markov Model (HMM) is used for learning and classification. Since the feature set contains richer information about the raw biometric data than any other level, integration at the feature level is expected to provide better authentication results. In this paper, both Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to get the audio feature vectors, and Active Shape Model (ASM) based appearance and shape facial features are concatenated to form the visual feature vectors. These combined audio and visual features are used for the feature-fusion. To reduce the dimension of the audio and visual feature vectors, the Principal Component Analysis (PCA) method is used. The VALID audio-visual database is used to measure the performance of the proposed system, where four different illumination levels of lighting conditions are considered. Experimental results focus on the significance of the proposed audio-visual speaker identification system with various combinations of audio and visual features.
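
    A compact sketch of the pipeline this record describes follows, assuming the per-frame audio (e.g. MFCC+LPCC) and visual (e.g. ASM-based) feature arrays are already extracted and frame-synchronized; hmmlearn's GaussianHMM stands in for the paper's HMM, and all dimensions and state counts are illustrative rather than the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

def train_avsi(train_data, n_pca=20, n_states=5):
    """train_data: {speaker: (audio_feats, visual_feats)} with frame-synchronized
    (n_frames, n_dims) arrays. Fuses features by concatenation, fits PCA on the
    pooled training frames, then fits one diagonal-covariance HMM per speaker."""
    fused = {spk: np.hstack([a, v]) for spk, (a, v) in train_data.items()}
    pca = PCA(n_components=n_pca).fit(np.vstack(list(fused.values())))
    models = {}
    for spk, frames in fused.items():
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        hmm.fit(pca.transform(frames))   # one model per enrolled speaker
        models[spk] = hmm
    return pca, models

def identify(pca, models, audio_feats, visual_feats):
    """Score a test utterance against each speaker model; return the best match."""
    frames = pca.transform(np.hstack([audio_feats, visual_feats]))
    return max(models, key=lambda spk: models[spk].score(frames))
```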

  19. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  20. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  1. Summarizing Audiovisual Contents of a Video Program

    Science.gov (United States)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge such as news, documentaries, seminars, etc., and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately, and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A Bipartite Graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage for both audio and visual contents of the original video without having to sacrifice either of them.
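
    The paper's Bipartite Graph-based alignment is not reproduced here, but the general idea of pairing audio-summary sentences with visual-summary shots can be sketched as an assignment problem solved with the Hungarian algorithm; the segment boundaries and the overlap-based cost below are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def overlap(seg_a, seg_b):
    """Temporal overlap in seconds between two (start, end) segments."""
    return max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))

def align_summaries(audio_segments, visual_segments):
    """Pair audio-summary sentences with visual-summary shots by maximizing
    total temporal overlap (a generic stand-in for the paper's bipartite
    alignment). Segments are (start_sec, end_sec) tuples; pairs with zero
    overlap are discarded."""
    cost = np.zeros((len(audio_segments), len(visual_segments)))
    for i, a in enumerate(audio_segments):
        for j, v in enumerate(visual_segments):
            cost[i, j] = -overlap(a, v)   # negate: the solver minimizes cost
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 0]

# toy usage
audio = [(0.0, 4.0), (10.0, 15.0)]
video = [(1.0, 3.0), (9.0, 12.0), (20.0, 25.0)]
print(align_summaries(audio, video))   # -> [(0, 0), (1, 1)]
```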

  2. Alterations in audiovisual simultaneity perception in amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
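
    The AV simultaneity window described above can be estimated by fitting a psychometric curve to the proportion of "simultaneous" responses across SOAs and reading off where the curve crosses 50% on each side. The Gaussian-shaped curve and the synthetic data below are illustrative assumptions, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_simultaneous(soa, amp, mu, sigma):
    """Gaussian-shaped psychometric function: probability of a 'simultaneous'
    judgment as a function of SOA (negative = auditory lead, positive = visual lead)."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

def simultaneity_window(soas_ms, prop_simult):
    """Fit the curve and return (auditory-lead bound, visual-lead bound, width),
    defined as the SOAs where the fitted curve crosses 0.5.
    Assumes the fitted peak exceeds 0.5."""
    (amp, mu, sigma), _ = curve_fit(p_simultaneous, soas_ms, prop_simult,
                                    p0=[1.0, 0.0, 100.0])
    # solve amp * exp(-x^2 / (2 sigma^2)) = 0.5 for x
    half_width = sigma * np.sqrt(2 * np.log(2 * amp))
    return mu - half_width, mu + half_width, 2 * half_width

# toy usage with synthetic judgment proportions
soas = np.array([-450, -300, -150, -50, 0, 50, 150, 300, 450], dtype=float)
props = np.array([0.05, 0.15, 0.55, 0.85, 0.95, 0.9, 0.7, 0.25, 0.05])
print(simultaneity_window(soas, props))
```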

  3. Alterations in audiovisual simultaneity perception in amblyopia.

    Directory of Open Access Journals (Sweden)

    Michael D Richards

    Full Text Available Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.

  4. Vicarious Effort-Based Decision-Making in Autism Spectrum Disorders.

    Science.gov (United States)

    Mosner, Maya G; Kinard, Jessica L; McWeeny, Sean; Shah, Jasmine S; Markiewitz, Nathan D; Damiano-Goodwin, Cara R; Burchinal, Margaret R; Rutherford, Helena J V; Greene, Rachel K; Treadway, Michael T; Dichter, Gabriel S

    2017-10-01

    This study investigated vicarious effort-based decision-making in 50 adolescents with autism spectrum disorders (ASD) compared to 32 controls using the Effort Expenditure for Rewards Task. Participants made choices to win money for themselves or for another person. When choosing for themselves, the ASD group exhibited relatively similar patterns of effort-based decision-making across reward parameters. However, when choosing for another person, the ASD group demonstrated relatively decreased sensitivity to reward magnitude, particularly in the high magnitude condition. Finally, patterns of responding in the ASD group were related to individual differences in consummatory pleasure capacity. These findings indicate atypical vicarious effort-based decision-making in ASD and more broadly add to the growing body of literature addressing social reward processing deficits in ASD.

  5. Social work in oncology-managing vicarious trauma-the positive impact of professional supervision.

    Science.gov (United States)

    Joubert, Lynette; Hocking, Alison; Hampson, Ralph

    2013-01-01

    This exploratory study focused on the experience and management of vicarious trauma in a team of social workers (N = 16) at a specialist cancer hospital in Melbourne. Respondents completed the Traumatic Stress Institute Belief Scale (TSIBS) and the Professional Quality of Life Scale (ProQOL), and participated in four focus groups. The results from the TSIBS and ProQOL scales confirm that there is stress associated with the social work role within a cancer service, as demonstrated by the high stress-related scores. However, at the same time, the results indicated a high level of satisfaction, which acted as a mitigating factor. The study also highlighted the importance of supervision and management support. A model for clinical social work supervision is proposed to reduce the risks associated with vicarious trauma.

  6. Games and (Preparation for Future) Learning

    Science.gov (United States)

    Hammer, Jessica; Black, John

    2009-01-01

    What makes games effective for learning? The authors argue that games provide vicarious experiences for players, which then amplify the effects of future, formal learning. However, not every game succeeds in doing so! Understanding why some games succeed and others fail at this task means investigating both a given game's design and the…

  7. Global biogeography of scaly tree ferns (Cyatheaceae): evidence for Gondwanan vicariance and limited transoceanic dispersal

    OpenAIRE

    Korall, Petra; Pryer, Kathleen

    2014-01-01

    Aim: Scaly tree ferns, Cyatheaceae, are a well-supported group of mostly tree-forming ferns found throughout the tropics, the subtropics and the south-temperate zone. Fossil evidence shows that the lineage originated in the Late Jurassic period. We reconstructed large-scale historical biogeographical patterns of Cyatheaceae and tested the hypothesis that some of the observed distribution patterns are in fact compatible, in time and space, with a vicariance scenario related to the break-up of G...

  8. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  9. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that responses to all stimuli were significantly delayed in PD compared to NC, and that responses to audiovisual stimuli were significantly faster than responses to unimodal stimuli in both NC and PD. However, audiovisual integration was absent in PD, whereas it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results show that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggest that abnormal audiovisual integration might be a potential early manifestation of PD.
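
    The "race model" mentioned above usually refers to Miller's race model inequality, which bounds the audiovisual response-time distribution by the sum of the two unimodal distributions. Below is a minimal sketch of that test on hypothetical response times; the data, grid, and variable names are illustrative, not the study's values.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a grid of times (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, t_grid=None):
    """Miller's race model inequality: G_AV(t) <= G_A(t) + G_V(t).
    Returns the maximum positive violation (evidence of multisensory
    integration) and the grid on which it was evaluated."""
    if t_grid is None:
        all_rts = np.concatenate([rt_a, rt_v, rt_av])
        t_grid = np.linspace(all_rts.min(), all_rts.max(), 200)
    g_a, g_v, g_av = ecdf(rt_a, t_grid), ecdf(rt_v, t_grid), ecdf(rt_av, t_grid)
    bound = np.minimum(g_a + g_v, 1.0)
    return (g_av - bound).max(), t_grid

# toy usage with made-up response times (ms)
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 100)
rt_v = rng.normal(400, 60, 100)
rt_av = rng.normal(340, 50, 100)   # faster AV responses may violate the bound
print(race_model_violation(rt_a, rt_v, rt_av)[0])
```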

  10. Vicarious absolute radiometric calibration of GF-2 PMS2 sensor using permanent artificial targets in China

    Science.gov (United States)

    Liu, Yaokai; Li, Chuanrong; Ma, Lingling; Wang, Ning; Qian, Yonggang; Tang, Lingli

    2016-10-01

    GF-2, launched on 19 August 2014, is one of the high-resolution land resource observing satellites of the Chinese GF series satellite plan. Evaluating the radiometric performance of the onboard optical panchromatic and multispectral (PMS2) sensor of the GF-2 satellite is very important for further application of the data, and vicarious absolute radiometric calibration is one of the most useful ways to monitor the radiometric performance of onboard optical sensors. In this study, the traditional reflectance-based method was used for vicarious radiometric calibration of the onboard PMS2 sensor of the GF-2 satellite, using three permanent artificial targets (black, gray, and white) located at the AOE-Baotou site in China. A vicarious field calibration campaign was carried out at the AOE-Baotou calibration site on 22 April 2016, and the absolute radiometric calibration coefficients were determined from in situ measured atmospheric parameters and the surface reflectance of the permanent artificial calibration targets. The TOA radiance of a selected desert area predicted with the derived calibration coefficients was compared with that obtained from the officially distributed calibration coefficients. The comparison shows good consistency, with a mean relative difference of less than 5% for the multispectral channels. An uncertainty analysis was also carried out, and a total uncertainty of 3.87% was determined for the TOA radiance.
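
    In the reflectance-based method, the band TOA radiances over the targets are predicted with a radiative transfer code from the in situ reflectance and atmospheric measurements; that step is assumed to be done already in the sketch below, which only illustrates the final derivation of per-band gains from mean target DNs and the relative-difference check against official coefficients. All numbers are illustrative, not GF-2 values.

```python
import numpy as np

def calibration_gains(predicted_radiance, mean_dn, offset=0.0):
    """Per-band absolute calibration gains for a linear sensor model
    L = gain * DN + offset, with the offset assumed known (zero here).

    predicted_radiance : band TOA radiance over the target, e.g. from a
                         radiative-transfer run on in situ measurements
    mean_dn            : mean digital number over the same target ROI
    """
    return (np.asarray(predicted_radiance) - offset) / np.asarray(mean_dn)

def mean_relative_difference(gains_new, gains_official, dn):
    """Mean relative difference between TOA radiances predicted with two sets
    of coefficients over a validation scene (e.g. a desert area)."""
    l_new = np.asarray(gains_new) * np.asarray(dn)
    l_off = np.asarray(gains_official) * np.asarray(dn)
    return np.mean(np.abs(l_new - l_off) / l_off)

# toy usage with illustrative numbers for four multispectral bands
pred_L = np.array([95.0, 80.0, 60.0, 45.0])        # W m-2 sr-1 um-1 (made up)
mean_dn = np.array([620.0, 540.0, 480.0, 400.0])
gains = calibration_gains(pred_L, mean_dn)
official = gains * np.array([1.03, 0.98, 1.02, 0.96])   # pretend official values
print(gains, mean_relative_difference(gains, official, np.array([500, 450, 400, 350])))
```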

  11. Influencing Republicans' and Democrats' attitudes toward Obamacare: Effects of imagined vicarious cognitive dissonance on political attitudes.

    Science.gov (United States)

    Cooper, Joel; Feldman, Lauren A; Blackman, Shane F

    2018-04-16

    The field of experimental social psychology is appropriately interested in using novel theoretical approaches to implement change in the social world. In the current study, we extended cognitive dissonance theory by creating a new framework of social influence: imagined vicarious dissonance. We used the framework to influence an important and controversial political attitude: U.S. citizens' support for the Affordable Care Act (ACA). Thirty-six Republicans and 84 Democrats were asked to imagine fellow Republicans and Democrats, respectively, making attitude-discrepant statements under high and low choice conditions about support for the ACA. The data showed that vicarious dissonance, established by imagining a group member making a counterattitudinal speech under high-choice conditions (as compared to low-choice conditions), resulted in greater support for the Act by Republicans and marginally diminished support by Democrats. The results suggest a promising role for the application of vicarious dissonance theory to relevant societal issues and for further understanding the relationship of dissonance and people's identification with their social groups.

  12. Both Direct and Vicarious Experiences of Nature Affect Children's Willingness to Conserve Biodiversity.

    Science.gov (United States)

    Soga, Masashi; Gaston, Kevin J; Yamaura, Yuichi; Kurisu, Kiyo; Hanaki, Keisuke

    2016-05-25

    Children are becoming less likely to have direct contact with nature. This ongoing loss of human interactions with nature, the extinction of experience, is viewed as one of the most fundamental obstacles to addressing global environmental challenges. However, the consequences for biodiversity conservation have been examined very little. Here, we conducted a questionnaire survey of elementary schoolchildren and investigated effects of the frequency of direct (participating in nature-based activities) and vicarious experiences of nature (reading books or watching TV programs about nature and talking about nature with parents or friends) on their affective attitudes (individuals' emotional feelings) toward and willingness to conserve biodiversity. A total of 397 children participated in the surveys in Tokyo. Children's affective attitudes and willingness to conserve biodiversity were positively associated with the frequency of both direct and vicarious experiences of nature. Path analysis showed that effects of direct and vicarious experiences on children's willingness to conserve biodiversity were mediated by their affective attitudes. This study demonstrates that children who frequently experience nature are likely to develop greater emotional affinity to and support for protecting biodiversity. We suggest that children should be encouraged to experience nature and be provided with various types of these experiences.

  13. Both Direct and Vicarious Experiences of Nature Affect Children’s Willingness to Conserve Biodiversity

    Directory of Open Access Journals (Sweden)

    Masashi Soga

    2016-05-01

    Full Text Available Children are becoming less likely to have direct contact with nature. This ongoing loss of human interactions with nature, the extinction of experience, is viewed as one of the most fundamental obstacles to addressing global environmental challenges. However, the consequences for biodiversity conservation have been examined very little. Here, we conducted a questionnaire survey of elementary schoolchildren and investigated effects of the frequency of direct (participating in nature-based activities) and vicarious experiences of nature (reading books or watching TV programs about nature and talking about nature with parents or friends) on their affective attitudes (individuals’ emotional feelings) toward and willingness to conserve biodiversity. A total of 397 children participated in the surveys in Tokyo. Children’s affective attitudes and willingness to conserve biodiversity were positively associated with the frequency of both direct and vicarious experiences of nature. Path analysis showed that effects of direct and vicarious experiences on children’s willingness to conserve biodiversity were mediated by their affective attitudes. This study demonstrates that children who frequently experience nature are likely to develop greater emotional affinity to and support for protecting biodiversity. We suggest that children should be encouraged to experience nature and be provided with various types of these experiences.

  14. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  15. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
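
    "Partialing out nonverbal IQ" in the analysis above corresponds to a partial correlation, which can be computed by regressing the covariate out of both variables and correlating the residuals. A minimal sketch follows, with synthetic data standing in for the incongruent-audio scores, onset ages, and IQ values.

```python
import numpy as np

def partial_corr(x, y, covariate):
    """Partial correlation of x and y controlling for one covariate:
    regress the covariate out of each variable (OLS with an intercept)
    and correlate the residuals."""
    covariate = np.asarray(covariate, dtype=float)
    z = np.column_stack([np.ones_like(covariate), covariate])

    def residuals(v):
        beta, *_ = np.linalg.lstsq(z, np.asarray(v, dtype=float), rcond=None)
        return np.asarray(v, dtype=float) - z @ beta

    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# toy usage: incongruent-audio scores vs onset age, controlling for nonverbal IQ
rng = np.random.default_rng(1)
iq = rng.normal(100, 15, 57)
onset_age = rng.normal(8, 3, 57)
audio_score = 0.5 * onset_age - 0.01 * iq + rng.normal(0, 1, 57)
print(partial_corr(audio_score, onset_age, iq))
```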

  16. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  17. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  18. Employers' Statutory Vicarious Liability in Terms of the Protection of Personal Information Act

    Directory of Open Access Journals (Sweden)

    Daleen Millard

    2016-07-01

    Full Text Available A person whose privacy has been infringed upon through the unlawful, culpable processing of his or her personal information can sue the infringer's employer based on vicarious liability or institute action based on the Protection of Personal Information Act 4 of 2013 (POPI). Section 99(1) of POPI provides a person (a "data subject") whose privacy has been infringed upon with the right to institute a civil action against the responsible party. POPI defines the responsible party as the person who determines the purpose of and means for the processing of the personal information of data subjects. Although POPI does not equate a responsible party to an employer, the term "responsible party" is undoubtedly a synonym for "employer" in this context. By holding an employer accountable for its employees' unlawful processing of a data subject's personal information, POPI creates a form of statutory vicarious liability. Since the defences available to an employer at common law and developed by case law differ from the statutory defences available to an employer in terms of POPI, it is necessary to compare the impact this new statute has on employers. From a risk perspective, employers must be aware of the serious implications of POPI. The question that arises is whether the Act perhaps takes matters too far. This article takes a critical look at the statutory defences available to an employer in vindication of a vicarious liability action brought by a data subject in terms of section 99(1) of POPI. It compares the defences found in section 99(2) of POPI and the common-law defences available to an employer fending off a delictual claim founded on the doctrine of vicarious liability. To support the argument that the statutory vicarious liability created by POPI is too harsh, the defences contained in section 99(2) of POPI are further analogised with those available to an employer in terms of section 60(4) of the Employment Equity Act 55 of 1998 (EEA) and other

  19. Teleconferences and Audiovisual Materials in Earth Science Education

    Science.gov (United States)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacan 04510, Mexico. As stated in the special session description, 21st century undergraduate education has access to resources and experiences that go beyond university classrooms. However, in some cases resources may go largely unused, and a number of factors may be cited, such as logistic problems, restricted internet and telecommunication service access, misinformation, etc. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audiovisual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses by teleconference require student and teacher effort without physical contact, but participants have access to multimedia that supports the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of natural phenomena integral to the Earth sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We will present specific examples of the experiences that we have had at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  20. The Effects of Audio-Visual Recorded and Audio Recorded Listening Tasks on the Accuracy of Iranian EFL Learners' Oral Production

    Science.gov (United States)

    Drood, Pooya; Asl, Hanieh Davatgari

    2016-01-01

    The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio-recorded materials have been widely used by teachers and…

  1. O vídeo didático "Conhecendo o Solo" e a contribuição desse recurso audiovisual no processo de aprendizagem no ensino fundamental Didactic video "Knowing the Soil" and its contribution to learning process in elementary school

    Directory of Open Access Journals (Sweden)

    Olinda Soares Fernandes de Jesus

    2013-04-01

    Full Text Available O uso de recursos audiovisuais no ensino de solos, como estímulo para os alunos, pode auxiliar na construção de um conhecimento crítico e reflexivo. Este trabalho objetivou analisar a contribuição do vídeo "Conhecendo o Solo" no ensino e na aprendizagem dessa temática no nível fundamental. Com o intuito de estimular os alunos a perceber a importância dos solos nos ambientes, esse vídeo foi aplicado como conteúdo de ensino. Em seguida, foi aplicado um questionário, em que os alunos descreveram as principais ideias transmitidas por esse, especificando os pontos positivos e negativos do recurso utilizado. A análise do questionário revelou que o uso do vídeo foi um facilitador da aprendizagem. Porém, as respostas dos estudantes indicaram que alguns aspectos necessitam de adequações, como o dinamismo, a interatividade, a quantidade de informações e a narração. Mesmo assim, o recurso foi classificado pela maioria dos alunos como adequado, e o repertório de conteúdos apresentou similaridade com o exposto no vídeo, caracterizando-o como um recurso de influência positiva no processo de ensino e aprendizagem.The use of audiovisual resources in soil teaching, as a stimulus for students, can be useful to develop students' critical and reflexive knowledge. This study aimed to analyze the contribution of the video "Conhecendo o Solo" ("Knowing the Soil" in teaching and learning about this subject in elementary school. In order to stimulate students to realize the importance of soils in the environment, the video was used as teaching content. Then, some questions were applied in which the students described the main ideas it conveyed and specified the positive and negative points of this resource. An analysis of the questions showed that the use of the video was considered a facilitator of learning. However, the students' responses indicated that some aspects need to be adjusted, such as dynamism, interactivity, the amount of

  2. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  3. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  4. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  5. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

    Full Text Available This paper presents the results of a study on the process of acquiring interpretative skills to decode audiovisual texts among adolescents and youth. Based on the conception of such competence as the ability to understand the meanings connoted beneath the literal discourses of audiovisual texts, this study compared two variables: the acquisition of such skills from personal and social experience in the consumption of audiovisual products (which is affected by age differences) and, on the other hand, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  6. Exposure to audiovisual programs as sources of authentic language ...

    African Journals Online (AJOL)

    Exposure to audiovisual programs as sources of authentic language input and second ... Southern African Linguistics and Applied Language Studies ... The findings of the present research contribute more insights on the type and amount of ...

  7. On-line repository of audiovisual material on feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  8. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our...... knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is if perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity...... on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less...

  9. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  10. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  11. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

    Full Text Available The use of cutting-edge technologies such as wearable devices to control reactive audiovisual systems is rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data-glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture of the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in "As sete mulheres de Jeremias Epicentro", a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.
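
    The record above states only that gestural data are routed to the audiovisual systems via the Open Sound Control (OSC) protocol. As a rough illustration of that kind of architecture (not the authors' implementation), the sketch below forwards hypothetical glove readings as OSC messages using the third-party python-osc package; the address patterns, host, and port are invented.

```python
# Illustrative sketch (not the authors' implementation): sending glove gesture
# data to an audiovisual engine over Open Sound Control (OSC).
# Assumes the third-party "python-osc" package; addresses, host and port are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # audiovisual engine listening for OSC

def send_gesture(finger_flex, wrist_angle):
    """Forward one frame of (hypothetical) sensor readings as OSC messages."""
    client.send_message("/glove/flex", finger_flex)   # list of flex-sensor values
    client.send_message("/glove/wrist", wrist_angle)  # single float, degrees

send_gesture([0.2, 0.8, 0.5, 0.1, 0.0], 37.5)
```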

  12. Improving Classroom Learning by Collaboratively Observing Human Tutoring Videos while Problem Solving

    Science.gov (United States)

    Craig, Scotty D.; Chi, Michelene T. H.; VanLehn, Kurt

    2009-01-01

    Collaboratively observing tutoring is a promising method for observational learning (also referred to as vicarious learning). This method was tested in the Pittsburgh Science of Learning Center's Physics LearnLab, where students were introduced to physics topics by observing videos while problem solving in Andes, a physics tutoring system.…

  13. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    Science.gov (United States)

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.
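
    As a minimal illustration of how "spatial accuracy of recalled object-locations" can be quantified, the sketch below computes the mean Euclidean distance between recalled and true map positions; the object names and coordinates are invented, and the study's actual scoring procedure is not described in this record.

```python
# Minimal sketch of one plausible accuracy measure for recalled object-locations:
# mean Euclidean distance between each recalled position and its true map position.
# The coordinates below are invented for illustration.
import math

true_locations = {"harbour": (12.0, 48.5), "church": (30.2, 15.8)}
recalled_locations = {"harbour": (13.5, 47.0), "church": (28.9, 17.1)}

def mean_location_error(truth, recall):
    errors = [math.dist(truth[name], recall[name]) for name in truth]
    return sum(errors) / len(errors)

print(f"mean placement error: {mean_location_error(true_locations, recalled_locations):.2f} map units")
```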

  14. Audiovisual consumption and its social logics on the web

    Directory of Open Access Journals (Sweden)

    Rose Marie Santini

    2013-06-01

    Full Text Available This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved some data on the Internet global traffic of audiovisual files since 2008 to identify formats, modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices which are dominant among users and its relation to what we designate as "Internet culture".

  15. Narrativa audiovisual. Estrategias y recursos [Reseña]

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

    Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  16. [Audio-visual communication in the history of psychiatry].

    Science.gov (United States)

    Farina, B; Remoli, V; Russo, F

    1993-12-01

    The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings and the first daguerreotype prints to cinematography and modern audiovisual systems, they observed an increasing diffusion of new communication techniques in psychiatry, and described the use of the different techniques in psychiatric practice. The article ends with a brief review of the current applications of audiovisual media in therapy, training, teaching, and research.

  17. Plan empresa productora de audiovisuales : La Central Audiovisual y Publicidad

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    This document presents the business plan for La Central Publicidad y Audiovisual, a company dedicated to the pre-production, production, and post-production of audiovisual material. The company will be located in the city of Cali and its target market comprises the city's different types of companies, including small, medium-sized, and large enterprises.

  18. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  19. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls, and audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
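
    The record states only that cumulative distribution functions (CDFs) of response times were used to measure audiovisual integration. A common analysis of this kind, assumed here purely for illustration and not necessarily the authors' exact procedure, is a race-model-style comparison between the CDF of audiovisual response times and the summed unimodal CDFs, sketched below with invented data.

```python
# Hedged sketch of a race-model-style CDF comparison for response times.
# Generic illustration, not the exact analysis of the cited study; RT samples are invented.
import numpy as np

def ecdf(samples, t):
    """Empirical cumulative distribution: P(RT <= t) evaluated on a grid t."""
    samples = np.asarray(samples)
    return np.mean(samples[:, None] <= t, axis=0)

rt_audio = [310, 295, 330, 340, 305]   # ms, hypothetical unimodal auditory RTs
rt_visual = [320, 360, 300, 345, 335]  # ms, hypothetical unimodal visual RTs
rt_av = [260, 280, 270, 290, 265]      # ms, hypothetical audiovisual RTs

t_grid = np.arange(250, 400, 10)
race_bound = np.minimum(ecdf(rt_audio, t_grid) + ecdf(rt_visual, t_grid), 1.0)
violation = ecdf(rt_av, t_grid) - race_bound  # positive values suggest integration
print(violation.max())
```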

  20. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL... audiovisual, cartographic, and related records? The disposition instructions should also provide that...

  1. Lousa Digital Interativa: avaliação da interação didática e proposta de aplicação de narrativa audiovisual / Interactive White Board – IWB: assessment in interaction didactic and audiovisual narrative proposal

    Directory of Open Access Journals (Sweden)

    Francisco García García

    2011-04-01

    Full Text Available O uso de audiovisual em sala de aula não garante uma eficácia na aprendizagem, mas para os estudantes é um elemento interessante e ainda atrativo. Este trabalho — uma aproximação de duas pesquisas: a primeira apresenta a importância da interação didática com a LDI e a segunda, uma lista de elementos de narrativa audiovisual que podem ser aplicados em sala de aula — propõe o domínio de elementos da narrativa audiovisual como uma possibilidade teórica para o professor que quer produzir um conteúdo audiovisual para aplicar em plataformas digitais, como é o caso da Lousa Digital Interativa - LDI. O texto está divido em três partes: a primeira apresenta os conceitos teóricos das duas pesquisas, a segunda discute os resultados de ambas e, por fim, a terceira parte propõe uma prática pedagógica de interação didática com elementos de narrativa audiovisual para uso em LDI. AbstractThe audiovisual use in classroom does not guarantee effectiveness in learning, but for students is an interesting element and still attractive. This work suggests that the field of audiovisual elements of the narrative is a theoretical possibility for the teacher who wants to produce an audiovisual content to apply to digital platforms, such as the Interactive Digital Whiteboard - LDI. This work is an approximation of two doctoral theses, the first that shows the importance of interaction with the didactic and the second LDI provides a list of audiovisual narrative elements that can be applied in the classroom. This work is divided into three parts, the first part presents the theoretical concepts of the two surveys, the second part discusses the results of two surveys and finally the third part, proposes a practical pedagogical didactic interaction with audiovisual narrative elements to use in LDI.

  2. Global biogeography of scaly tree ferns (Cyatheaceae): evidence for Gondwanan vicariance and limited transoceanic dispersal.

    Science.gov (United States)

    Korall, Petra; Pryer, Kathleen M

    2014-02-01

    Scaly tree ferns, Cyatheaceae, are a well-supported group of mostly tree-forming ferns found throughout the tropics, the subtropics and the south-temperate zone. Fossil evidence shows that the lineage originated in the Late Jurassic period. We reconstructed large-scale historical biogeographical patterns of Cyatheaceae and tested the hypothesis that some of the observed distribution patterns are in fact compatible, in time and space, with a vicariance scenario related to the break-up of Gondwana. Location: tropics, subtropics and south-temperate areas of the world. The historical biogeography of Cyatheaceae was analysed in a maximum likelihood framework using Lagrange. The 78 ingroup taxa are representative of the geographical distribution of the entire family. The phylogenies that served as a basis for the analyses were obtained by Bayesian inference analyses of mainly previously published DNA sequence data using MrBayes. Lineage divergence dates were estimated in a Bayesian Markov chain Monte Carlo framework using BEAST. Cyatheaceae originated in the Late Jurassic in either South America or Australasia. Following a range expansion, the ancestral distribution of the marginate-scaled clade included both these areas, whereas Sphaeropteris is reconstructed as having its origin only in Australasia. Within the marginate-scaled clade, reconstructions of early divergences are hampered by the unresolved relationships among the Alsophila, Cyathea and Gymnosphaera lineages. Nevertheless, it is clear that the occurrence of the Cyathea and Sphaeropteris lineages in South America may be related to vicariance, whereas transoceanic dispersal needs to be inferred for the range shifts seen in Alsophila and Gymnosphaera. The evolutionary history of Cyatheaceae involves both Gondwanan vicariance scenarios and long-distance dispersal events. The number of transoceanic dispersals reconstructed for the family is rather few when compared with other fern lineages. We suggest that a causal

  3. Microscale vicariance and diversification of Western Balkan caddisflies linked to karstification.

    Science.gov (United States)

    Previšić, Ana; Schnitzler, Jan; Kučinić, Mladen; Graf, Wolfram; Ibrahimi, Halil; Kerovec, Mladen; Pauls, Steffen U

    2014-03-01

    The karst areas in the Dinaric region of the Western Balkan Peninsula are a hotspot of freshwater biodiversity. Many investigators have examined diversification of the subterranean freshwater fauna in these karst systems. However, diversification of surface-water fauna remains largely unexplored. We assessed local and regional diversification of surface-water species in karst systems and asked whether patterns of population differentiation could be explained by dispersal-diversification processes or allopatric diversification following karst-related microscale vicariance. We analyzed mitochondrial cytochrome c oxidase subunit I (mtCOI) sequence data of 4 caddisfly species (genus Drusus) in a phylogeographic framework to assess local and regional population genetic structure and Pliocene/Pleistocene history. We used BEAST software to assess the timing of intraspecific diversification of the target species. We compared climate envelopes of the study species and projected climatically suitable areas during the last glacial maximum (LGM) to assess differences in the species climatic niches and infer potential LGM refugia. The haplotype distribution of the 4 species (324 individuals from 32 populations) was characterized by strong genetic differentiation with few haplotypes shared among populations (16%) and deep divergence among populations of the 3 endemic species, even at local scales. Divergence among local populations of endemics often exceeded divergence among regional and continental clades of the widespread D. discolor. Major divergences among regional populations dated to 2.0 to 0.5 Mya. Species distribution model projections and genetic structure suggest that the endemic species persisted in situ and diversified locally throughout multiple Pleistocene climate cycles. The pattern for D. discolor was different and consistent with multiple invasions into the region. Patterns of population genetic structure and diversification were similar for the 3 regional

  4. Direct and vicarious violent victimization and juvenile delinquency: an application of general strain theory.

    Science.gov (United States)

    Lin, Wen-Hsu; Cochran, John K; Mieczkowski, Thomas

    2011-01-01

    Using a national probability sample of adolescents (12–17), this study applies general strain theory to how violent victimization, vicarious violent victimization, and dual violent victimization affect juvenile violent/property crime and drug use. In addition, the mediating effect and moderating effect of depression, low social control, and delinquent peer association on the victimization–delinquency relationship is also examined. Based on SEM analyses and contingency tables, the results indicate that all three types of violent victimization have significant and positive direct effects on violent/property crime and drug use. In addition, the expected mediating effects and moderating effects are also found. Limitations and future directions are discussed.

  5. Vicarious Radiometric Calibration of a Multispectral Camera on Board an Unmanned Aerial System

    Directory of Open Access Journals (Sweden)

    Susana Del Pozo

    2014-02-01

    Full Text Available Combinations of unmanned aerial platforms and multispectral sensors are considered low-cost tools for detailed spatial and temporal studies addressing spectral signatures, opening a broad range of applications in remote sensing. Thus, a key step in this process is knowledge of multi-spectral sensor calibration parameters in order to identify the physical variables collected by the sensor. This paper discusses the radiometric calibration process by means of a vicarious method applied to a high-spatial resolution unmanned flight using low-cost artificial and natural covers as control and check surfaces, respectively.
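
    The record does not spell out the calibration equations. One widely used vicarious approach consistent with this description, assumed here only for illustration, is the empirical line method: fit a linear relation between the camera's digital numbers and field-measured reflectance of the control surfaces, then validate it on the independent check surfaces.

```python
# Sketch of an empirical-line vicarious calibration (an assumption; the record
# does not specify the exact method). DN and reflectance values are invented.
import numpy as np

# Field-measured reflectance of control targets and the camera's raw digital numbers
control_reflectance = np.array([0.05, 0.22, 0.48, 0.70])
control_dn = np.array([410.0, 1650.0, 3480.0, 5030.0])

gain, offset = np.polyfit(control_dn, control_reflectance, 1)  # linear fit: reflectance = gain*DN + offset

def dn_to_reflectance(dn):
    return gain * dn + offset

# Validate on an independent check surface
check_dn, check_reflectance = 2400.0, 0.33
print(f"predicted {dn_to_reflectance(check_dn):.3f} vs measured {check_reflectance:.3f}")
```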

  6. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Causal inference of asynchronous audiovisual speech

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2013-11-01

    Full Text Available During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
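
    As a generic illustration of the causal-inference step described above (deciding whether voice and face share a common cause from their asynchrony), the sketch below computes a Bayesian posterior for a common cause. The prior, the Gaussian likelihood width, and the uniform alternative are illustrative assumptions, not the fitted values of the cited model.

```python
# Generic Bayesian causal-inference sketch: posterior probability that an
# audiovisual asynchrony arose from a common cause. Parameters are illustrative.
import math

def p_common_cause(asynchrony_ms, prior_common=0.7,
                   sigma_common=60.0, asynchrony_range=1000.0):
    # Likelihood under a common cause: small asynchronies expected (Gaussian around 0).
    like_common = math.exp(-0.5 * (asynchrony_ms / sigma_common) ** 2) / (
        sigma_common * math.sqrt(2 * math.pi))
    # Likelihood under independent causes: asynchrony roughly uniform over a wide range.
    like_independent = 1.0 / asynchrony_range
    num = like_common * prior_common
    return num / (num + like_independent * (1 - prior_common))

for dt in (0, 100, 300):
    print(dt, round(p_common_cause(dt), 3))
```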

  9. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    Science.gov (United States)

    Moirand, Sophie

    1974-01-01

    This article examines the usefulness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  10. Challenges and opportunities for audiovisual diversity in the Internet

    Directory of Open Access Journals (Sweden)

    Trinidad García Leiva

    2017-06-01

    Full Text Available http://dx.doi.org/10.5007/2175-7984.2017v16n35p132 At the gates of the first quarter of the XXI century, nobody doubts that the value chain of the audiovisual industry has undergone important transformations. The digital era presents opportunities for cultural enrichment as well as new challenges. After presenting a general portrait of the audiovisual industries in the digital era, taking the Spanish case as a point of departure and paying attention to players and logics in tension, this paper presents some notes about the advantages and disadvantages that exist for the diversity of audiovisual production, distribution and consumption online. It is argued here that the diversity of the audiovisual sector online is not guaranteed, because the formula that has made some players successful and powerful is based on walled-garden models to monetize contents (which, besides, add restrictions to their reproduction and circulation by and among consumers). The final objective is to present some ideas about the elements that prevent the strengthening of the diversity of the audiovisual industry in the digital scenario. Barriers to overcome are classified as technological, financial, social, legal and political.

  11. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive...... from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual...... integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  12. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, the case for which the network is already set up, the aud...
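
    The record does not give the functional form of the audiovisual quality module. Parametric models of this kind often combine the per-modality scores with linear and interaction terms; the sketch below shows that generic form with placeholder coefficients, not the coefficients of the model described here.

```python
# Sketch of how an audiovisual quality module might combine per-modality scores.
# The linear-plus-interaction form is a common choice in parametric quality models;
# the coefficients are placeholders, not values from the model described in the record.
def audiovisual_quality(audio_mos, video_mos,
                        a=0.25, b=0.15, c=0.20, d=0.10):
    q = a + b * audio_mos + c * video_mos + d * audio_mos * video_mos
    return max(1.0, min(5.0, q))  # clip to the 1-5 MOS scale

print(audiovisual_quality(audio_mos=4.2, video_mos=3.1))
```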

  13. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole. 

  14. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  15. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sport audiovisual documentation in the information systems of national, regional, and local television networks is analysed. The paper follows the documentary chain through which sport audiovisual information is produced, analysing each of its parameters and offering a series of recommendations and norms for the preparation of the sport audiovisual record. Evidently, audiovisual sport documentation differs i...

  16. Social identity shapes social valuation: evidence from prosocial behavior and vicarious reward.

    Science.gov (United States)

    Hackel, Leor M; Zaki, Jamil; Van Bavel, Jay J

    2017-08-01

    People frequently engage in more prosocial behavior toward members of their own groups, as compared to other groups. Such group-based prosociality may reflect either strategic considerations concerning one's own future outcomes or intrinsic value placed on the outcomes of in-group members. In a functional magnetic resonance imaging experiment, we examined vicarious reward responses to witnessing the monetary gains of in-group and out-group members, as well as prosocial behavior towards both types of individuals. We found that individuals' investment in their group-a motivational component of social identification-tracked the intensity of their responses in ventral striatum to in-group (vs out-group) members' rewards, as well as their tendency towards group-based prosociality. Individuals with strong motivational investment in their group preferred rewards for an in-group member, whereas individuals with low investment preferred rewards for an out-group member. These findings suggest that the motivational importance of social identity-beyond mere similarity to group members-influences vicarious reward and prosocial behavior. More broadly, these findings support a theoretical framework in which salient social identities can influence neural representations of subjective value, and suggest that social preferences can best be understood by examining the identity contexts in which they unfold. © The Author (2017). Published by Oxford University Press.

  17. Out of Africa: Miocene Dispersal, Vicariance, and Extinction within Hyacinthaceae Subfamily Urgineoideae

    Institute of Scientific and Technical Information of China (English)

    Syed Shujait Ali; Martin Pfosser; Wolfgang Wetschnig; Mario Martínez-Azorín; Manuel B. Crespo; Yan Yu

    2013-01-01

    Disjunct distribution patterns in plant lineages are usually explained according to three hypotheses: vicariance, geodispersal, and long-distance dispersal. The role of these hypotheses is tested in Urgineoideae (Hyacinthaceae), a subfamily disjunctly distributed in Africa, Madagascar, India, and the Mediterranean region. The potential ancestral range, dispersal routes, and factors responsible for the current distribution in Urgineoideae are investigated using divergence time estimations. Urgineoideae originated in Southern Africa approximately 48.9 Mya. Two independent dispersal events in the Western Mediterranean region possibly occurred during Early Oligocene and Miocene (29.9-8.5 Mya) via Eastern and Northwestern Africa. A dispersal from Northwestern Africa to India could have occurred between 16.3 and 7.6 Mya. Vicariance and extinction events occurred approximately 21.6 Mya. Colonization of Madagascar occurred between 30.6 and 16.6 Mya, after a single transoceanic dispersal event from Southern Africa. The current disjunct distributions of Urgineoideae are not satisfactorily explained by Gondwana fragmentation or dispersal via boreotropical forests, due to the younger divergence time estimates. The flattened winged seeds of Urgineoideae could have played an important role in long-distance dispersal by strong winds and big storms, whereas geodispersal could have also occurred from Southern Africa to Asia and the Mediterranean region via the so-called arid and high-altitude corridors.

  18. Asymmetries in Experiential and Vicarious Feedback: Lessons from the Hiring and Firing of Baseball Managers

    Directory of Open Access Journals (Sweden)

    David Strang

    2014-05-01

    Full Text Available We examine experiential and vicarious feedback in the hiring and firing of baseball managers. Realized outcomes play a large role in both decisions; the probability that a manager will be fired is a function of the team’s win–loss record, and a manager is quicker to be rehired if his teams had won more in the past. There are substantial asymmetries, however, in the fine structure of the two feedback functions. The rate at which managers are fired is powerfully shaped by recent outcomes, falls with success and rises with failure, and adjusts for history-based expectations. By contrast, hiring reflects a longer-term perspective that emphasizes outcomes over the manager’s career as well as the most recent campaign, rewards success but does not penalize failure, and exhibits no adjustment for historical expectations. We explain these asymmetries in terms of the disparate displays of rationality that organizations enact in response to their own outcomes versus those of others. Experiential feedback is conditioned by a logic of accountability, vicarious feedback by a logic of emulation.

  19. No experience required: Violent crime and anticipated, vicarious, and experienced racial discrimination.

    Science.gov (United States)

    Herda, Daniel; McCarthy, Bill

    2018-02-01

    There is a growing body of evidence linking racial discrimination and juvenile crime, and a number of theories explain this relationship. In this study, we draw on one popular approach, Agnew's general strain theory, and extend prior research by moving from a focus on experienced discrimination to consider two other forms, anticipated and vicarious discrimination. Using data on black, white, and Hispanic youth, from the Project on Human Development in Chicago Neighborhoods (PHDCN), we find that experienced, anticipated, and to a lesser extent, vicarious discrimination, significantly predict violent crime independent of a set of neighborhood, parental, and individual level controls, including prior violent offending. Additional analyses on the specific contexts of discrimination reveal that violence is associated with the anticipation of police discrimination. The effects tend to be larger for African American than Hispanic youth, but the differences are not statistically significant. These findings support the thesis that, like other strains, discrimination may not have to be experienced directly to influence offending. Copyright © 2017. Published by Elsevier Inc.

  20. Paired Peer Learning through Engineering Education Outreach

    Science.gov (United States)

    Fogg-Rogers, Laura; Lewis, Fay; Edmonds, Juliet

    2017-01-01

    Undergraduate education incorporating active learning and vicarious experience through education outreach presents a critical opportunity to influence future engineering teaching and practice capabilities. Engineering education outreach activities have been shown to have multiple benefits; increasing interest and engagement with science and…

  1. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  2. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create audiovisual...

  3. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  4. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  5. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  6. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... audiovisual records? 1237.16 Section 1237.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.16 How do agencies store audiovisual records? Agencies must maintain appropriate storage conditions for permanent...

  7. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  8. Rhythmic synchronization tapping to an audio-visual metronome in budgerigars.

    Science.gov (United States)

    Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa

    2011-01-01

    In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.
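
    The record does not describe the synchronization analysis. One standard way to quantify entrainment to an isochronous metronome, assumed here purely for illustration, is to convert each tap time to a phase relative to the metronome period and compute the mean resultant vector length.

```python
# Sketch of a circular-statistics synchronization measure for isochronous tapping:
# convert tap times to phases relative to the metronome period and take the mean
# resultant vector length R (0 = no entrainment, 1 = perfect phase locking).
# Tap times are invented; this is not the cited study's analysis pipeline.
import numpy as np

period_ms = 500.0                                        # metronome inter-onset interval
taps_ms = np.array([20, 515, 1010, 1490, 2025, 2510])    # hypothetical peck times

phases = 2 * np.pi * ((taps_ms % period_ms) / period_ms)
R = np.abs(np.mean(np.exp(1j * phases)))
print(f"vector strength R = {R:.2f}")
```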

  9. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    Science.gov (United States)

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  10. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers ... that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech specific mode of audiovisual integration underlying the McGurk illusion.

  11. Nuevos desarrollos en el campus virtual UCM: estudio exploratorio sobre las plataformas e-learning en los estudios de comunicación audiovisual y publicidad / New developments in the virtual campus of the complutense university: an exploratory research...

    Directory of Open Access Journals (Sweden)

    Jorge CLEMENTE MEDIAVILLA

    2012-03-01

... students demand greater participation through the use of the most widespread social networks. This paper analyzes the capabilities of the present tool UCM Virtual Campus, WebCT 4.0, versus Moodle, the new working environment of the Campus. We examine accessibility and usability, communication tools and comprehensive assessment, as well as multimedia functionality. The proposed methodology includes both qualitative and quantitative techniques. Over three consecutive phases this study analyzes the professor’s experience using these tools. We also developed a questionnaire that was completed by the students in order to evaluate these e-learning platforms. Finally, the third phase of the research consisted of an experiment conducted in the classroom with the students simulating a real class context. The usability, accessibility and the communication opportunities are more dynamic in WebCT than in Moodle. In addition, students demand a more participatory role by supplementing the use of social networks.

  12. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR images was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p ...). AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.
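The record above quantifies prediction error as the root mean square error (RMSE) between measured and predicted respiratory traces. The snippet below is only a minimal illustration of that metric, not the study's kernel density estimation predictor; the 30 Hz abdominal-wall signal, the 1400 ms latency, and the latency-shifted baseline "prediction" are hypothetical stand-ins.

```python
import numpy as np

def prediction_rmse(real, predicted):
    """Root mean square error between a measured respiratory trace and its prediction."""
    real = np.asarray(real, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((real - predicted) ** 2)))

# Hypothetical example: a 60 s abdominal-wall trace sampled at 30 Hz, compared
# against a naive "prediction" that simply repeats the signal observed one
# system latency (1400 ms) earlier.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
measured = np.sin(2 * np.pi * 0.25 * t)          # idealised 15 breaths/min
latency = int(1.4 * fs)                          # 1400 ms expressed in samples
baseline = np.concatenate([measured[:latency], measured[:-latency]])
print(prediction_rmse(measured, baseline))
```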

  13. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information, which was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance and a dedicated operational team dealt with the logistics of the project including: ethics; finance; scriptwriting; filming; editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  14. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step towards exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  15. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  16. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  17. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine...

  18. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre ... and museum studies, existing case studies, and real life observations, the suggested framework instead stresses particular characteristics of contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication...

  19. The Moderating Effects of Peer and Parental Support on the Relationship Between Vicarious Victimization and Substance Use.

    Science.gov (United States)

    Miller, Riane N; Fagan, Abigail A; Wright, Emily M

    2014-10-01

    General strain theory (GST) hypothesizes that youth are more likely to engage in delinquency when they experience vicarious victimization, defined as knowing about or witnessing violence perpetrated against others, but that this relationship may be attenuated for those who receive social support from significant others. Based on prospective data from youth aged 8 to 17 participating in the Project on Human Development in Chicago Neighborhoods (PHDCN), this article found mixed support for these hypotheses. Controlling for prior involvement in delinquency, as well as other risk and protective factors, adolescents who reported more vicarious victimization had an increased likelihood of alcohol use in the short term, but not the long term, and victimization was not related to tobacco or marijuana use. Peer support did not moderate the relationship between vicarious victimization and substance use, but family support did. In contrast to strain theory's predictions, the relationship between vicarious victimization and substance use was stronger for those who had higher compared with lower levels of family support. Implications of these findings for strain theory and future research are discussed.

  20. Shedding light on our audiovisual heritage: perspectives to emphasise CERN Digital Memory

    CERN Document Server

    Salvador, Mathilde Estelle

    2017-01-01

This work aims to answer the question of how to add value to CERN’s audiovisual heritage available on CERN Document Server. In other words, it asks how to make what is hidden and classified more visible to the scientific community and the general public: namely CERN’s archives, and more precisely the audiovisual ones, because of their creative potential. Rather than focusing on their scientific and technical value, we analyse their artistic and attractive power. We will see that any kind of archive can be intentionally or even accidentally artistic and exciting, and that it is possible to change our vision of a photo, a sound or a film. This process of enhancement is a virtuous circle, as it has educational value and makes accessible scientific content that is normally out of reach. However, the problem of how to showcase such archives remains. That is why we will try to learn from other digital memory projects around the world and see how they have managed to highlight their own archives, in order to suggest new ways of enhancing au...

  1. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must be...

  2. Self-Efficacy and Vicarious Learning in Doctoral Studies at a Distance

    Science.gov (United States)

    Kozar, Olga; Lum, Juliet F.; Benson, Phil

    2015-01-01

    Even though there are increasing numbers of PhD students in the distance mode, our current understanding of PhD candidature at a distance is limited and incomplete. On the one end of the spectrum are accounts of unhappy and isolated doctoral students who are separated from communities of practice. At the same time, literature offers accounts of…

  3. Religiosity, Heavy Alcohol Use, and Vicarious Learning Networks among Adolescents in the United States

    Science.gov (United States)

    Gryczynski, Jan; Ward, Brian W.

    2012-01-01

    Previous research has found that religiosity may protect against risky alcohol and drug use behaviors among adolescents, but the social mechanics underpinning the relationship are not well understood. This study examined the relationship between religiosity, heavy drinking, and social norms among U.S. adolescents aged 12 to 17 years, using the…

  4. A Vicarious Learning Activity for University Sophomores in a Multiculturalism Course

    Science.gov (United States)

    Chennault, Ronald E.

    2005-01-01

    How can one teach a course about multiculturalism to a broad spectrum of university sophomores in a way that is research-based, pedagogically sound, and appealing--all in ten weeks? In this article, the author states that a course he teaches, "Multiculturalism in Education," examines cultural differences as they relate to social inequalities in…

  5. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  6. Audio-Visual Aids for Cooperative Education and Training.

    Science.gov (United States)

    Botham, C. N.

    Within the context of cooperative education, audiovisual aids may be used for spreading the idea of cooperatives and helping to consolidate study groups; for the continuous process of education, both formal and informal, within the cooperative movement; for constant follow up purposes; and for promoting loyalty to the movement. Detailed…

  7. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  8. Today's and tomorrow's retrieval practice in the audiovisual archive

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2010-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video

  9. Narrativa audiovisual i cinema d'animació per ordinador

    OpenAIRE

    Duran Castells, Jaume

    2009-01-01

FROM THE THESIS: This doctoral thesis studies the relationships between audiovisual narrative and computer-animated cinema, and analyses the Pixar Animation Studios feature films released between 1995 and 2006.

  10. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  11. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  12. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  13. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  14. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  15. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  16. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  17. Users Requirements in Audiovisual Search: A Quantitative Approach

    NARCIS (Netherlands)

    Nadeem, Danish; Ordelman, Roeland J.F.; Aly, Robin; Verbruggen, Erwin; Aalberg, Trond; Papatheodorou, Christos; Dobreva, Milena; Tsakonas, Giannis; Farrugia, Charles J.

    2013-01-01

    This paper reports on the results of a quantitative analysis of user requirements for audiovisual search that allow the categorisation of requirements and to compare requirements across user groups. The categorisation provides clear directions with respect to the prioritisation of system features

  18. Selected Audio-Visual Materials for Consumer Education. [New Version.

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  19. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  20. Not My Problem: Vicarious Conflict Adaptation with Human and Virtual Co-Actors

    Directory of Open Access Journals (Sweden)

    Michiel M. Spapé

    2016-04-01

    Full Text Available The Simon effect refers to an incompatibility between stimulus and response locations resulting in a conflict situation and, consequently, slower responses. Like other conflict effects, it is commonly reduced after repetitions, suggesting an executive control ability, which flexibly rewires cognitive processing and adapts to conflict. Interestingly, conflict is not necessarily individually defined: the Social Simon effect refers to a scenario where two people who share a task show a conflict effect where a single person does not. Recent studies showed these observations might converge into what could be called vicarious conflict adaptation, with evidence indicating that observing someone else’s conflict may subsequently reduce one’s own. While plausible, there is reason for doubt: both the social aspect of the Simon Effect, and the degree to which executive control accounts for the conflict adaptation effect, have become foci of debate in recent studies. Here, we present two experiments that were designed to test the social dimension of the effect by varying the social relationship between the actor and the co-actor. In Experiment 1, participants performed a conflict task with a virtual co-actor, while the actor-observer relationship was manipulated as a function of the similarity between response modalities. In Experiment 2, the same task was performed both with a virtual and with a human co-actor, while heart-rate measurements were taken to measure the impact of observed conflict on autonomous activity. While both experiments replicated the interpersonal conflict adaptation effects, neither showed evidence of the critical social dimension. We consider the findings as demonstrating that vicarious conflict adaptation does not rely on the social relationship between the actor and co-actor.

  1. Subtitles and language learning principles, strategies and practical experiences

    CERN Document Server

    Mariotti, Cristina; Caimi, Annamaria

    2014-01-01

    The articles collected in this publication combine diachronic and synchronic research with the description of updated teaching experiences showing the educational role of subtitled audiovisuals in various foreign language learning settings.

  2. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  3. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.

  4. Supporting Reflective Practices in Social Change Processes with the Dynamic Learning Agenda: An Example of Learning about the Process towards Disability Inclusive Development

    Science.gov (United States)

    van Veen, Saskia C.; de Wildt-Liesveld, Renée; Bunders, Joske F. G.; Regeer, Barbara J.

    2014-01-01

    Change processes are increasingly seen as the solution to entrenched (social) problems. However, change is difficult to realise while dealing with multiple actors, values, and approaches. (Inter)organisational learning is seen as a way to facilitate reflective practices in social change that support emergent changes, vicarious learning, and…

  5. Voice over: Audio-visual congruency and content recall in the gallery setting.

    Science.gov (United States)

    Fairhurst, Merle T; Scott, Minnie; Deroy, Ophelia

    2017-01-01

    Experimental research has shown that pairs of stimuli which are congruent and assumed to 'go together' are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.

  6. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
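The localisation approach in this record is adaptive and learned through self-motion and visual feedback; the sketch below shows only the textbook, non-adaptive core that such systems build on: estimating the interaural time difference (ITD) by cross-correlation and converting it to an azimuth with the far-field relation sin(theta) = c * ITD / d. The microphone spacing, sampling rate and test signal are assumptions, not values from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s (assumed, ~20 degrees C)
MIC_SPACING = 0.10       # m between the two "cochlea" microphones (hypothetical)

def itd_from_signals(left, right, fs):
    """Estimate the interaural time difference (seconds) via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

def azimuth_from_itd(itd):
    """Far-field approximation: sin(theta) = c * ITD / d, clipped to a valid range."""
    s = np.clip(SPEED_OF_SOUND * itd / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Hypothetical check with white noise delayed by 12 samples at 48 kHz.
fs = 48_000
src = np.random.default_rng(1).normal(size=fs // 10)
delay = 12
left, right = src[delay:], src[:-delay]          # one channel lags the other
print(azimuth_from_itd(itd_from_signals(left, right, fs)))
```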

  7. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    Science.gov (United States)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can act as a motivating factor, making them active and reflective in their learning and intellectually engaged in a recursive process. This project was implemented in high school level physics laboratory classes, resulting in 22 videos which are considered as audiovisual reports and analysed under two components: theoretical and experimental. This kind of project allows the students to spontaneously use features such as music, pictures, dramatization, animations, etc., even when the didactic laboratory may not be the place where aesthetic and cultural dimensions are generally developed. This could be due to the fact that digital media are more legitimately used as cultural tools than as teaching strategies.

  8. A robotic approach to understanding the role and the mechanism of vicarious trial-and-error in a T-maze task.

    Science.gov (United States)

    Matsuda, Eiko; Hubert, Julien; Ikegami, Takashi

    2014-01-01

    Vicarious trial-and-error (VTE) is a behavior observed in rat experiments that seems to suggest self-conflict. This behavior is seen mainly when the rats are uncertain about making a decision. The presence of VTE is regarded as an indicator of a deliberative decision-making process, that is, searching, predicting, and evaluating outcomes. This process is slower than automated decision-making processes, such as reflex or habituation, but it allows for flexible and ongoing control of behavior. In this study, we propose for the first time a robotic model of VTE to see if VTE can emerge just from a body-environment interaction and to show the underlying mechanism responsible for the observation of VTE and the advantages provided by it. We tried several robots with different parameters, and we have found that they showed three different types of VTE: high numbers of VTE at the beginning of learning, decreasing numbers afterward (similar VTE pattern to experiments with rats), low during the whole learning period, and high numbers all the time. Therefore, we were able to reproduce the phenomenon of VTE in a model robot using only a simple dynamical neural network with Hebbian learning, which suggests that VTE is an emergent property of a plastic and embodied neural network. From a comparison of the three types of VTE, we demonstrated that 1) VTE is associated with chaotic activity of neurons in our model and 2) VTE-showing robots were robust to environmental perturbations. We suggest that the instability of neuronal activity found in VTE allows ongoing learning to rebuild its strategy continuously, which creates robust behavior. Based on these results, we suggest that VTE is caused by a similar mechanism in biology and leads to robust decision making in an analogous way.

  9. A robotic approach to understanding the role and the mechanism of vicarious trial-and-error in a T-maze task.

    Directory of Open Access Journals (Sweden)

    Eiko Matsuda

Full Text Available Vicarious trial-and-error (VTE) is a behavior observed in rat experiments that seems to suggest self-conflict. This behavior is seen mainly when the rats are uncertain about making a decision. The presence of VTE is regarded as an indicator of a deliberative decision-making process, that is, searching, predicting, and evaluating outcomes. This process is slower than automated decision-making processes, such as reflex or habituation, but it allows for flexible and ongoing control of behavior. In this study, we propose for the first time a robotic model of VTE to see if VTE can emerge just from a body-environment interaction and to show the underlying mechanism responsible for the observation of VTE and the advantages provided by it. We tried several robots with different parameters, and we have found that they showed three different types of VTE: high numbers of VTE at the beginning of learning, decreasing numbers afterward (similar VTE pattern to experiments with rats), low during the whole learning period, and high numbers all the time. Therefore, we were able to reproduce the phenomenon of VTE in a model robot using only a simple dynamical neural network with Hebbian learning, which suggests that VTE is an emergent property of a plastic and embodied neural network. From a comparison of the three types of VTE, we demonstrated that 1) VTE is associated with chaotic activity of neurons in our model and 2) VTE-showing robots were robust to environmental perturbations. We suggest that the instability of neuronal activity found in VTE allows ongoing learning to rebuild its strategy continuously, which creates robust behavior. Based on these results, we suggest that VTE is caused by a similar mechanism in biology and leads to robust decision making in an analogous way.
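Both versions of this record attribute the robot's behaviour to "a simple dynamical neural network with Hebbian learning". As a loose illustration of that ingredient alone (the paper's actual architecture, parameters and sensorimotor loop are not reproduced here), one Hebbian weight update with a decay term could look like the following; the layer sizes and learning rates are arbitrary.

```python
import numpy as np

def hebbian_step(weights, pre, post, eta=0.01, decay=0.001):
    """Strengthen weights between co-active units; a small decay keeps them bounded."""
    return weights + eta * np.outer(post, pre) - decay * weights

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 6))   # 6 hypothetical sensor units -> 4 motor units
for _ in range(100):                     # repeated sensing/acting/updating cycle
    sensors = rng.random(6)              # stand-in for the robot's sensor readings
    motors = np.tanh(w @ sensors)        # simple feed-forward activation
    w = hebbian_step(w, sensors, motors)
print(w.round(3))
```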

  10. Testing audiovisual comprehension tasks with questions embedded in videos as subtitles: a pilot multimethod study

    Directory of Open Access Journals (Sweden)

    Juan Carlos Casañ Núñez

    2017-06-01

Full Text Available Listening, watching, reading and writing simultaneously in a foreign language is very complex. This paper is part of wider research which explores the use of audiovisual comprehension questions imprinted in the video image in the form of subtitles and synchronized with the relevant fragments for the purpose of language learning and testing. Compared to viewings where the comprehension activity is available only on paper, this innovative methodology may provide some benefits. Among them, it could reduce the conflict in visual attention between watching the video and completing the task, by spatially and temporally approximating the questions and the relevant fragments. The technique is seen as especially beneficial for students with a low proficiency language level. The main objectives of this study were to investigate if embedded questions had an impact on SFL students’ audiovisual comprehension test performance and to find out what examinees thought about them. A multimethod design (Morse, 2003) involving the sequential collection of three quantitative datasets was employed. A total of 41 learners of Spanish as a foreign language (SFL) participated in the study (22 in the control group and 19 in the experimental one). Informants were selected by non-probabilistic sampling. The results showed that imprinted questions did not have any effect on test performance. Test-takers’ attitudes towards this methodology were positive. Globally, students in the experimental group agreed that the embedded questions helped them to complete the tasks. Furthermore, most of them were in favour of having the questions imprinted in the video in the audiovisual comprehension test of the final exam. These opinions are in line with those obtained in previous studies that looked into experts’, SFL students’ and SFL teachers’ views about this methodology (Casañ Núñez, 2015a, 2016a, in press-b). On the whole, these studies suggest that this technique has

  11. Sistema audiovisual para reconocimiento de comandos / Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos

    2011-08-01

Full Text Available We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. Audio was represented using Mel cepstral coefficients and their first and second order time derivatives. In order to characterize the video signal, a set of high-level visual features was tracked automatically throughout the sequences. Automatic initialization of the algorithm was performed using color transformations and active contour models based on Gradient Vector Flow ("GVF snakes") on the lip region, whereas tracking used similarity measures across neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of the automatic speech recognition system using only audio information (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then present the design of systems using only video features (VSR) and using combined audio and video features (AVSR). Finally, the results of the three systems are compared on an in-house database in Spanish and French, and the influence of acoustic noise is shown, demonstrating that the AVSR system is more robust than ASR and VSR.
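The audio front end described in this record (Mel cepstral coefficients plus their first and second temporal derivatives) can be approximated with off-the-shelf tooling. The sketch below uses librosa as a stand-in; the file name, sampling rate and number of coefficients are assumptions, and the HMM-based isolated-word recognisers and GVF-snake lip tracking from the paper are not shown.

```python
import numpy as np
import librosa

def audio_features(path, n_mfcc=13):
    """MFCCs plus first- and second-order deltas, stacked per frame (3 * n_mfcc columns)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2]).T

features = audio_features("command.wav")   # hypothetical recording of one spoken command
print(features.shape)                       # (number of frames, 39)
```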

  12. Vicarious Traumatisation in Practitioners Who Work with Adult Survivors of Sexual Violence in Child Sexual Abuse: Literature Review and Directions for Future Research

    OpenAIRE

    Choularia, Zoe; Hutchison, Craig; Karatzias, Thanos

    2009-01-01

    Primary objective: The authors sought to summarise and evaluate evidence regarding vicarious traumatisation (VT) in practitioners working with adult survivors of sexual violence and/or child sexual abuse (CSA). Methods and selection criteria: Relevant publications were identified from systematic literature searches of PubMed and PsycINFO. Studies were selected for inclusion if they examined vicarious traumatisation resulting from sexual violence and/or CSA work and were published in English b...

  13. Social learning theory and the effects of living arrangement on heavy alcohol use: results from a national study of college students.

    Science.gov (United States)

    Ward, Brian W; Gryczynski, Jan

    2009-05-01

    This study examined the relationship between living arrangement and heavy episodic drinking among college students in the United States. Using social learning theory as a framework, it was hypothesized that vicarious learning of peer and family alcohol-use norms would mediate the effects of living arrangement on heavy episodic drinking. Analyses were conducted using data from the 2001 Harvard School of Public Health College Alcohol Study, a national survey of full-time undergraduate students attending 4-year colleges or universities in the United States (N = 10,008). Logistic regression models examined the relationship between heavy episodic drinking and various measures of living arrangement and vicarious learning/social norms. Mediation of the effects of living arrangement was tested using both indirect and direct methods. Both student living arrangement and vicarious-learning/social-norm variables remained significant predictors of heavy episodic drinking in multivariate models when controlling for a variety of individual characteristics. Slight mediation of the effects of living arrangement on heavy episodic drinking by vicarious learning/social norms was confirmed for some measures. Although vicarious learning of social norms does appear to play a role in the association between living arrangement and alcohol use, other processes may underlie the relationship. These findings suggest that using theory alongside empirical evidence to inform the manipulation of living environments could present a promising policy strategy to reduce alcohol-related harm in collegiate contexts.

  14. A Tutorial Task and Tertiary Courseware Model for Collaborative Learning Communities

    Science.gov (United States)

    Newman, Julian; Lowe, Helen; Neely, Steve; Gong, Xiaofeng; Eyers, David; Bacon, Jean

    2004-01-01

    RAED provides a computerised infrastructure to support the development and administration of Vicarious Learning in collaborative learning communities spread across multiple universities and workplaces. The system is based on the OASIS middleware for Role-based Access Control. This paper describes the origins of the model and the approach to…

  15. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced "online" effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in presence of survived V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual

  16. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given
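The just noticeable difference (JND) reported per rat in this record is commonly obtained by fitting a psychometric function to temporal order judgments. The sketch below fits a cumulative Gaussian to made-up "visual first" proportions across SOAs and reads the JND off the 75% point; it is a generic illustration, not the study's exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    """Psychometric function: probability of responding 'visual first' at a given SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical data: SOA in ms (negative = auditory led) and response proportions.
soa = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
p_visual_first = np.array([0.05, 0.20, 0.35, 0.52, 0.70, 0.85, 0.97])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soa, p_visual_first, p0=(0.0, 80.0))
jnd = sigma * norm.ppf(0.75)   # distance from the PSS to the 75% point
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```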

  17. Your Most Essential Audiovisual Aid--Yourself!

    Science.gov (United States)

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  18. The Moderating Effects of Peer and Parental Support on the Relationship Between Vicarious Victimization and Substance Use

    OpenAIRE

    Miller, Riane N.; Fagan, Abigail A.; Wright, Emily M.

    2014-01-01

    General strain theory (GST) hypothesizes that youth are more likely to engage in delinquency when they experience vicarious victimization, defined as knowing about or witnessing violence perpetrated against others, but that this relationship may be attenuated for those who receive social support from significant others. Based on prospective data from youth aged 8 to 17 participating in the Project on Human Development in Chicago Neighborhoods (PHDCN), this article found mixed support for thes...

  19. Noradrenergic signaling in the medial prefrontal cortex and amygdala differentially regulates vicarious trial-and-error in a spatial decision-making task.

    Science.gov (United States)

    Amemiya, Seiichiro; Kubota, Natsuko; Umeyama, Nao; Nishijima, Takeshi; Kita, Ichiro

    2016-01-15

    In uncertain choice situations, we deliberately search and evaluate possible options before taking an action. Once we form a preference regarding the current situation, we take an action more automatically and with less deliberation. In rats, the deliberation process can be seen in vicarious trial-and-error behavior (VTE), which is a head-orienting behavior toward options at a choice point. Recent neurophysiological findings suggest that VTE reflects the rat's thinking about future options as deliberation, expectation, and planning when rats feel conflict. VTE occurs depending on the demand: an increase occurs during initial learning, and a decrease occurs with progression in learning. However, the brain circuit underlying the regulation of VTE has not been thoroughly examined. In situations in which VTE often appears, the medial prefrontal cortex (mPFC) and the amygdala (AMY) are crucial for learning and decision making. Our previous study reported that noradrenaline regulates VTE. Here, to investigate whether the mPFC and AMY are involved in regulation of VTE, we examined the effects of local injection of clonidine, an alpha2 adrenergic autoreceptor agonist, into either region in rats during VTE and choice behavior during a T-maze choice task. Injection of clonidine into either region impaired selection of the advantageous choice in the task. Furthermore, clonidine injection into the mPFC suppressed occurrence of VTE in the early phase of the task, whereas injection into the AMY inhibited the decrease in VTE in the later phase and thus maintained a high level of VTE throughout the task. These results suggest that the mPFC and AMY play a role in the increase and decrease in VTE, respectively, and that noradrenergic mechanisms mediate the dynamic regulation of VTE over experiences. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Vicarious Calibration of sUAS Microbolometer Temperature Imagery for Estimation of Radiometric Land Surface Temperature

    Directory of Open Access Journals (Sweden)

    Alfonso Torres-Rua

    2017-06-01

    Full Text Available In recent years, the availability of lightweight microbolometer thermal cameras compatible with small unmanned aerial systems (sUAS) has allowed their use in diverse scientific and management activities that require sub-meter pixel resolution. Nevertheless, as with sensors already used in temperature remote sensing (e.g., Landsat satellites), a radiance atmospheric correction is necessary to estimate land surface temperature. This is because atmospheric conditions at any sUAS flight elevation will have an adverse impact on the image accuracy, derived calculations, and study replicability using the microbolometer technology. This study presents a vicarious calibration methodology (sUAS-specific, time-specific, flight-specific, and sensor-specific) for sUAS temperature imagery traceable back to NIST standards and current atmospheric correction methods. For this methodology, a three-year data collection campaign with a sUAS called “AggieAir”, developed at Utah State University, was performed for vineyards near Lodi, California, for flights conducted at different times (early morning, Landsat overpass, and mid-afternoon) and seasonal conditions. From the results of this study, it was found that, despite the spectral response of microbolometer cameras (7.0 to 14.0 μm), it was possible to account for the effects of atmospheric and sUAS operational conditions, regardless of time and weather, to acquire accurate surface temperature data. In addition, it was found that the main atmospheric correction parameters (transmissivity and atmospheric radiance) significantly varied over the course of a day. These parameters fluctuated the most in early morning and partially stabilized in Landsat overpass and in mid-afternoon times. In terms of accuracy, estimated atmospheric correction parameters presented adequate statistics (confidence bounds under ±0.1 for transmissivity and ±1.2 W/m2/sr/um for atmospheric radiance), with a range of RMSE below 1.0 W/m2/sr
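
    The correction the abstract describes hinges on two atmospheric parameters, transmissivity and upwelling path radiance. The sketch below shows a generic single-channel form of that correction and an inverse-Planck conversion to temperature; the numbers, effective wavelength, and emissivity are illustrative assumptions and this is not the paper's AggieAir workflow.

```python
# Hedged sketch: simplified single-channel atmospheric correction for thermal imagery.
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def surface_radiance(l_sensor, tau, l_atm, emissivity=0.98):
    """Remove path radiance and transmissivity effects from at-sensor radiance."""
    return (l_sensor - l_atm) / (tau * emissivity)

def brightness_temperature(l_surf, wavelength_um=10.5):
    """Invert Planck's law at an effective wavelength; l_surf in W/(m^2 sr um)."""
    lam = wavelength_um * 1e-6
    l_si = l_surf * 1e6  # convert per-um spectral radiance to per-m
    return (H * C / (KB * lam)) / np.log(1.0 + 2.0 * H * C**2 / (lam**5 * l_si))

# Illustrative magnitudes only: at-sensor radiance, transmissivity, path radiance
l_sensor, tau, l_atm = 9.5, 0.85, 1.2

l_surf = surface_radiance(l_sensor, tau, l_atm)
print(f"Surface radiance {l_surf:.2f} W/m2/sr/um, T ~ {brightness_temperature(l_surf):.1f} K")
```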

  1. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

    Full Text Available This study aims to further examine the cross-cultural differences in multisensory emotion perception between Western and East Asian people. We recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions, and then conducted a validation experiment of the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged which emotion the actor was expressing by choosing among six options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio track of the same videos without the video images, with the same task. We analyzed the categorization responses based on accuracy and confusion matrices and created a controlled audiovisual stimulus set.

  2. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  3. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin?Ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  4. Neural correlates of quality during perception of audiovisual stimuli

    CERN Document Server

    Arndt, Sebastian

    2016-01-01

    This book presents a new approach to examining the perceived quality of audiovisual sequences. It uses electroencephalography to understand how user quality judgments are formed within a test participant, and what the physiological implications of exposure to lower-quality media might be. The book redefines the experimental paradigms for using EEG in quality assessment so that they better suit the requirements of standard subjective quality testing, adjusting experimental protocols and stimuli accordingly.

  5. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    Science.gov (United States)

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

    Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercises performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise had 10 min of duration (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rate of perceived exertion and heart rate variability were monitored to further understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree when compared to PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation was real. This mirroring mechanism has the potential to up-/down-regulate the cardiac work as if in fact the exercise intensities were different in each condition.

  6. Audiovisual speech perception development at varying levels of perceptual processing

    OpenAIRE

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the le...

  7. Health Education Audiovisual Media on Mental Illness for Family

    OpenAIRE

    Wahyuningsih, Dyah; Wiyati, Ruti; Subagyo, Widyo

    2012-01-01

    This study aimed to produce health education media in the form of Video Compact Disks (VCDs). The first disk covers how to care for a patient with social isolation and the second disk covers how to care for a patient with violent behaviour. The audiovisual media were delivered to families in the psychiatric ward of Banyumas hospital. The families were divided into two groups: the first group was given health education about social isolation and the second group was given healt...

  8. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs. monetary conditioning) and multisensory paradigm (2AFC visual detection vs. redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (the conditioned stimuli, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In the 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in the detection of visual targets.
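
    The two audiovisual benefit measures named in this abstract, mean response facilitation and the race model violation, can be computed as in the sketch below. The reaction-time data and quantile grid are illustrative assumptions, not the study's data or exact analysis pipeline.

```python
# Hedged sketch: multisensory facilitation and Miller's race-model inequality test.
import numpy as np

def mean_facilitation(rt_a, rt_v, rt_av):
    """Facilitation as (mean A + mean V)/2 - mean AV, in the same units as the RTs."""
    return (np.mean(rt_a) + np.mean(rt_v)) / 2.0 - np.mean(rt_av)

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Violation where the AV CDF exceeds min(1, CDF_A + CDF_V) at matched quantiles."""
    t = np.quantile(rt_av, quantiles)
    cdf = lambda rt: np.array([np.mean(rt <= x) for x in t])
    bound = np.clip(cdf(rt_a) + cdf(rt_v), 0.0, 1.0)
    return np.maximum(cdf(rt_av) - bound, 0.0)

rng = np.random.default_rng(0)
rt_a = rng.normal(420, 50, 200)   # auditory-only RTs (ms), hypothetical
rt_v = rng.normal(440, 50, 200)   # visual-only RTs
rt_av = rng.normal(390, 45, 200)  # audiovisual RTs

print(f"Facilitation: {mean_facilitation(rt_a, rt_v, rt_av):.1f} ms")
print(f"Max race-model violation: {race_model_violation(rt_a, rt_v, rt_av).max():.3f}")
```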

  9. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  10. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  11. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for young Japanese adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the 120 ms delay corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  12. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  14. Audiovisual integration of speech in a patient with Broca's Aphasia

    Science.gov (United States)

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  15. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  16. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  17. Drought responses of three closely related Caragana species: implication for their vicarious distribution.

    Science.gov (United States)

    Ma, Fei; Na, Xiaofan; Xu, Tingting

    2016-05-01

    Drought is a major environmental constraint affecting the growth and distribution of plants in the desert region of the Inner Mongolia plateau. Caragana microphylla, C. liouana, and C. korshinskii are phylogenetically close but are distributed vicariously across the Mongolian plateau. To gain a better understanding of the ecological differentiation among these three species, we examined leaf gas exchange, growth, water use efficiency, and biomass accumulation and allocation by subjecting their seedlings to low and high drought treatments in a glasshouse. Increasing drought stress had a significant effect on many aspects of seedling performance in all species, but physiology and growth varied among species in response to drought. Compared with the other two species, C. korshinskii exhibited lower sensitivity of photosynthetic rate and growth to drought, lower specific leaf area, higher biomass allocation to roots, and higher water use efficiency. Only minor interspecific differences in growth performance were observed between C. liouana and C. microphylla. These results indicate that the faster seedling growth rate and more efficient water use of C. korshinskii should confer increased drought tolerance and facilitate its establishment in regions of more severe drought relative to C. liouana and C. microphylla.

  18. Ultra-portable field transfer radiometer for vicarious calibration of earth imaging sensors

    Science.gov (United States)

    Thome, Kurtis; Wenny, Brian; Anderson, Nikolaus; McCorkel, Joel; Czapla-Myers, Jeffrey; Biggar, Stuart

    2018-06-01

    A small portable transfer radiometer has been developed as part of an effort to ensure the quality of upwelling radiance from test sites used for vicarious calibration in the solar reflective. The test sites are used to predict top-of-atmosphere reflectance relying on ground-based measurements of the atmosphere and surface. The portable transfer radiometer is designed for one-person operation for on-site field calibration of instrumentation used to determine ground-leaving radiance. The current work describes the detector- and source-based radiometric calibration of the transfer radiometer highlighting the expected accuracy and SI-traceability. The results indicate differences between the detector-based and source-based results greater than the combined uncertainties of the approaches. Results from recent field deployments of the transfer radiometer using a solar radiation based calibration agree with the source-based laboratory calibration within the combined uncertainties of the methods. The detector-based results show a significant difference to the solar-based calibration. The source-based calibration is used as the basis for a radiance-based calibration of the Landsat-8 Operational Land Imager that agrees with the OLI calibration to within the uncertainties of the methods.

  19. Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy

    Directory of Open Access Journals (Sweden)

    Eswen Fava

    2014-08-01

    Full Text Available Initially, infants are capable of discriminating phonetic contrasts across the world’s languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.

  20. Use of audiovisual resources in a FlexQuest strategy on Radioactivity

    Directory of Open Access Journals (Sweden)

    Flávia Cristina Gomes Catunda de Vasconcelos

    2012-03-01

    Full Text Available This paper presents a study conducted in a private school in Recife - PE, Brazil, with 25 students in the first year of high school. One of the aims was to evaluate the implementation of the FlexQuest strategy in the teaching of radioactivity. FlexQuest incorporates, within the WebQuest, Cognitive Flexibility Theory (TFC), a theory of teaching, learning and knowledge representation that proposes strategies for the acquisition of advanced levels of knowledge. Using a qualitative approach, the interventions were analysed around the landscape crossings that the students accomplished while carrying out the required tasks. The results revealed that the strategy involves audiovisual resources, and that these make learning possible provided the strategies are embedded in a constructivist approach to teaching and learning. In this sense, it was perceived to be effective at the introductory/stimulating level for understanding the applications of radioactivity, offering a tool based on real situations and enabling students to develop a critical eye on what is televised, including the study of radioactivity.

  1. Detection and Identification of Rare Audiovisual Cues

    CERN Document Server

    Anemüller, Jörn; Gool, Luc

    2012-01-01

    Machine learning builds models of the world using training data from the application domain and prior knowledge about the problem. The models are later applied to future data in order to estimate the current state of the world. An implied assumption is that the future is stochastically similar to the past. The approach fails when the system encounters situations that are not anticipated from past experience. In contrast, successful natural organisms identify new, unanticipated stimuli and situations and frequently generate appropriate responses. This observation led to the initiation of the DIRAC EC project in 2006. In 2010 a workshop was held, aimed at bringing together researchers and students from different disciplines in order to present and discuss new approaches for identifying and reacting to unexpected events in information-rich environments. This book includes a summary of the achievements of the DIRAC project in chapter 1, and a collection of the papers presented in this workshop in ...

  2. AUTHOR’S DIGITAL VIDEO: CREATING AND USING FOR THE LEARNING

    Directory of Open Access Journals (Sweden)

    Igor V. Riatshentcev

    2014-01-01

    Full Text Available The article considers the functionality of software to construct the author’s video for its use in distance learning and its audiovisual implementation in the open educational space. 

  3. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  4. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    NARCIS (Netherlands)

    Talsma, D.; Doty, Tracy J.; Woldorff, Marty G.

    2007-01-01

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and

  5. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  6. Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability

    NARCIS (Netherlands)

    Francisco, A.A.; Groen, M.A.; Jesse, A.; McQueen, J.M.

    2017-01-01

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a

  7. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced as well as a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  8. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Pollock, Sean; Tse, Regina; Martin, Darren

    2015-01-01

    This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed.

  9. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a... section 337 as to certain audiovisual components and products containing the same with respect to claims 1...

  10. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  11. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  12. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements on the Public Interest AGENCY: U.S... infringing audiovisual components and products containing the same, imported by Funai Corporation, Inc. of...

  13. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces the reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as the optical flow in the presence of modulated audio signals, little is known about the processing of the multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs during aging, we sought to gain some insight on whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive, and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
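
    The interaction categories this abstract reports (superadditive, additive, suppressive) are commonly labelled by comparing the multisensory response with the unimodal responses, as in the sketch below. The tolerance criterion and the example spike counts are illustrative assumptions, not the study's analysis.

```python
# Hedged sketch: labelling audiovisual interaction types from per-trial spike counts.
import numpy as np

def classify_interaction(av, a, v, additive_tol=0.1):
    """'Suppressive' = AV below the best unimodal response; otherwise compare AV
    with the additive prediction (mean A + mean V) within a relative tolerance."""
    mean_av, mean_a, mean_v = np.mean(av), np.mean(a), np.mean(v)
    additive_pred = mean_a + mean_v
    if mean_av < max(mean_a, mean_v):
        return "suppressive"
    if np.isclose(mean_av, additive_pred, rtol=additive_tol):
        return "additive"
    return "superadditive" if mean_av > additive_pred else "sub-additive"

rng = np.random.default_rng(42)
a = rng.poisson(4, 40)    # spikes per trial, auditory stimulus alone (hypothetical)
v = rng.poisson(3, 40)    # visual (looming grating) alone
av = rng.poisson(10, 40)  # combined audiovisual presentation
print(classify_interaction(av, a, v))
```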

  14. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with

  15. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  16. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  17. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide

  18. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  19. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  20. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    Science.gov (United States)

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  1. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

    Our (co-written with Thomas van den Berg) media-rich, open-access Scalar e-book on the Audiovisual Essay practice is available online: http://scalar.usc.edu/works/film-studies-in-motion Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere

  2. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Science.gov (United States)

    2010-07-01

    ... standards for transfer apply to audiovisual records, cartographic, and related records? 1235.42 Section 1235... Standards § 1235.42 What specifications and standards for transfer apply to audiovisual records... elements that are needed for future preservation, duplication, and reference for audiovisual records...

  3. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have copyright...

  4. Audiovisual facilitation of clinical knowledge: a paradigm for dispersed student education based on Paivio's Dual Coding Theory.

    Science.gov (United States)

    Hartland, William; Biddle, Chuck; Fallacaro, Michael

    2008-06-01

    This article explores the application of Paivio's Dual Coding Theory (DCT) as a scientifically sound rationale for the effects of multimedia learning in programs of nurse anesthesia. We explore and highlight this theory as a practical infrastructure for programs that work with dispersed students (ie, distance education models). Exploring the work of Paivio and others, we are engaged in an ongoing outcome study using audiovisual teaching interventions (SBVTIs) that we have applied to a range of healthcare providers in a quasiexperimental model. The early results of that study are reported in this article. In addition, we have observed powerful and sustained learning in a wide range of healthcare providers with our SBVTIs and suggest that this is likely explained by DCT.

  5. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No
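
    The congruency criterion described above (different responses to congruent versus incongruent audiovisual stimuli) reduces to a difference between condition means in a chosen time window. The sketch below illustrates this for single-channel ERP epochs; the array shapes, time window, and synthetic data are assumptions, not the study's EEG/fMRI pipeline.

```python
# Hedged sketch: a congruency effect as congruent-minus-incongruent mean ERP amplitude.
import numpy as np

def congruency_effect(congruent, incongruent, times, window=(0.15, 0.25)):
    """congruent/incongruent: arrays of shape (n_trials, n_samples) for one channel.
    Returns the mean-amplitude difference (in volts) within the time window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    return congruent[:, mask].mean() - incongruent[:, mask].mean()

# Illustrative synthetic epochs: 1-second epochs sampled at 500 Hz
times = np.linspace(0.0, 1.0, 500)
rng = np.random.default_rng(2)
congruent = rng.normal(0.0, 1e-6, (80, 500))
incongruent = rng.normal(0.5e-6, 1e-6, (80, 500))
print(f"Congruency effect: {congruency_effect(congruent, incongruent, times):.2e} V")
```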

  6. The endemic Patagonian vespertilionid assemblage is a depauperate ecomorphological vicariant of species-rich neotropical assemblages

    Institute of Scientific and Technical Information of China (English)

    Analía L. GIMÉNEZ; Norberto P. GIANNINI

    2017-01-01

    Vespertilionidae is the most diverse chiropteran family, and its diversity is concentrated in warm regions of the world; however, due to physiological and behavioral adaptations, these bats also dominate bat faunas in temperate regions. Here we performed a comparative study of vespertilionid assemblages from two broad regions of the New World, the cold and harsh Patagonia, versus the remaining temperate-to-subtropical, extra-Patagonian eco-regions of the South American Southern Cone. We took an ecomorphological approach and analyzed the craniodental morphological structure of these assemblages within a phylogenetic framework. We measured 17 craniodental linear variables from 447 specimens of 22 currently recognized vespertilionid species of the study regions. We performed a multivariate analysis to define the morphofunctional space, and calculated the pattern and degree of species packing for each assemblage. We assessed the importance of phylogeny and biogeography, and their impact on depauperate (Patagonian) versus rich (extra-Patagonian) vespertilionid assemblages as determinants of morphospace structuring. We implemented a sensitivity analysis associated with small samples of rare species. The morphological patterns were determined chiefly by the evolutionary history of the family. The Patagonian assemblage can be described as a structurally similar but comparatively depauperate ecomorphological version of those assemblages from neighboring extra-Patagonian eco-regions. The Patagonian assemblage seems to have formed by successively adding populations from northern regions that eventually speciated in the region, leaving corresponding sisters (vicariants) in extra-Patagonian eco-regions that continued to be characteristically richer. Despite being structurally akin, the degree of species packing in Patagonia was comparatively very low, which may reflect the effect of limited dispersal success into a region harsh for bat survival.

  7. Supervised Vicarious Calibration (SVC) of Multi-Source Hyperspectral Remote-Sensing Data

    Directory of Open Access Journals (Sweden)

    Anna Brook

    2015-05-01

    Full Text Available Introduced in 2011, the supervised vicarious calibration (SVC) approach is a promising method for radiometric calibration and atmospheric correction of airborne hyperspectral (HRS) data. This paper presents a comprehensive study by which the SVC method has been systematically examined and a complete protocol for its practical execution has been established—along with possible limitations encountered during the campaign. The technique was applied to multi-sourced HRS data in order to: (1) verify the at-sensor radiometric calibration and (2) obtain radiometric and atmospheric correction coefficients. Spanning two select study sites along the southeast coast of France, data were collected simultaneously by three airborne sensors (AisaDUAL, AHS and CASI-1500i) aboard two aircraft (a CASA of the National Institute for Aerospace Technology INTA, Spain, and a DORNIER 228 of the NERC-ARSF Centre, UK). The SVC ground calibration site was assembled along sand dunes near Montpellier and the thematic data were acquired from other areas in the south of France (Salon-de-Provence, Marseille, Avignon and Montpellier) on 28 October 2010 between 12:00 and 16:00 UTC. The results of this study confirm that the SVC method enables reliable inspection and, if necessary, in-situ fine radiometric recalibration of airborne hyperspectral data. Independent of sensor or platform quality, the SVC approach allows users to improve at-sensor data to obtain more accurate physical units and subsequently improved reflectance information. Flight direction was found to be important, whereas flight altitude had very low impact. The numerous rules and major outcomes of this experiment enable a new standard of atmospherically corrected data based on better radiometric output. Future research should examine the potential of SVC to be applied to super- and hyperspectral data obtained from on-orbit sensors.
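
    The core operation behind ground-target based (vicarious) recalibration of this kind is fitting coefficients that map at-sensor values onto ground-measured reference radiance. The sketch below shows a generic empirical-line style gain/offset fit for one band; the target values are hypothetical and this is only an illustration of the general idea, not the SVC protocol itself.

```python
# Hedged sketch: per-band gain/offset recalibration from ground reference targets.
import numpy as np

def fit_recalibration(sensor_values, ground_radiance):
    """Least-squares gain and offset such that ground = gain * sensor + offset."""
    gain, offset = np.polyfit(sensor_values, ground_radiance, deg=1)
    return gain, offset

# Hypothetical bright/grey/dark calibration targets for a single band
sensor_dn = np.array([820.0, 455.0, 120.0])  # at-sensor values over the targets
ground_l = np.array([95.0, 52.0, 13.0])      # field-measured radiance over the targets

gain, offset = fit_recalibration(sensor_dn, ground_l)
recalibrated = gain * np.array([300.0, 600.0]) + offset  # apply to new pixel values
print(f"gain={gain:.4f}, offset={offset:.2f}, recalibrated={recalibrated}")
```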

  8. Long-term music training modulates the recalibration of audiovisual simultaneity.

    Science.gov (United States)

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  9. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

    Full Text Available http://dx.doi.org/10.5007/2175-7968.2012v1n29p145   Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  10. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality with special application to audiovisual English-Arabic translation. Ample and thoroughly analysed examples have been provided to help in audiovisual English-Arabic translation decision-making. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT). The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when the text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text will be achieved when all aspects of cohesive devices are well accounted for pragmatically. This, combined with a good amount of psycholinguistic elements, will provide a text with optimal communicative value. Non-text is certainly devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce a final appropriate product.

  11. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    Full Text Available After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
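
    The Bayesian-integration idea invoked here can be illustrated as a posterior estimate of the audiovisual lag that combines a prior over lags (updated by repeated exposures) with the current sensory measurement. The Gaussian forms and all numbers below are illustrative assumptions, not the authors' model or parameter values.

```python
# Hedged sketch: posterior lag estimate for a Gaussian prior and Gaussian likelihood.
import numpy as np

def posterior_lag(measured_lag, sigma_sensory, prior_mean, sigma_prior):
    """Posterior mean and sd when combining a Gaussian prior with a Gaussian likelihood."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)
    post_mean = w * measured_lag + (1.0 - w) * prior_mean
    post_sd = np.sqrt((sigma_prior**2 * sigma_sensory**2)
                      / (sigma_prior**2 + sigma_sensory**2))
    return post_mean, post_sd

# Repeated exposure shifts the prior mean toward the exposed lag, so the perceived
# lag of a physically simultaneous pair (0 ms) is pulled toward that exposed lag.
mean, sd = posterior_lag(measured_lag=0.0, sigma_sensory=60.0,
                         prior_mean=-40.0, sigma_prior=50.0)
print(f"perceived lag ~ {mean:.1f} +/- {sd:.1f} ms")
```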

  12. La regulación audiovisual: argumentos a favor y en contra The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    Full Text Available El artículo analiza la efectividad de la regulación audiovisual y valora los diversos argumentos a favor y en contra de la existencia de consejos reguladores a nivel estatal. El debate sobre la necesidad de un organismo de este calado en España todavía persiste. La mayoría de los países comunitarios se han dotado de consejos competentes en esta materia, como es el caso del OFCOM en el Reino Unido o el CSA en Francia. En España, la regulación audiovisual se limita a organismos de alcance autonómico, como son el Consejo Audiovisual de Navarra, el de Andalucía y el Consell de l’Audiovisual de Catalunya (CAC), cuyo modelo también es abordado en este artículo. The article analyzes the effectiveness of the audio-visual regulation and assesses the different arguments for and against the existence of the broadcasting authorities at the state level. The debate of the necessity of a Spanish organism of regulation is still active. Most of the European countries have created some competent authorities, like the OFCOM in United Kingdom and the CSA in France. In Spain, the broadcasting regulation is developed by regional organisms, like the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l’Audiovisual de Catalunya (CAC), whose case is also studied in this article.

  13. Interpreters’ Experiences of Transferential Dynamics, Vicarious Traumatisation, and Their Need for Support and Supervision: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Emma Darroch

    2016-08-01

    Full Text Available Using thematic analysis, this systematic review aimed to explore sign language interpreters’ experiences of transferential dynamics and vicarious trauma. The notion of transferential dynamics, such as transference and countertransference, originates from psychodynamic therapy and refers to the mutual impact that client and therapist have on one another (Chessick, 1986). Psychodynamic models of therapy are predominantly concerned with unconscious processes and theorise that such processes have a powerful influence over an individual’s thoughts, feelings and behaviours (Howard, 2011). In contrast to countertransference, which is an immediate response to a particular client, vicarious trauma is thought to develop as a result of continuous exposure to, and engagement across, many therapeutic interactions (Pearlman & Saakvitne, 1995a). A search of the available literature uncovered a striking lack of research into the experiences of sign language interpreters; in all, only two of the 11 identified empirical studies addressed sign language interpreters, and the vast majority of the literature analysed reflected the experiences of spoken language interpreters. The results indicate that interpreters experience transferential dynamics as part of their work and suggest the presence of vicarious trauma among interpreters. Additionally, the review offers a unique contribution to the fields of interpreting and psychology, as it is consistently demonstrated that ‘service providers’ and ‘mental health workers’, umbrella terms that include psychologists, greatly underestimate the role of interpreters: they fail to consider the emotional impact of interpreters’ work and ignore the linguistic complexities of translation by failing to appreciate interpreters’ need for information in order to ensure an effective translation.

  14. Networked Learning in 70001 Programs.

    Science.gov (United States)

    Fine, Marija Futchs

    The 70001 Training and Employment Institute offers self-paced instruction through the use of computers and audiovisual materials to young people to improve opportunities for success in the work force. In 1988, four sites were equipped with Apple stand-alone software in an integrated learning system that included courses in reading and math, test…

  15. Heart House: Where Doctors Learn

    Science.gov (United States)

    American School and University, 1978

    1978-01-01

    The new learning center and administrative headquarters of the American College of Cardiology in Bethesda, Maryland, contain a unique classroom equipped with the highly sophisticated audiovisual aids developed to teach the latest techniques in the diagnosis and treatment of heart disease. (Author/MLF)

  16. Panorama del derecho audiovisual francés

    OpenAIRE

    Derieux, E. (Emmanuel)

    1999-01-01

    The article offers an overview of French audiovisual law up to 1998. Its basic characteristics are complexity and instability, due largely to an inability to keep pace with rapid technological change and to the continual modifications introduced by successive governments of different political orientations. The article also reviews some of the most relevant current issues, from the regulation of corporate structures to audiovisual programmes and their content...

  17. Sistemas de Registro Audiovisual del Patrimonio Urbano (SRAPU)

    OpenAIRE

    Conles, Liliana Eva

    2006-01-01

    The SRAPU system is a film-based survey method designed to build an interactive database of the urban landscape. On this basis, it seeks to formulate criteria organised in terms of flexibility and economic effectiveness, efficiency in data handling, and democratisation of information. SRAPU is conceived as an audiovisual record of tangible and intangible heritage, both in its singularity and as a historical and natural whole. Its conception involves the pro...

  18. A Joint Audio-Visual Approach to Audio Localization

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2015-01-01

    Localization of audio sources is an important research problem, e.g., to facilitate noise reduction. In the recent years, the problem has been tackled using distributed microphone arrays (DMA). A common approach is to apply direction-of-arrival (DOA) estimation on each array (denoted as nodes), a...... time-of-flight cameras. Moreover, we propose an optimal method for weighting such DOA and range information for audio localization. Our experiments on both synthetic and real data show that there is a clear, potential advantage of using the joint audiovisual localization framework....
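
    A minimal sketch of the kind of fusion involved (not the paper's estimator; the node geometry, variances and positions below are hypothetical) is an inverse-variance weighted combination of the source position implied by an audio DOA estimate and the position measured by a time-of-flight camera:

        import numpy as np

        def fuse_estimates(estimates, variances):
            # Inverse-variance weighted average of 2D position estimates.
            estimates = np.asarray(estimates, dtype=float)
            weights = 1.0 / np.asarray(variances, dtype=float)
            weights /= weights.sum()
            return (weights[:, None] * estimates).sum(axis=0)

        # DOA from an audio node at the origin: bearing 30 deg, assuming a nominal 2 m range.
        theta = np.deg2rad(30.0)
        audio_xy = 2.0 * np.array([np.cos(theta), np.sin(theta)])
        # Position from a time-of-flight camera (more precise in this toy example).
        camera_xy = np.array([1.8, 1.1])

        print(fuse_estimates([audio_xy, camera_xy], variances=[0.5, 0.1]))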

  19. Content and retention evaluation of an audiovisual patient-education program on bronchodilators.

    Science.gov (United States)

    Darr, M S; Self, T H; Ryan, M R; Vanderbush, R E; Boswell, R L

    1981-05-01

    A study was conducted to: (1) evaluate the effect of a slide-tape program on patients' short-term and long-term knowledge about their bronchodilator medications; and (2) determine if any differences exist in learning or retention patterns for different content areas of drug information. The knowledge of 30 patients was measured using a randomized sequence of three comparable 15-question tests. The first test was given before the slide-tape program was presented, the second test within 24 hours, and the last test one to six months (mean = 2.8 months) later. Scores attained on the first posttest were significantly higher (p less than 0.001) than pretest scores. Learning differences among drug-information-content areas were not evidenced on the first posttest. No significant difference was demonstrated between scores on pretest and last posttest (p = 0.100). However, retention patterns among content areas were found to differ significantly (p less than 0.05). Carefully designed audiovisual programs can impart drug information to patients. Medication counseling should be repeated at appropriate opportunities because patients lose drug knowledge over time.

  20. Vicarious calibration of the solar reflection channels of radiometers onboard satellites through the field campaigns with measurements of refractive index and size distribution of aerosols

    Science.gov (United States)

    Arai, K.

    A comparative study on vicarious calibration for the solar reflection channels of radiometers onboard satellites is made, comparing field campaigns conducted with and without measurements of the refractive index and size distribution of aerosols. In particular, the influence of soot from car exhaust has to be taken into account for test sites near heavily trafficked roads: it is found that a 0.1% inclusion of soot induces around a 10% vicarious calibration error, so it is better to measure the refractive index properly at the test site. It is also found that the vicarious calibration coefficients from field campaigns at two different test sites, Ivanpah (near a road) and Railroad (distant from roads), show an approximately 10% discrepancy. One of the possible causes for the difference is the influence of soot from car exhaust.

  1. The role of vicariance vs. dispersal in shaping genetic patterns in ocellated lizard species in the western Mediterranean

    DEFF Research Database (Denmark)

    Paulo, O. S.; Pinheiro, J.; Miraldo, A.

    2008-01-01

    in the western Mediterranean as exemplified by the distribution of species and subspecies and genetic variation within the ocellated lizard group. To reassess the role of the MSC, partial sequences of three mitochondrial DNA genes (cytochrome b, 12S and 16S ribosomal RNA) and two nuclear genes (beta-fibrinogen and C-mos) from species of the ocellated lizard group were analysed. Three alternative hypotheses were tested: that divergence was initiated (i) by post-MSC vicariance as the basin filled, (ii) when separate populations established either side of the strait by pre-MSC overseas dispersal, and (iii...

  2. Mobile Guide System Using Problem-Solving Strategy for Museum Learning: A Sequential Learning Behavioural Pattern Analysis

    Science.gov (United States)

    Sung, Y.-T.; Hou, H.-T.; Liu, C.-K.; Chang, K.-E.

    2010-01-01

    Mobile devices have been increasingly utilized in informal learning because of their high degree of portability; mobile guide systems (or electronic guidebooks) have also been adopted in museum learning, including those that combine learning strategies and the general audio-visual guide systems. To gain a deeper understanding of the features and…

  3. Expert-led didactic versus self-directed audiovisual training of confocal laser endomicroscopy in evaluation of mucosal barrier defects.

    Science.gov (United States)

    Huynh, Roy; Ip, Matthew; Chang, Jeff; Haifer, Craig; Leong, Rupert W

    2018-01-01

    Confocal laser endomicroscopy (CLE) allows mucosal barrier defects along the intestinal epithelium to be visualized in vivo during endoscopy. Training in CLE interpretation can be achieved didactically or through self-directed learning. This study aimed to compare the effectiveness of expert-led didactic with self-directed audiovisual teaching for training inexperienced analysts on how to recognize mucosal barrier defects on endoscope-based CLE (eCLE). This randomized controlled study involved trainee analysts who were taught how to recognize mucosal barrier defects on eCLE either didactically or through an audiovisual clip. After being trained, they evaluated 6 sets of 30 images. Image evaluation required the trainees to determine whether specific features of barrier dysfunction were present or not. Trainees in the didactic group engaged in peer discussion and received feedback after each set, while this did not happen in the self-directed group. Accuracy, sensitivity, and specificity of both groups were compared. Trainees in the didactic group achieved a higher overall accuracy (87.5% vs 85.0%, P = 0.002) and sensitivity (84.5% vs 80.4%, P = 0.002) compared to trainees in the self-directed group. Interobserver agreement was also higher in the didactic group (k = 0.686, 95% CI 0.680 - 0.691). Expert-led didactic teaching therefore appears more effective than self-directed audiovisual training for teaching inexperienced analysts to recognize mucosal barrier defects on eCLE.
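
    For readers unfamiliar with the reported metrics, the sketch below shows how accuracy, sensitivity, specificity and an interobserver kappa of this kind are typically computed; the ratings are made-up stand-ins, not the study's data:

        import numpy as np
        from sklearn.metrics import confusion_matrix, cohen_kappa_score

        truth = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])   # 1 = barrier defect present (hypothetical)
        rater = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])   # one trainee's ratings of the same images
        tn, fp, fn, tp = confusion_matrix(truth, rater).ravel()

        accuracy    = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(accuracy, sensitivity, specificity)

        # Interobserver agreement between two trainees rating the same images:
        rater2 = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])
        print(cohen_kappa_score(rater, rater2))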

  4. Use of audiovisual media for education and self-management of patients with Chronic Obstructive Pulmonary Disease – COPD

    Directory of Open Access Journals (Sweden)

    Janaína Schäfer

    Full Text Available Introduction: Chronic Obstructive Pulmonary Disease (COPD) is considered a disease with high morbidity and mortality, even though it is preventable and treatable. Objective: To assess the effectiveness of audiovisual educational material on knowledge and self-management in COPD. Methods: Quasi-experimental design with a convenience sample of COPD patients in Pulmonary Rehabilitation (PR) (n = 42), in an advanced stage of the disease, adults of both genders, with low education. All subjects answered a specific questionnaire before and after the audiovisual education session, to assess their acquired knowledge about COPD. Results: Positive results were obtained on the topics of COPD and its consequences, the first symptom identified when the disease worsens, and physical exercise practice. For the second and third symptoms, the education session did not improve this learning, nor did it improve decision-making when facing a worsening of COPD. Conclusion: COPD patients showed reasonable knowledge about the disease, its implications and its symptomatology. Important aspects should be emphasized, such as identifying exacerbations of COPD and deciding how to respond to them.

  5. Pavlovian conditioned approach, extinction, and spontaneous recovery to an audiovisual cue paired with an intravenous heroin infusion.

    Science.gov (United States)

    Peters, Jamie; De Vries, Taco J

    2014-01-01

    Novel stimuli paired with exposure to addictive drugs can elicit approach through Pavlovian learning. While such approach behavior, or sign tracking, has been documented for cocaine and alcohol, it has not been shown to occur with opiate drugs like heroin. Most Pavlovian conditioned approach paradigms use an operandum as the sign, so that sign tracking can be easily automated. We were interested in assessing whether approach behavior occurs to an audiovisual cue paired with an intravenous heroin infusion. If so, would this behavior exhibit characteristics of other Pavlovian conditioned behaviors, such as extinction and spontaneous recovery? Rats were repeatedly exposed to an audiovisual cue, similar to that used in standard self-administration models, along with an intravenous heroin infusion. Sign tracking was measured in an automated fashion by analyzing motion pixels within the cue zone during each cue presentation. We were able to observe significant sign tracking after only five pairings of the conditioned stimulus (CS) with the unconditioned stimulus (US). This behavior rapidly extinguished over 2 days, but exhibited pronounced spontaneous recovery 3 weeks later. We conclude that sign tracking measured by these methods exhibits all the characteristics of a classically conditioned behavior. This model can be used to examine the Pavlovian component of drug memories, alone, or in combination with self-administration methods.
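
    The automated measure described (motion pixels within the cue zone) can be sketched as a simple frame-differencing count; the frames, zone coordinates and threshold below are hypothetical stand-ins for the actual video pipeline:

        import numpy as np

        def motion_pixels(prev_frame, frame, zone, threshold=25):
            # Number of pixels inside `zone` (y0, y1, x0, x1) that changed by more than `threshold`.
            y0, y1, x0, x1 = zone
            diff = np.abs(frame[y0:y1, x0:x1].astype(int) - prev_frame[y0:y1, x0:x1].astype(int))
            return int((diff > threshold).sum())

        # Example with synthetic 8-bit grayscale frames standing in for cue-period video:
        rng = np.random.default_rng(0)
        frames = rng.integers(0, 255, size=(10, 240, 320), dtype=np.uint8)
        cue_zone = (0, 120, 0, 160)
        scores = [motion_pixels(frames[i - 1], frames[i], cue_zone) for i in range(1, len(frames))]
        print(sum(scores))   # summed motion within the cue zone across one cue presentation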

  6. Vicariously touching products through observing others' hand actions increases purchasing intention, and the effect of visual perspective in this process: An fMRI study.

    Science.gov (United States)

    Liu, Yi; Zang, Xuelian; Chen, Lihan; Assumpção, Leonardo; Li, Hong

    2018-01-01

    The growth of online shopping increases consumers' dependence on vicarious sensory experiences, such as observing others touching products in commercials. However, empirical evidence on whether observing others' sensory experiences increases purchasing intention is still scarce. In the present study, participants observed others interacting with products in the first- or third-person perspective in video clips, and their neural responses were measured with functional magnetic resonance imaging (fMRI). We investigated (1) whether and how vicariously touching certain products affected purchasing intention, and the neural correlates of this process; and (2) how visual perspective interacts with vicarious tactility. Vicarious tactile experiences were manipulated by hand actions touching or not touching the products, while the visual perspective was manipulated by showing the hand actions either in first- or third-person perspective. During the fMRI scanning, participants watched the video clips and rated their purchasing intention for each product. The results showed that observing others touching (vs. not touching) the products increased purchasing intention, with vicarious neural responses found in the mirror neuron system (MNS) and lateral occipital complex (LOC). Moreover, stronger neural activity in the MNS was associated with higher purchasing intention. The effects of visual perspective were found in the left superior parietal lobule (SPL), while the interaction of tactility and visual perspective was shown in the precuneus and precuneus-LOC connectivity. The present study provides the first evidence that vicariously touching a given product increases purchasing intention and that neural activity in the bilateral MNS, LOC, left SPL and precuneus is involved in this process. Hum Brain Mapp 39:332-343, 2018. © 2017 Wiley Periodicals, Inc.

  7. [Virtual audiovisual talking heads: articulatory data and models--applications].

    Science.gov (United States)

    Badin, P; Elisei, F; Bailly, G; Savariaux, C; Serrurier, A; Tarabalka, Y

    2007-01-01

    In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, the analysis and the modeling of orofacial articulators such as the jaw, the face and the lips, the tongue or the velum. Therefore, we present in this article experimental techniques that allow characterising the shape and movement of speech articulators (static and dynamic MRI, computed tomodensitometry, electromagnetic articulography, video recording). We then describe the linear models of the various organs that we can elaborate from speaker-specific articulatory data. We show that these models, that exhibit a good geometrical resolution, can be controlled from articulatory data with a good temporal resolution and can thus permit the reconstruction of high quality animation of the articulators. These models, that we have integrated in a virtual talking head, can produce augmented audiovisual speech. In this framework, we have assessed the natural tongue reading capabilities of human subjects by means of audiovisual perception tests. We conclude by suggesting a number of other applications of talking heads.

  8. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    Full Text Available In this article we show how it is possible to use Channel Theory (Barwise and Seligman, 1997) for modeling the process of information extraction realized by audiences of audio-visual contents. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process is carried out for each channel, we propose a method of representing all the informative values an agent can obtain from a content using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation allows reflecting the evolution of the informative items through the evolution of audio-visual content.
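
    A toy illustration of the proposed matrix representation (all labels and values below are hypothetical, not the authors' example): rows are the channels the agent establishes on the content (source classifications), columns are the classifications the agent can discriminate (destination classifications), and each cell holds the informative value obtained through that channel:

        import numpy as np

        sources      = ["dialogue", "soundtrack", "framing"]     # channels established on the content
        destinations = ["characters", "mood", "setting"]         # classifications the agent discriminates
        values = np.array([
            [0.9, 0.2, 0.3],
            [0.1, 0.8, 0.2],
            [0.3, 0.4, 0.7],
        ])
        for i, s in enumerate(sources):
            for j, d in enumerate(destinations):
                print(f"{s} -> {d}: {values[i, j]}")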

  9. Automatic summarization of soccer highlights using audio-visual descriptors.

    Science.gov (United States)

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

    Automatic summarization generation of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many of the approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and to the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlights summarization generation of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that will be further analyzed to determine their relevance and interest. Of special interest in the approach is the use of the audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot a set of low- and mid-level audio-visual descriptors are computed and later combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
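
    As a rough sketch of the general scheme (not the paper's empirical rules; descriptor names, weights and values are hypothetical), each shot receives a relevance score from a weighted combination of audio-visual descriptors, and the summary keeps the top-scoring shots up to a target duration:

        shots = [
            {"id": 0, "duration": 12, "audio_energy": 0.9, "crowd_noise": 0.8, "goal_area_ratio": 0.6},
            {"id": 1, "duration": 20, "audio_energy": 0.3, "crowd_noise": 0.2, "goal_area_ratio": 0.1},
            {"id": 2, "duration": 8,  "audio_energy": 0.7, "crowd_noise": 0.9, "goal_area_ratio": 0.5},
        ]

        def relevance(shot, w_audio=0.5, w_crowd=0.3, w_visual=0.2):
            # Weighted combination of audio and visual descriptors for one shot.
            return (w_audio * shot["audio_energy"]
                    + w_crowd * shot["crowd_noise"]
                    + w_visual * shot["goal_area_ratio"])

        def summarize(shots, max_seconds=25):
            # Greedily keep the most relevant shots until the summary length is reached.
            summary, used = [], 0
            for shot in sorted(shots, key=relevance, reverse=True):
                if used + shot["duration"] <= max_seconds:
                    summary.append(shot["id"])
                    used += shot["duration"]
            return sorted(summary)

        print(summarize(shots))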

  10. Audiovisual integration in children listening to spectrally degraded speech.

    Science.gov (United States)

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
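
    The adaptive procedure described (varying the number of vocoder bands to converge near 79% correct) resembles a 3-down/1-up staircase; the sketch below simulates one with a toy psychometric function rather than real listener data:

        import random

        def staircase_threshold(p_correct, start_bands=16, n_trials=80, seed=1):
            # 3-down/1-up staircase on the number of vocoder bands (fewer bands = harder).
            # Converges near 79.4% correct; threshold = mean of the last few reversal points.
            random.seed(seed)
            bands, streak, last_step, reversals = start_bands, 0, 0, []
            for _ in range(n_trials):
                if random.random() < p_correct(bands):       # simulated trial outcome
                    streak += 1
                    step = -1 if streak == 3 else 0           # 3 correct in a row -> fewer bands
                    if streak == 3:
                        streak = 0
                else:
                    streak, step = 0, +1                      # any error -> more bands
                if step != 0:
                    if last_step != 0 and step != last_step:  # direction change = reversal
                        reversals.append(bands)
                    bands = max(2, bands + step)
                    last_step = step
            tail = reversals[-6:] or [bands]
            return sum(tail) / len(tail)

        # Toy psychometric function: accuracy rises with the number of bands.
        print(staircase_threshold(lambda b: min(0.95, 0.35 + 0.05 * b)))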

  11. O potencial da imagem televisiva na sociedade da cultura audiovisual

    Directory of Open Access Journals (Sweden)

    Juliana L. M. F. Sabino

    Full Text Available Abstract: Audiovisual culture is steadily gaining ground, and technological advances contribute dramatically to its development and reach. This study therefore takes audiovisual culture as its theme, with the research objective of discussing the importance of images on television. To that end, we selected an example of a television advertisement observed in 2006, which prompted a critical reflection on the importance of hybrid languages on television, illustrating how they interfere in the production of meaning in the televised message. As a theoretical and methodological framework, we use Lúcia Santaella's conceptions of image and hybrid languages. From the analysis of the advertisement proposed here, we conclude that its constitution is more iconic than verbal, but that it falls within a dialogical conception, being constituted, therefore, through a creative process of meaning production.

  12. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

    Full Text Available Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies, etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study will focus on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach will evidence the polysemiotic nature of the translation process. In Audiovisual Translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles which did not successfully convey the compliment expressed in the source text; we also analyse the reasons for this, namely differences in register, Culture Specific Items and repetitions. These differences lead to a different portrayal/identity/perception of the main character in the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  13. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

    Full Text Available Second language acquisition cannot take place without having exposure to language input. With regard to this, the present research aimed at providing empirical evidence about the low and the upper-intermediate language learners’ preferred type of audiovisual programs and language proficiency development outside the classroom. To this end, 60 language learners (30 low level and 30 upper-intermediate level) were asked to have exposure to their preferred types of audiovisual program(s) outside the classroom and keep a diary of the amount and the type of exposure. The obtained data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news more. To find out which language proficiency level could improve its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  14. Valores occidentales en el discurso publicitario audiovisual argentino

    Directory of Open Access Journals (Sweden)

    Isidoro Arroyo Almaraz

    2012-04-01

    Full Text Available This article develops an analysis of Argentine audiovisual advertising discourse. It aims to identify the social values it communicates most prominently and their possible connection with the values characteristic of postmodern Western society. To this end, the frequency of appearance of social values was analysed in a sample of 28 advertisements from different advertisers. The "Seven/Seven" model (seven deadly sins and seven cardinal virtues) was used as the analytical framework, on the view that traditional values are heirs of the virtues and sins that advertising uses to address consumption-related needs. Argentine audiovisual advertising promotes and encourages ideas related to the virtues and sins through the behaviour of the characters in audiovisual narratives. The results show a higher frequency of social values characterized as sins than of social values characterized as virtues, since advertising turns sins into virtues that energize desire and favour consumption, reinforcing brand learning. Finally, on the basis of the results obtained, the article reflects on the social uses and reach of advertising discourse.

  15. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

    Full Text Available Television can be a meeting place for sound and image in a device that makes the rhythm-image possible, extending Gilles Deleuze's theory of the image, originally proposed for cinema. It would combine, simultaneously, characteristics of the movement-image and the time-image, which take shape in the construction of postmodern images, in audiovisual products that are not necessarily narrative yet are popular. Films, video games, music videos and vignettes in which the music drives the images allow a more sensory reading. The audiovisual as music-image thus opens up a new form of perception beyond the traditional textual one, born of the interaction between rhythm, text and device. The time of moving images in the audiovisual is inevitably and primarily tied to sound. They add non-narrative possibilities that unfold, most of the time, according to the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Sem Destino (1969), Assassinos por Natureza (1994) and Corra Lola Corra (1998).

  16. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  17. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audiovisual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  18. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
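
    To make the manipulation concrete, the sketch below generates the same slow temporal modulation as a smooth sine wave versus an abrupt square wave, i.e. the contrast between gradual and transient audiovisual changes that the study manipulates (frame rate and modulation rate are illustrative values, not the study's parameters):

        import numpy as np

        rate_hz, mod_hz, dur_s = 60, 1.2, 2.0            # display refresh, modulation rate, duration
        t = np.arange(0, dur_s, 1.0 / rate_hz)
        sine_luminance   = 0.5 + 0.5 * np.sin(2 * np.pi * mod_hz * t)           # gradual changes
        square_luminance = (np.sin(2 * np.pi * mod_hz * t) >= 0).astype(float)  # abrupt transients
        # A synchronized auditory amplitude envelope would follow the same time course.
        print(sine_luminance[:5], square_luminance[:5])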

  19. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  20. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  1. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation was significantly different between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  2. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alterative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  3. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  4. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
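
    For reference, the capacity coefficient of Townsend and Nozawa (1995) can be estimated from response-time distributions as C(t) = H_AV(t) / (H_A(t) + H_V(t)), where H(t) = -log S(t) is the integrated hazard computed from the empirical survivor function; the sketch below uses simulated RTs, not the study's data:

        import numpy as np

        def integrated_hazard(rts, t):
            # H(t) = -log S(t), with S(t) the empirical survivor function at time t.
            survivor = np.mean(np.asarray(rts) > t)
            return -np.log(max(survivor, 1e-6))   # clip to avoid log(0)

        def capacity(rt_av, rt_a, rt_v, t):
            denom = integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t)
            return integrated_hazard(rt_av, t) / max(denom, 1e-6)

        rng = np.random.default_rng(1)
        rt_a  = rng.normal(550, 60, 200)   # auditory-only RTs in ms (simulated)
        rt_v  = rng.normal(600, 60, 200)   # visual-only RTs
        rt_av = rng.normal(500, 60, 200)   # audiovisual RTs
        print(capacity(rt_av, rt_a, rt_v, t=550))   # C(t) > 1 suggests efficient integration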

  5. An analysis of dinosaurian biogeography: evidence for the existence of vicariance and dispersal patterns caused by geological events.

    Science.gov (United States)

    Upchurch, Paul; Hunn, Craig A; Norman, David B

    2002-03-22

    As the supercontinent Pangaea fragmented during the Mesozoic era, dinosaur faunas were divided into isolated populations living on separate continents. It has been predicted, therefore, that dinosaur distributions should display a branching ('vicariance') pattern that corresponds with the sequence and timing of continental break-up. Several recent studies, however, minimize the importance of plate tectonics and instead suggest that dispersal and regional extinction were the main controls on dinosaur biogeography. Here, in order to test the vicariance hypothesis, we apply a cladistic biogeographical method to a large dataset on dinosaur relationships and distributions. We also introduce a methodological refinement termed 'time-slicing', which is shown to be a key step in the detection of ancient biogeographical patterns. These analyses reveal biogeographical patterns that closely correlate with palaeogeography. The results provide the first statistically robust evidence that, from Middle Jurassic to mid-Cretaceous times, tectonic events had a major role in determining where and when particular dinosaur groups flourished. The fact that evolutionary trees for extinct organisms preserve such distribution patterns opens up a new and fruitful direction for palaeobiogeographical research.

  6. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

    The authors hypothesized that an audiovisual slide presentation that provided treatment information regarding the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire, completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ² tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group was comprised of 20 men and 5 women; the written informed group was comprised of 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and potential trismus. The audiovisual informed group also had lower self-reported anxiety scores than the control group 1 week after surgery. An audiovisual slide presentation could thus improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar. Copyright © 2015

  7. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    Full Text Available In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools necessary for the translator to perform his/her task satisfactorily as well as pointing to future perspectives, we make a list of sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  8. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    It has been suggested that Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing...

  10. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was comparable to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. This article is protected...

  11. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual mounting narrative as a basis for the interactive documentary film", which defends the thesis that there are interactive features in the audio and video editing of a film, with the montage itself acting as an agent of interactivity. The search for interactive audiovisual formats is present in international research, but mostly from a technological perspective. The paper therefore proposes possible formats for interactive audiovisual production in film, video, television, computers and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  12. La comunicación corporativa audiovisual: propuesta metodológica de estudio

    OpenAIRE

    Lorán Herrero, María Dolores

    2016-01-01

    This research revolves around two concepts, Audiovisual Communication and Corporate Communication, disciplines that affect organizations and that are becoming articulated in such a way as to give rise to Audiovisual Corporate Communication, the concept proposed in this thesis. A classification and definition of the formats that organizations use for their communication is provided. The aim is to be able to analyse any corporate audiovisual document in order to verify whether the l...

  13. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    Full Text Available The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  14. The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation

    Science.gov (United States)

    Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D.

    2018-01-01

    Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, where our participants improvised onomatopoeias from noisy moving objects in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. We implemented the classifier with a list of cross-linguistic onomatopoeias: simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias. PMID:29561853
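
    A toy version of the validation step (with hypothetical feature coding, not the authors' acoustic or phonetic features): a classifier trained on simple phoneme-class proportions that predicts the movement type of an onomatopoeia:

        from sklearn.linear_model import LogisticRegression

        # Features per word: [fraction of fricatives, fraction of plosives, fraction of nasals/liquids]
        X = [
            [0.8, 0.1, 0.1], [0.7, 0.2, 0.1],   # "shhh"-like words -> slide
            [0.1, 0.8, 0.1], [0.2, 0.7, 0.1],   # "pum"/"tak"-like words -> hit
            [0.1, 0.1, 0.8], [0.1, 0.2, 0.7],   # "ring"/"ning"-like words -> ring
        ]
        y = ["slide", "slide", "hit", "hit", "ring", "ring"]

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print(clf.predict([[0.75, 0.15, 0.10]]))   # expected: ['slide']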

  15. Classifying Schizotypy Using an Audiovisual Emotion Perception Test and Scalp Electroencephalography

    Directory of Open Access Journals (Sweden)

    Ji Woon Jeong

    2017-09-01

    Schizotypy refers to the personality trait of experiencing “psychotic” symptoms and can be regarded as a predisposition toward schizophrenia-spectrum psychopathology (Raine, 1991). Cumulative evidence has revealed that individuals with schizotypy, as well as schizophrenia patients, have emotional processing deficits. In the present study, we investigated multimodal emotion perception in schizotypy and applied machine learning to determine whether a schizotypy group (ST) is distinguishable from a control group (NC) using electroencephalogram (EEG) signals. Forty-five subjects (30 ST and 15 NC) were divided into two groups based on their scores on the Schizotypal Personality Questionnaire. All participants performed an audiovisual emotion perception test while EEG was recorded. After the preprocessing stage, discriminatory features were extracted using a mean subsampling technique. For an accurate estimation of covariance matrices, a shrinkage linear discriminant algorithm was used. The classification attained over 98% accuracy with no false positives. This method may have important clinical implications for identifying those in the general population who have a subtle risk for schizotypy and may require early intervention.
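
    The shrinkage linear discriminant step mentioned above can be approximated with scikit-learn's LinearDiscriminantAnalysis using automatic (Ledoit-Wolf) shrinkage. In the sketch below, random arrays stand in for the preprocessed, subsampled EEG features, so the near-perfect accuracy reported in the study should not be expected from this toy data.

```python
# Minimal sketch of shrinkage LDA for two-class (ST vs NC) classification of
# EEG-derived features. Data here are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_features = 200, 64           # e.g. mean-subsampled channel features
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)    # 0 = control (NC), 1 = schizotypy (ST)

# The 'lsqr' solver with shrinkage='auto' applies Ledoit-Wolf covariance
# regularization, which stabilizes the estimate when features rival or
# outnumber the available trials.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```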

  16. The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.

    Science.gov (United States)

    Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A

    2018-01-01

    Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, where our participants improvised onomatopoeias from noisy moving objects in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. We implemented the classifier with a list of cross-linguistic onomatopoeias: simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.

  17. The Digital Turn in the French Audiovisual Model

    Directory of Open Access Journals (Sweden)

    Olivier Alexandre

    2016-07-01

    This article deals with the digital turn in the French audiovisual model. An organizational and legal system has evolved with changing technology and economic forces over the past thirty years. During the 1980s, the high-income television industry served as the key element compensating for a value economy shifting from movie theaters to domestic screens and personal devices. However, growing competition in the TV sector and the rise of tech companies have set a disruption process in motion. A challenged French conception of copyright, the weakened position of TV channels, and the scaling of the content market all now call into question the sustainability of the French model in the digital era.

  18. A simple and efficient method to enhance audiovisual binding tendencies

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2017-04-01

    Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain’s tendency to bind in spatial perception is plastic, (2) it can change following brief exposure to simple audiovisual stimuli, and (3) exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.

  19. Handicrafts production: documentation and audiovisual dissemination as sociocultural appreciation technology

    Directory of Open Access Journals (Sweden)

    Luciana Alvarenga

    2016-01-01

    The paper presents the results of a research, technology, and innovation project in the creative economy sector, conducted from January 2014 to January 2015, which aimed to document and publicize the artisans and handicraft production of Vila de Itaúnas, ES, Brasil. The process began with initial conversations, followed by the planning and running of participatory workshops on documentation and audiovisual dissemination around handicraft production and its relation to biodiversity and local culture. The initial objective was to create spaces for the expression and diffusion of knowledge among and for the local population, while also reaching regional, state, and national audiences. Throughout the process, it was found that the participatory workshops and the collective production of a website for publicizing practices and products contributed to the development and sociocultural recognition of artisans and crafts in the region.

  20. Moedor de Pixels: interfaces, interações e audiovisual

    OpenAIRE

    Vieira, Jackson Marinho

    2016-01-01

    Moedor de Pixels: interfaces, interações e audiovisual is a theoretical and practical investigation of artworks that employ audiovisual and computational media in contexts where audience participation and interaction become the center of the aesthetic experience. The study suggests that video art involves new procedures in video technology that gave impetus to more extensive explorations in the field of interactive media art. The research also highlights how the inclusion of digital media provides...