Sample records for audiovisual aids

  1. Audio-Visual Aids: Historians in Blunderland. (United States)

    Decarie, Graeme


    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  2. Audio-Visual Aids in Universities (United States)

    Douglas, Jackie


    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  3. Uses and Abuses of Audio-Visual Aids in Reading. (United States)

    Eggers, Edwin H.

    Audiovisual aids are properly used in reading when they "turn students on," and they are abused when they fail to do so or when they actually "turn students off." General guidelines one could use in sorting usable from unusable aids are (1) Has the teacher saved time by using an audiovisual aid? (2) Is the aid appropriate to the sophistication…

  4. Proper Use of Audio-Visual Aids: Essential for Educators. (United States)

    Dejardin, Conrad


    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  5. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges. (United States)

    Al-Sharhan, Jamal A.


    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  6. Audio/Visual Aids: A Study of the Effect of Audio/Visual Aids on the Comprehension Recall of Students. (United States)

    Bavaro, Sandra

    A study investigated whether the use of audio/visual aids had an effect upon comprehension recall. Thirty fourth-grade students from an urban public school were randomly divided into two equal samples of 15. One group was given a story to read (print only), while the other group viewed a filmstrip of the same story, thereby utilizing audio/visual…

  7. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson


    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  8. Your Most Essential Audiovisual Aid--Yourself! (United States)

    Hamp-Lyons, Elizabeth


    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…




  10. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids). (United States)

    Eduplan Informa, 1971


    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  11. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm (United States)

    Uys, P. G.


    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  12. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners. (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi


    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials. PMID:25914939

  13. Audio-visual training-aid for speechreading

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich; Gebert, H.


    employment of computer‐based communication aids for hearing‐impaired, deaf and deaf‐blind people [6]. This paper presents the complete system that is composed of a 3D‐facial animation with synchronized speech synthesis, a natural language dialogue unit and a student‐teacher‐training module. Due to the very...... is important for hard‐of‐hearing students and acoustic reverberation effects of the prospective room for people with low residual hearing. Speechreading requires thorough understanding of spoken language but first and foremost, also of the situational context and the pragmatic meaning of an utterance...... without the need of fundamental knowledge of other words. The present version of the training aid can be used for the training of speechreading in English, as a consequence of the integrated English language models for facial animation and speech synthesis. Nevertheless, the training aid is prepared...

  14. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers. (United States)

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  15. Audio-Visual Aid in Teaching "Fatty Liver" (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha


    Use of audio-visual tools to aid medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  16. Audio-visual aid in teaching "fatty liver". (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha


    Use of audio-visual tools to aid medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various concepts of the topic while keeping in view Mayer's and Ellaway's guidelines for multimedia presentation. A pre-post test study of subject knowledge was conducted with 100 students, with the video shown as the intervention. A retrospective pre-test was conducted as a survey that inquired about students' understanding of the key concepts of the topic, and feedback on our video was collected. Students performed significantly better in the post-test (mean score 8.52 vs. 5.45 in the pre-test), responded positively in the retrospective pre-test, and gave positive feedback on our video presentation. Well-designed multimedia tools can aid cognitive processing and enhance working memory capacity, as shown in our study. In times when "smart" device penetration is high, information and communication tools in medical education, which can act as an essential aid and not as a replacement for traditional curriculums, can be beneficial to the students. © 2015 by The International Union of Biochemistry and Molecular Biology, 44:241-245, 2016. PMID:26625860

  17. The Effectiveness of Mnemonic Audio-Visual Aids in Teaching Content Words to EFL Students at a Turkish University


    Kılınç, A Reha


    Ankara : Institute of Economics and Social Sciences, Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaves 63-67). This experimental study aimed at investigating the effects of mnemonic audio-visual aids on recognition and recall of vocabulary items in comparison to a dictionary-using control group. The study was conducted at the Middle East Technical University Department of Basic English. The participants were 64 beginner and u...

  18. Audio-Visual Aids in Language Teaching, with Special Reference to English as a Foreign Language. Specialised Bibliography B9, June 1977. (United States)

    British Council, London (England). English-Teaching Information Centre.

    This bibliography lists 16 books, 51 articles, and 2 films, all dealing with the use of visual and audiovisual aids in the modern language classroom, particularly in the area of English as a second language. Most of the material cited has been published since 1970. (AM)

  19. Twenty-Fifth Annual Audio-Visual Aids Conference, Wednesday 9th to Friday 11th July 1975, Whitelands College, Putney SW15. Conference Preprints. (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    Preprints of papers to be presented at the 25th annual Audio-Visual Aids Conference are collected along with the conference program. Papers include official messages, a review of the conference's history, and presentations on photography in education, using school broadcasts, flexibility in the use of television, the "communications generation,"…

  20. Audiovisual Resources. (United States)

    Beasley, Augie E.; And Others


    Six articles on the use of audiovisual materials in the school library media center cover how to develop an audiovisual production center; audiovisual forms; a checklist for effective video/16mm use in the classroom; slides in learning; hazards of videotaping in the library; and putting audiovisuals on the shelf. (EJS)

  1. The Use of Audio-Visual Aids in Adult Education in Wales (United States)

    Powell, Anthony


    A survey of the provision of visual aids for use by tutors in Wales shows that the supply of equipment is not always adequately arranged by education authorities, and that tutors are often not sufficiently trained in the use of aids. (Author/AG)


    Directory of Open Access Journals (Sweden)

    Zahra Sadat NOORI


    Full Text Available This study aimed to examine the effect of using audio-visual aids and pictures on foreign language vocabulary learning of individuals with mild intellectual disability. Method: To this end, a comparison group quasi-experimental study was conducted along with a pre-test and a post-test. The participants were 16 individuals with mild intellectual disability living in a center for mentally disabled individuals in Dezfoul, Iran. They were all male individuals with the age range of 20 to 30. Their mother tongue was Persian, and they did not have any English background. In order to ensure that all participants were within the same IQ level, a standard IQ test, i.e. Colored Progressive Matrices test, was run. Afterwards, the participants were randomly assigned to two experimental groups; one group received the instruction through audio-visual aids, while the other group was taught through pictures. The treatment lasted for four weeks, 20 sessions on aggregate. A total number of 60 English words selected from the English package named 'The Smart Child' were taught. After the treatment, the participants took the posttest in which the researchers randomly selected 40 words from among the 60 target words. Results: The results of Mann-Whitney U-test indicated that using audio-visual aids was more effective than pictures in foreign language vocabulary learning of individuals with mild intellectual disability. Conclusions: It can be concluded that the use of audio-visual aids can be more effective than pictures in foreign language vocabulary learning of individuals with mild intellectual disability.
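As a side note on the statistic used in the record above, the Mann-Whitney U comparison of two independent groups can be sketched in a few lines of pure Python. The scores below are hypothetical, invented purely for illustration; only the U statistic itself follows the standard definition (count of cross-group pairs won, with ties scored 0.5).

```python
def mann_whitney_u(a, b):
    """U statistic for group a: the number of (a_i, b_j) pairs with a_i > b_j,
    counting ties as 0.5. The complementary statistic is len(a)*len(b) - U."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical post-test vocabulary scores for two small groups.
audio_visual = [34, 31, 29, 36, 33, 30, 35, 32]
pictures = [27, 30, 25, 28, 31, 26, 29, 24]

u = mann_whitney_u(audio_visual, pictures)
print(u)  # 59.5 out of a maximum of 64, well above the null expectation of 32
```

A U far from len(a)*len(b)/2 in either direction indicates a systematic difference between the groups; the significance test itself compares U against its null distribution (or a normal approximation).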



    Zahra Sadat NOORI; FARVARDIN Mohammad Taghi


    This study aimed to examine the effect of using audio-visual aids and pictures on foreign language vocabulary learning of individuals with mild intellectual disability. Method: To this end, a comparison group quasi-experimental study was conducted along with a pre-test and a post-test. The participants were 16 individuals with mild intellectual disability living in a center for mentally disabled individuals in Dezfoul, Iran. They were all male individuals with the age range of 20 to 30. Th...

  4. Utilizing New Audiovisual Resources (United States)

    Miller, Glen


    The University of Arizona's Agriculture Department has found that video cassette systems and 8 mm films are excellent audiovisual aids to classroom instruction at the high school level in small gasoline engines. Each system is capable of improving the instructional process for motor skill development. (MW)

  5. Guide to the Production and Use of Audio-Visual Aids in Library and Information Science Teaching. (United States)

    Thompson, Anthony H.

    Designed particularly for use in developing countries, this guide provides information to help teachers of librarianship and information science make their own simple and effective audiovisual (AV) materials. It is noted that all illustrations in the guide may be duplicated or adapted as desired. Sections cover: (1) the advantages of using AV…

  6. Will Primary Grade Title I Students Demonstrate Greater Achievement in Reading With the Use of Audio-Visual Aids Than Those Who Haven't Utilized the Same Media? (United States)

    Skobo, Kathleen Ward

    Forty-two first, second, and third grade students participated in a 15-week study to determine the effects of audiovisual aids on reading achievement. The students were pretested and posttested using the Comprehensive Test of Basic Skills. Each group received 40 minutes of small group and individual instruction each day. The experimental group…

  7. Audio-visual speechreading in a group of hearing aid users. The effects of onset age, handicap age, and degree of hearing loss. (United States)

    Tillberg, I; Rönnberg, J; Svärd, I; Ahlner, B


    Speechreading ability was investigated among hearing aid users with different times of onset and different degrees of hearing loss. Audio-visual and visual-only performance were assessed. One group of subjects had been hearing-impaired for a large part of their lives, with impairments appearing early in life. The other group had been impaired for fewer years, with impairments appearing later in life. Differences between the groups were obtained. There was no significant difference on the audio-visual test between the groups, despite the fact that the early-onset group scored very poorly auditorily. However, the early-onset group performed significantly better on the visual test. It was concluded that visual information constituted the dominant coding strategy for the early-onset group. An interpretation chiefly in terms of early onset may be the most appropriate, since dB loss variations as such are not related to speechreading skill. PMID:8976000

  8. Audiovisual Review (United States)

    Physiology Teacher, 1976


    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  9. Improvement of Sports Aerobics Teaching by Electrical Audio-Visual Aids (运用电教手段优化竞技健美操专业教学)

    Institute of Scientific and Technical Information of China (English)



    This paper discusses the improvements that teaching with electrical audio-visual aids brings to teaching methods, the teaching process, course content, teaching purpose, and teaching results in Sports Aerobics instruction. It provides a basis for the rational use of electrical audio-visual aids in Sports Aerobics teaching.

  10. Student performance and their perception of a patient-oriented problem-solving approach with audiovisual aids in teaching pathology: a comparison with traditional lectures

    Directory of Open Access Journals (Sweden)

    Arjun Singh


    Full Text Available Arjun Singh, Department of Pathology, Sri Venkateshwara Medical College Hospital and Research Centre, Pondicherry, India. Purpose: We use different methods to train our undergraduates. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that imparts knowledge, enhances intrinsic motivation, promotes self-learning, encourages clinical reasoning, and develops long-lasting memory. The aim of this study was to develop POPS in teaching pathology, assess its effectiveness, and assess students' preference for POPS over didactic lectures. Method: One hundred fifty second-year MBBS students were divided into two groups: A and B. Group A was taught by POPS while group B was taught by traditional lectures. Pre- and post-test numerical scores of both groups were evaluated and compared. Students then completed a self-structured feedback questionnaire for analysis. Results: The mean (SD) difference in pre- and post-test scores of groups A and B was 15.98 (3.18) and 7.79 (2.52), respectively. The significance of the difference between the two teaching methods was z = 16.62 (P < 0.0001), as determined by the z-test. Improvement in post-test performance of group A was significantly greater than that of group B, demonstrating the effectiveness of POPS. Students responded that POPS facilitates self-learning, helps in understanding topics, creates interest, and is a scientific approach to teaching. Feedback on POPS was strongly positive in 57.52% of students, moderate in 35.67%, and negative in only 6.81%, showing that 93.19% of students favored POPS over simple lectures. Conclusion: It is not feasible to enforce the PBL method of teaching throughout the entire curriculum; however, POPS can be incorporated along with audiovisual aids to break the monotony of didactic lectures and as an alternative to PBL. Keywords: medical education, problem-solving exercise, problem-based learning
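The two-sample z statistic in the record above can be approximately reproduced from the abstract's summary statistics. The sketch below is illustrative only: the group sizes (assumed to be 75 each, splitting the 150 students evenly) are an assumption not stated in the abstract, which is likely why the result differs slightly from the reported 16.62.

```python
import math

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """Two-sample z statistic for the difference between two mean gains."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)  # standard error of the difference
    return (m1 - m2) / se

# Mean (SD) pre/post-test gains from the abstract; n = 75 per group is assumed.
z = two_sample_z(15.98, 3.18, 75, 7.79, 2.52, 75)
print(round(z, 2))  # roughly 17.5 under the assumed group sizes
```

Either way, a z this large corresponds to P < 0.0001, consistent with the abstract's conclusion that the POPS group improved significantly more.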

  11. Historia audiovisual para una sociedad audiovisual (Audiovisual History for an Audiovisual Society)

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz


    Full Text Available This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media has progressively gained greater protagonism. We analyze specific cases of films and historical documentaries and we assess the difficulties faced by historians to understand the keys of audiovisual language and by filmmakers to understand and incorporate history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games.

  12. The Practical Audio-Visual Handbook for Teachers. (United States)

    Scuorzo, Herbert E.

    The use of audio/visual media as an aid to instruction is a common practice in today's classroom. Most teachers, however, have little or no formal training in this field and rarely a knowledgeable coordinator to help them. "The Practical Audio-Visual Handbook for Teachers" discusses the types and mechanics of many of these media forms and proposes…

  13. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter


    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how they can be used in specific (social, pedagogical, etc.) contexts and what are their potential interest for target groups (communities, professionals, students, researchers, etc.).This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical

  14. AIDS (United States)

    Human immunodeficiency virus (HIV) is the virus that causes AIDS. When a person becomes infected with HIV, the ...

  15. Application and Design of Audio-Visual Aids in Teaching Cariology, Endodontology and Operative Dentistry to Non-Stomatology Students (直观教学法在非口腔医学专业医学生牙体牙髓病教学中的设计与应用)

    Institute of Scientific and Technical Information of China (English)

    倪雪岩; 吕亚林; 曹莹; 臧滔; 董坚; 丁芳; 李若萱


    Objective: To evaluate the effects of audio-visual aids in teaching cariology, endodontology and operative dentistry to non-stomatology students. Methods: A total of 77 students from the 2010 and 2011 matriculating classes of the Preventive Medicine Department of Capital Medical University were selected. Diversified audio-visual aids were used comprehensively in teaching. A theory examination and a follow-up survey were carried out and analyzed to obtain feedback on the combined teaching methods. Results: The students gained good theoretical knowledge of endodontics (mean score 24.2 ± 1.1). The questionnaire survey showed that 89.6% (69/77) of students had a positive attitude towards the improved teaching method, and 90.9% (70/77) of the students taught with audio-visual aids showed good learning ability. Conclusions: Application of audio-visual aids in stomatology teaching increases interest in learning and improves the teaching effect. However, the integration should be carefully prepared, in combination with cross-teaching and elicitation pedagogy, in order to accomplish optimal teaching results.

  16. Blacklist Established in Chinese Audiovisual Market

    Institute of Scientific and Technical Information of China (English)


    The Chinese audiovisual market is to impose a ban on audiovisual product dealers whose licenses have been revoked for violating the law. This ban will prohibit them from dealing in audiovisual products for ten years. Their names are to be included on a blacklist made known to the public.


    United Nations Educational, Scientific, and Cultural Organization, Paris (France).


  18. Rapid recalibration to audiovisual asynchrony. (United States)

    Van der Burg, Erik; Alais, David; Cass, John


    To combine information from different sensory modalities, the brain must deal with considerable temporal uncertainty. In natural environments, an external event may produce simultaneous auditory and visual signals yet they will invariably activate the brain asynchronously due to different propagation speeds for light and sound, and different neural response latencies once the signals reach the receptors. One strategy the brain uses to deal with audiovisual timing variation is to adapt to a prevailing asynchrony to help realign the signals. Here, using psychophysical methods in human subjects, we investigate audiovisual recalibration and show that it takes place extremely rapidly without explicit periods of adaptation. Our results demonstrate that exposure to a single, brief asynchrony is sufficient to produce strong recalibration effects. Recalibration occurs regardless of whether the preceding trial was perceived as synchronous, and regardless of whether a response was required. We propose that this rapid recalibration is a fast-acting sensory effect, rather than a higher-level cognitive process. An account in terms of response bias is unlikely due to a strong asymmetry whereby stimuli with vision leading produce bigger recalibrations than audition leading. A fast-acting recalibration mechanism provides a means for overcoming inevitable audiovisual timing variation and serves to rapidly realign signals at onset to maximize the perceptual benefits of audiovisual integration. PMID:24027264

  19. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    B. Huurnink


    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the histo

  20. Audiovisual Instruments in Ethnographic Research


    Carvalho, Clara


    In 1973, the most renowned researchers in Visual Anthropology met at the ninth International Congress of Anthropology and Sociology to discuss the role of film and photography in ethnographic research and to systematize the almost century-old experiences of bringing together description, ethnography, photography and film. Opening the meeting, Dean Margaret Mead enthusiastically defended the use of audiovisual instruments in research. Considering that Anthropology explicitly ...

  1. Bilingualism affects audiovisual phoneme identification


    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia


    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience—i.e., the exposure to a double phonological code during childhood—affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identifi...

  2. Hysteresis in Audiovisual Synchrony Perception


    Martin, Jean-Remy; Kösem, Anne; van Wassenhove, Virginie


    The effect of stimulation history on the perception of a current event can yield two opposite effects, namely: adaptation or hysteresis. The perception of the current event thus goes in the opposite or in the same direction as prior stimulation, respectively. In audiovisual (AV) synchrony perception, adaptation effects have primarily been reported. Here, we tested if perceptual hysteresis could also be observed over adaptation in AV timing perception by varying different experimental conditio...

  3. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger


    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception and ...

  4. Cinco discursos da digitalidade audiovisual (Five Discourses of Audiovisual Digitality)

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos


    Full Text Available Michel Foucault teaches that all systematic speech, including speech that claims to be "neutral" or "a disinterested, objective view of what happens," is in fact a mechanism for articulating knowledge and, in turn, for forming power. The emergence of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of declarations by filmmakers, essays by academics, and predictions by media demiurges.

  5. Audio-visual gender recognition (United States)

    Liu, Ming; Xu, Xun; Huang, Thomas S.


    Combining different modalities for a pattern recognition task is a very promising field. Humans routinely fuse information from different modalities to recognize objects, perform inference, and so on. Audio-visual gender recognition is one of the most common tasks in human social communication: humans can identify gender by facial appearance, by speech, and also by body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multi-modal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition, exploring the improvement gained by combining different modalities.

  6. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin


    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  7. Categorization of Natural Dynamic Audiovisual Scenes


    Rummukainen, Olli; Radun, Jenni; Virtanen, Toni; Pulkki, Ville


    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectori...

  8. Rapid, generalized adaptation to asynchronous audiovisual speech


    Van der Burg, Erik; Goodbourn, Patrick T.


    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an...

  9. Neurocognitive mechanisms of audiovisual speech perception


    Ojanen, Ville


    Face-to-face communication involves both hearing and seeing speech. Heard and seen speech inputs interact during audiovisual speech perception. Specifically, seeing the speaker's mouth and lip movements improves identification of acoustic speech stimuli, especially in noisy conditions. In addition, visual speech may even change the auditory percept. This occurs when mismatching auditory speech is dubbed onto visual articulation. Research on the brain mechanisms of audiovisual perception a...

  10. Positive Emotion Facilitates Audiovisual Binding. (United States)

    Kitamura, Miho S; Watanabe, Katsumi; Kitagawa, Norimichi


    It has been shown that positive emotions can facilitate integrative and associative information processing in cognitive functions. The present study examined whether emotions in observers can also enhance perceptual integrative processes. We tested a total of 125 participants to reveal the effects of emotional states and traits in observers on the multisensory binding between auditory and visual signals. Participants in Experiment 1 observed two identical visual disks moving toward each other, coinciding, and moving away, presented with a brief sound. We found that for participants with lower depressive tendency, induced happy moods increased the width of the temporal binding window of the sound-induced bounce percept in the stream/bounce display, while no effect was found for participants with higher depressive tendency. In contrast, no effect of mood was observed for a simple audiovisual simultaneity discrimination task in Experiment 2. These results provide the first empirical evidence of a dependency of multisensory binding upon emotional states and traits, revealing that positive emotions can facilitate multisensory binding processes at a perceptual level. PMID:26834585

  11. Evaluation of an audiovisual-FM system: speechreading performance as a function of distance. (United States)

    Gagné, Jean-Pierre; Charest, Monique; Le Monday, K; Desbiens, C


    A research program was undertaken to evaluate the efficacy of an audiovisual-FM system as a speechreading aid. The present study investigated the effects of the distance between the talker and the speechreader on a visual-speech perception task. Sentences were recorded simultaneously with a conventional Hi8 mm video camera, and with the microcamera of an audiovisual-FM system. The recordings were obtained from two talkers at three different distances: 1.83 m, 3.66 m, and 7.32 m. Sixteen subjects completed a visual-keyword recognition task. The main results of the investigation were as follows: For the recordings obtained with the conventional video camera, there was a significant decrease in speechreading performance as the distance between the talker and the camera increased. For the recordings obtained with the microcamera of the audiovisual-FM system, there were no differences in speechreading as a function of the test distances. The findings of the investigation confirm that in a classroom setting the use of an audiovisual-FM system may constitute an effective way of overcoming the deleterious effects of distance on speechreading performance. PMID:16717020

  12. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention


    Yuanqing Li; Jinyi Long; Biao Huang; Tianyou Yu; Wei Wu; Peijun Li; Fang Fang; Pei Sun


    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how thes...

  13. El tratamiento documental del mensaje audiovisual Documentary treatment of the audio-visual message

    Directory of Open Access Journals (Sweden)

    Blanca Rodríguez Bravo


    Full Text Available Peculiarities of the audio-visual document and the treatment it undergoes in TV broadcasting stations are analyzed. The particular features of images condition their analysis and recovery; this paper establishes stages and proceedings for the representation of audio-visual messages with a view to their re-usability. Also, some considerations are made about the automatic processing of video and the changes introduced by digital TV.

  14. Longevity and Depreciation of Audiovisual Equipment. (United States)

    Post, Richard


    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  15. Rapid, generalized adaptation to asynchronous audiovisual speech. (United States)

    Van der Burg, Erik; Goodbourn, Patrick T


    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. PMID:25716790
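The trial-contingent analysis the abstract describes (conditioning each synchrony judgement on the modality order of the preceding trial) can be sketched roughly as below. The trial data and the crude mean-SOA summary are illustrative stand-ins, not the study's actual method or fitting procedure:

```python
# Sketch of a trial-contingent analysis behind rapid recalibration:
# each trial's synchrony judgement is binned by whether the PREVIOUS
# trial was audio-leading or video-leading. All data here are made up.

def split_by_previous_order(trials):
    """trials: list of (soa_ms, response); soa_ms < 0 means audio led."""
    groups = {"prev_audio_led": [], "prev_video_led": []}
    for prev, cur in zip(trials, trials[1:]):
        key = "prev_audio_led" if prev[0] < 0 else "prev_video_led"
        groups[key].append(cur)
    return groups

def mean_soa_judged_sync(responses):
    """Crude proxy for the point of subjective simultaneity:
    the mean SOA of trials judged synchronous."""
    sync = [soa for soa, resp in responses if resp == "sync"]
    return sum(sync) / len(sync) if sync else None

trials = [(-100, "async"), (50, "sync"), (-50, "sync"),
          (100, "async"), (-100, "sync")]
groups = split_by_previous_order(trials)
print(mean_soa_judged_sync(groups["prev_audio_led"]))  # -> 50.0
print(mean_soa_judged_sync(groups["prev_video_led"]))  # -> -75.0
```

A systematic difference between the two conditional estimates is the signature of recalibration driven by a single preceding audiovisual event.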

  16. Audiovisual biofeedback improves motion prediction accuracy


    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho


    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction.

  17. Active Methodology in the Audiovisual Communication Degree (United States)

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa


    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  18. Preparing Negotiations in Services: EC Audiovisuals in the Doha Round


    Messerlin, Patrick; Cocq, Emmanuel


    Under the 1994 Uruguay Round Agreement, only nineteen WTO members have made commitments in audiovisual services in their GATS schedule (see table 7). As illustrated in table 7, these commitments are generally of limited scope and magnitude. Among the large audiovisual producers, only the United States has taken substantial commitments at the various stages of audiovisual production, distribution, and transmission. Although more limited, the commitments by India (the world’s largest film prod...

  19. Audiovisual Generation of Social Attitudes from Neutral Stimuli


    Barbulescu, Adela; Bailly, Gérard; Ronfard, Rémi; Pouget, Maël


    The focus of this study is the generation of expressive audiovisual speech from neutral utterances for 3D virtual actors. Taking into account the segmental and suprasegmental aspects of audiovisual speech, we propose and compare several computational frameworks for the generation of expressive speech and face animation. We notably evaluate a standard frame-based conversion approach with two other methods that postulate the existence of global prosodic audiovisual patterns that are characteris...

  20. Audio-visual affective expression recognition (United States)

    Huang, Thomas S.; Zeng, Zhihong


    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  1. Dynamic Perceptual Changes in Audiovisual Simultaneity


    Kanai, Ryota; Sheth, Bhavin R.; Verstraten, Frans A J; Shimojo, Shinsuke


    Background: The timing at which sensory input reaches the level of conscious perception is an intriguing question still awaiting an answer. It is often assumed that both visual and auditory percepts have a modality specific processing delay and their difference determines perceptual temporal offset. Methodology/Principal Findings: Here, we show that the perception of audiovisual simultaneity can change flexibly and fluctuates over a short period of time while subjects observe a constant ...

  2. Active methodology in the Audiovisual communication degree


    Giménez López, José Luis; Magal Royo, Teresa; García Laborda, Jesús; Dunai Dunai, Larisa


    The paper describes the adaptation methods of the active methodologies of the new European Higher Education Area in the new Audiovisual Communication degree from the perspective of subjects related to the area of interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic curricular development of the subjects, leading to an adjustment in teaching for the professors who currently lecture and who have been evaluated fo...

  3. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Liselotte De Boer


    Full Text Available We examined sensitivity of audiovisual temporal order in adolescents with Autism Spectrum Disorder (ASD using an audiovisual Temporal Order Judgment (TOJ task. In order to assess domain-specific impairments, the stimuli varied in social complexity from simple flash/beeps to videos of a handclap or a speaking face. Compared to typically-developing controls, individuals with ASD were generally less sensitive in judgments of audiovisual temporal order (larger Just Noticeable Differences, JNDs, but there was no specific impairment with social stimuli. This suggests that people with ASD suffer from a more general impairment in audiovisual temporal processing.
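One common way to obtain a Just Noticeable Difference from temporal-order-judgment data of this kind is to take half the SOA range between the 25% and 75% points of the psychometric function. A rough sketch under that assumption, using linear interpolation and invented response proportions rather than the study's data or fitting method:

```python
# Hedged sketch: JND from TOJ data as half the SOA range between the
# 25% and 75% points of the psychometric function (linear interpolation).
# The proportions below are illustrative, not from the study.

def interp_soa(soas, props, target):
    """SOA at which the proportion of 'visual first' responses crosses target."""
    points = list(zip(soas, props))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= target <= p1:
            return s0 + (target - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("target not bracketed by the data")

def jnd(soas, props):
    """Half the interquartile width of the psychometric function."""
    return (interp_soa(soas, props, 0.75) - interp_soa(soas, props, 0.25)) / 2.0

soas = [-200, -100, 0, 100, 200]          # ms; negative = audio first
props = [0.05, 0.20, 0.50, 0.80, 0.95]    # proportion "visual first"
print(round(jnd(soas, props), 1))  # -> 83.3
```

A shallower psychometric slope widens this interval, which is exactly what a larger JND (lower sensitivity) in the ASD group reflects.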

  4. Audiovisual bimodal mutual compensation of Chinese

    Institute of Scientific and Technical Information of China (English)


    The perception of human languages is inherently a multi-modal process, in which audio information can be compensated by visual information to improve recognition performance. This phenomenon has been studied in English, German, Spanish, and other languages, but it has not yet been reported in Chinese. In our experiment, 14 syllables (/ba, bi, bian, biao, bin, de, di, dian, duo, dong, gai, gan, gen, gu/), extracted from the Chinese audiovisual bimodal speech database CAVSR-1.0, were pronounced by 10 subjects. Audio-only, audiovisual, and visual-only stimuli were recognized by 20 observers. The audio-only and audiovisual stimuli were each presented under 5 conditions: no noise, SNR 0 dB, -8 dB, -12 dB, and -16 dB. From the experimental results, the following conclusions are reached for Chinese speech. Human beings can recognize visual-only stimuli rather well. The place of articulation determines visual distinctiveness. In noisy environments, audio information can be remarkably compensated by visual information, and recognition performance is thereby greatly improved.

  5. Exogenous spatial attention decreases audiovisual integration. (United States)

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W


    Multisensory integration (MSI) and spatial attention are both mechanisms through which the processing of sensory information can be facilitated. Studies on the interaction between spatial attention and MSI have mainly focused on the interaction between endogenous spatial attention and MSI. Most of these studies have shown that endogenously attending a multisensory target enhances MSI. It is currently unclear, however, whether and how exogenous spatial attention and MSI interact. In the current study, we investigated the interaction between these two important bottom-up processes in two experiments. In Experiment 1 the target location was task-relevant, and in Experiment 2 the target location was task-irrelevant. Valid or invalid exogenous auditory cues were presented before the onset of unimodal auditory, unimodal visual, and audiovisual targets. We observed reliable cueing effects and multisensory response enhancement in both experiments. To examine whether audiovisual integration was influenced by exogenous spatial attention, the amount of race model violation was compared between exogenously attended and unattended targets. In both Experiment 1 and Experiment 2, a decrease in MSI was observed when audiovisual targets were exogenously attended, compared to when they were not. The interaction between exogenous attention and MSI was less pronounced in Experiment 2. Therefore, our results indicate that exogenous attention diminishes MSI when spatial orienting is relevant. The results are discussed in terms of models of multisensory integration and attention. PMID:25341648
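The race-model comparison this abstract relies on can be sketched as follows: the empirical CDF of audiovisual reaction times is tested against Miller's bound, the sum of the two unimodal CDFs, and any positive exceedance counts as violation (i.e., integration beyond statistical facilitation). This is a simplified illustration with made-up reaction times, not the authors' analysis pipeline:

```python
# Sketch of a race-model (Miller) inequality test: violation occurs
# wherever F_AV(t) > F_A(t) + F_V(t). All reaction times are made up.

def ecdf(rts, t):
    """Empirical CDF: proportion of reaction times at or below t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, times):
    """Total positive exceedance of F_AV over min(1, F_A + F_V)."""
    total = 0.0
    for t in times:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        total += max(0.0, ecdf(rt_av, t) - bound)
    return total

rt_a = [320, 350, 380, 410]    # auditory-only trials (ms)
rt_v = [340, 370, 400, 430]    # visual-only trials (ms)
rt_av = [260, 280, 300, 330]   # audiovisual trials, faster than either
times = range(250, 450, 10)
print(race_model_violation(rt_av, rt_a, rt_v, times) > 0)  # -> True
```

Comparing this violation measure between cued and uncued targets is the kind of contrast the study uses to show that exogenous attention reduces MSI.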

  6. Evaluating audio-visual and computer programs for classroom use. (United States)

    Van Ort, S


    Appropriate faculty decisions regarding adoption of audiovisual and computer programs are critical to the classroom use of these learning materials. The author describes the decision-making process in one college of nursing and the adaptation of an evaluation tool for use by faculty in reviewing audiovisual and computer programs. PMID:2467237

  7. Children Using Audiovisual Media for Communication: A New Language? (United States)

    Weiss, Michael


    Gives an overview of the Schools Council Communication and Social Skills Project at Brighton Polytechnic in which children ages 9-17 have developed and used audiovisual media such as films, tape-slides, or television programs in the classroom. The effects of audiovisual language on education are briefly discussed. (JJD)

  8. Use of Audiovisual Texts in University Education Process (United States)

    Aleksandrov, Evgeniy P.


    Audio-visual learning technologies offer great opportunities for developing students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses features of the use of audiovisual media texts in a series of social sciences and humanities subjects in the university curriculum.

  9. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention. (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei


    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  10. Trigger Videos on the Web: Impact of Audiovisual Design (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.


    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  11. Teleconferences and Audiovisual Materials in Earth Science Education (United States)

    Cortina, L. M.


    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacán 04510, Mexico. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. In some cases, however, resources may go largely unused for reasons such as logistic problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. A course by teleconference requires learning and effort from students and teachers without physical contact, but they have access to multimedia to support the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids in understanding the natural phenomena integral to the Earth sciences. Cooperation with international partnerships providing access to new materials, experiences, and field practices will greatly add to our efforts. We will present specific examples of the experiences we have had at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geoscience education.

  12. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  13. Audiovisual bimodal mutual compensation of Chinese

    Institute of Scientific and Technical Information of China (English)

    ZHOU; Zhi


    [1] Richard, P., Schumeyer, Kenneth E. B., The effect of visual information on word initial consonant perception of dysarthric speech, in Proc. ICSLP'96, October 3-6, 1996, Philadelphia, Pennsylvania, USA. [2] Goff, B. L., Marigny, T. G., Benoit, C., Read my lips...and my jaw! How intelligible are the components of a speaker's face? Eurospeech'95, 4th European Conference on Speech Communication and Technology, Madrid, September 1995. [3] McGurk, H., MacDonald, J., Hearing lips and seeing voices, Nature, 1976, 264: 746. [4] Duran, A. F., McGurk effect in Spanish and German listeners: Influences of visual cues in the perception of Spanish and German conflicting audio-visual stimuli, Eurospeech'95, 4th European Conference on Speech Communication and Technology, Madrid, September 1995. [5] Luettin, J., Visual speech and speaker recognition, Ph.D. thesis, University of Sheffield, 1997. [6] Xu Yanjun, Du Limin, Chinese audiovisual bimodal speech database CAVSR1.0, Chinese Journal of Acoustics, to appear. [7] Zhang Jialu, Speech corpora and language input/output methods' evaluation, Chinese Applied Acoustics, 1994, 13(3): 5.

  14. Audiovisual integration facilitates monkeys' short-term memory. (United States)

    Bigelow, James; Poremba, Amy


    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans. PMID:27010716

  15. Audiovisual Quality Fusion based on Relative Multimodal Complexity

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Reiter, Ulrich


    In multimodal presentations the perceived audiovisual quality is significantly influenced by the content of both the audio and visual tracks. Based on our earlier subjective quality test for finding the optimal trade-off between audio and video quality, this paper proposes a novel method for relative multimodal complexity analysis to derive the fusion parameter in objective audiovisual quality metrics. Audio and video qualities are first estimated separately using advanced quality models, and then they are combined into the overall audiovisual quality using a linear fusion. Based on ... quality metrics, compared to the fusion parameters obtained from the subjective quality tests using other known optimization methods...
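The linear fusion step the abstract describes can be sketched as a weighted combination of unimodal quality estimates. Here the weight merely stands in for the complexity-derived fusion parameter; the MOS values, weights, and content examples are illustrative, not from the paper:

```python
# Hedged sketch of linear audiovisual quality fusion: unimodal quality
# estimates are combined with a content-dependent weight. All numbers
# are illustrative stand-ins.

def fuse_av_quality(q_audio, q_video, alpha):
    """alpha in [0, 1]: weight on audio quality; 1 - alpha on video."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * q_audio + (1.0 - alpha) * q_video

# E.g., good audio (MOS 4.0) with poor video (MOS 2.0) on a 1-5 scale:
print(round(fuse_av_quality(4.0, 2.0, alpha=0.7), 2))  # audio-dominant content
print(round(fuse_av_quality(4.0, 2.0, alpha=0.3), 2))  # video-dominant content
```

The point of deriving `alpha` from relative audio/video complexity, rather than fixing it, is that the same unimodal qualities should yield different overall scores for, say, a concert clip versus a sports clip.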

  16. How to Make Junior English Lessons Lively and Interesting by Different Teaching Aids

    Institute of Scientific and Technical Information of China (English)



    This paper is mainly concerned with the use of teaching aids in junior English from three aspects: visual aids, audio-visual means, and body language and tone. Used in this way, teaching aids can give students comparatively realistic circumstances, attract their attention, enhance their interest in English, and strengthen their sense of competition.

  17. Automatic audiovisual integration in speech perception. (United States)

    Gentilucci, Maurizio; Cattaneo, Luigi


    Two experiments aimed to determine whether features of both the visual and acoustical inputs are always merged into the perceived representation of speech and whether this audiovisual integration is based on either cross-modal binding functions or on imitation. In a McGurk paradigm, observers were required to repeat aloud a string of phonemes uttered by an actor (acoustical presentation of phonemic string) whose mouth, in contrast, mimicked pronunciation of a different string (visual presentation). In a control experiment participants read the same printed strings of letters. This condition aimed to analyze the pattern of voice and the lip kinematics controlling for imitation. In the control experiment and in the congruent audiovisual presentation, i.e. when the articulation mouth gestures were congruent with the emission of the string of phones, the voice spectrum and the lip kinematics varied according to the pronounced strings of phonemes. In the McGurk paradigm the participants were unaware of the incongruence between visual and acoustical stimuli. The acoustical analysis of the participants' spoken responses showed three distinct patterns: the fusion of the two stimuli (the McGurk effect), repetition of the acoustically presented string of phonemes, and, less frequently, of the string of phonemes corresponding to the mouth gestures mimicked by the actor. However, the analysis of the latter two responses showed that the formant 2 of the participants' voice spectra always differed from the value recorded in the congruent audiovisual presentation. It approached the value of the formant 2 of the string of phonemes presented in the other modality, which was apparently ignored. The lip kinematics of the participants repeating the string of phonemes acoustically presented were influenced by the observation of the lip movements mimicked by the actor, but only when pronouncing a labial consonant. The data are discussed in favor of the hypothesis that features of both

  18. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias


    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model, it also employs more free parameters in complex experiments when the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have...

  19. Nuevos actores sociales en el escenario audiovisual

    Directory of Open Access Journals (Sweden)

    Gloria Rosique Cedillo


    Full Text Available Following the entry of private broadcasters into the Spanish audiovisual sector, the landscape of entertainment content on generalist television underwent far-reaching changes that were reflected in the programming schedules. This situation has opened a debate around the dilemma of having or not having a television service, whether public or private, that fails to meet expected social standards. This has motivated civil groups, organized into viewers' associations, to undertake various actions aimed at influencing the direction entertainment content has been taking, with a strong commitment to educating viewers about audiovisual media and to citizen participation in television matters.

  20. Audiovisual Enhancement of Classroom Teaching: A Primer for Law Professors. (United States)

    Johnson, Vincent Robert


    A discussion of audiovisual instruction in the law school classroom looks at the strengths, weaknesses, equipment and facilities needs and hints for classroom use of overhead projection, audiotapes and videotapes, and slides. (MSE)

  1. Prediction and constraint in audiovisual speech perception. (United States)

    Peelle, Jonathan E; Sommers, Mitchell S


    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  2. CAVA (human Communication: an Audio-Visual Archive)


    Mahon, M. S.


    In order to investigate human communication and interaction, researchers need hours of audio-visual data, sometimes recorded over periods of months or years. The process of collecting, cataloguing and transcribing such valuable data is time-consuming and expensive. Once it is collected and ready to use, it makes sense to get the maximum value from it by reusing it and sharing it among the research community. But unlike highly-controlled experimental data, natural audio-visual data tends t...

  3. Investigating the Use of Audiovisual Elicitation on the Creative Enterprise


    Flatt, Nicholas


    Elicitation methods have been explored extensively in social science research, and in business contexts, to uncover unarticulated informant knowledge. This qualitative study investigates the use of an audiovisual elicitation interviewing technique developed by a UK-based creative multimedia production social enterprise, Fifth Planet Productions CIC. The method uses audiovisual stimuli to elicit participant responses in the interview setting. This study, conducted in t...

  4. Audiovisual Association Learning in the Absence of Primary Visual Cortex


    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice


    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  5. Learning bimodal structure in audio-visual data


    Monaci, Gianluca; Vandergheynst, Pierre; Sommer, Friederich T.


    A novel model is presented to learn bimodally informative structures from audio-visual signals. The signal is represented as a sparse sum of audio-visual kernels. Each kernel is a bimodal function consisting of synchronous snippets of an audio waveform and a spatio-temporal visual basis function. To represent an audio-visual signal, the kernels can be positioned independently and arbitrarily in space and time. The proposed algorithm uses unsupervised learning to form dicti...

  6. Entorno de creación de contenido audiovisual




    This final-year project (PFC) makes a complete audiovisual content creation environment available to any interested party. Using the studio set built at the ETSIT, one can work with a chroma key, use editing software, acquire basic audiovisual skills, and even broadcast one's own programme via streaming

  7. Timing in audiovisual speech perception: A mini review and new psychophysical data. (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory


    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309
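
    The masking-and-classification procedure described above can be illustrated with a toy reverse-correlation simulation. The trial counts, frame indices, and simulated-observer response model below are invented for illustration and are not taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_trials, n_frames = 2000, 30
    critical = [12, 13, 14]  # hypothetical frames carrying the visual cue

    # Each trial: a random binary mask determines which frames are visible.
    masks = rng.integers(0, 2, size=(n_trials, n_frames))

    # Simulated observer: reports "apa" more often when the critical frames are hidden
    # (mirrors the ~5% vs ~35% identification rates described in the record).
    visible = masks[:, critical].sum(axis=1)
    p_apa = 0.05 + 0.30 * (1 - visible / len(critical))
    resp_apa = rng.random(n_trials) < p_apa

    # Reverse correlation: mean mask on "apa" trials minus mean mask on other trials.
    # Frames whose *hiding* drives "apa" responses come out with negative weights.
    weights = masks[resp_apa].mean(axis=0) - masks[~resp_apa].mean(axis=0)
    print(np.argsort(weights)[:3])  # the most negative (most influential) frames
    ```

    Correlating random mask states with trial-by-trial responses in this way recovers which frames carry perceptually relevant information, which is the core of the classification-map idea.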

  8. A real-time detector system for precise timing of audiovisual stimuli. (United States)

    Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna


    The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information of stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real-time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the used audio or video stimulus presentation tools or signal acquisition system. The sensor solution consists of two independent sensors; one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab. PMID:23365952
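
    The adjustable dead time described above can be sketched in a few lines. This is a hypothetical reimplementation of the idea in Python, not the authors' microcontroller firmware; the threshold, dead time, and signal values are invented for illustration:

    ```python
    def detect_onsets(samples, threshold, dead_time, fs):
        """Return sample indices of detected stimulus onsets.

        A marker is produced when the absolute amplitude crosses `threshold`;
        further crossings within `dead_time` seconds are ignored, so a
        multipart stimulus (e.g. a burst train) yields a single marker.
        """
        dead_samples = int(dead_time * fs)
        onsets, blocked_until = [], 0
        for i, s in enumerate(samples):
            if i >= blocked_until and abs(s) >= threshold:
                onsets.append(i)
                blocked_until = i + dead_samples  # suppress re-triggering
        return onsets

    # Two bursts 30 ms apart are merged by a 50 ms dead time,
    # while a burst 200 ms later is reported separately.
    fs = 1000  # 1 kHz sampling rate, for readability
    sig = [0.0] * 400
    for start in (10, 40, 240):  # burst onsets in samples
        for j in range(5):
            sig[start + j] = 1.0
    print(detect_onsets(sig, 0.5, 0.05, fs))  # -> [10, 240]
    ```

    Suppressing re-triggering during the dead time is what reduces false detections for complex natural sounds while still marking each stimulus exactly once.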

  9. A representação audiovisual das mulheres migradas / The audiovisual representation of migrant women

    Directory of Open Access Journals (Sweden)

    Luciana Pontes


    In this paper I analyze the representations of migrant women in the audiovisual holdings of some of the organizations that work with gender and immigration in Barcelona. In these audiovisuals I found a recurring association of migrant women with poverty, criminality, ignorance, passivity, lack of documentation, gender violence, compulsory and numerous motherhood, prostitution, etc. I therefore tried to understand how these representations take shape, studying the narrative, stylistic, visual and verbal elements through which these images and discourses about migrant women are articulated.

  10. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige


    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module, which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed to address the two application scenarios. The packetization scheme addressed is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, when the network is already set up, the aud...

  11. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa


    This article questions how different sorts of audio-visual mappings may be perceived. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed, perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships? We call this a fungible audio-visual mapping; the present investigation seeks to glean its constitution and aspect. We report a study which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants were shown several audio-visual mapping prototypes and posed quantitative and qualitative questions. These questions pertain to their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

  12. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals. (United States)


    ... A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b) Publications... published with grant support and, if feasible, on any publication reporting the results of, or describing, a... support placed on any audiovisual which is produced with grant support and which has a direct...

  13. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our...... mismatch negativity response (MMN). MMN has the property of being evoked when an acoustic stimulus deviates from a learned pattern of stimuli. In three experimental studies, this effect is utilized to track when a coinciding visual signal alters auditory speech perception. Visual speech emanates from the...... auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less...

  14. Effect of context, rebinding and noise, on audiovisual speech fusion


    Attigodu, Ganesh; Berthommier, Frédéric; Nahorna, Olha; Schwartz, Jean-Luc


    In a previous set of experiments we showed that audio-visual fusion during the McGurk effect may be modulated by context. A short context (2 to 4 syllables) composed of incoherent auditory and visual material significantly decreases the McGurk effect. We interpreted this as showing the existence of an audiovisual "binding" stage controlling the fusion process, and we also showed the existence of a "rebinding" process when an incoherent material is followed by a short coherent material. In thi...

  15. Evolution of audiovisual production in five Spanish Cybermedia

    Directory of Open Access Journals (Sweden)

    Javier Mayoral Sánchez


    This paper quantifies and analyzes the evolution of audiovisual production in five Spanish digital newspapers. To this end, the videos published on the five homepages were studied for four weeks (fourteen days in November 2011 and another fourteen in March 2014). This diachronic perspective has revealed a remarkable contradiction in online media regarding audiovisual products. Even with very considerable differences between them, the five media analyzed publish more and more videos, and they do so in the most valued areas of their homepages. However, one does not perceive in them a willingness to engage firmly

  16. El papel del traductor en la industria audiovisual


    Lachat Leal, Christina


    Until now, most work on audiovisual translation has studied either the technical and technological aspects of subtitling, dubbing and voice-over, or the peculiarities of the different audiovisual products (feature film, documentary, promotional video, video game, etc.) that affect the translation process. However, the spectacular development and expansion of the leisure industry, of which the audiovisual industry is part, ...

  17. Deep Multimodal Learning for Audio-Visual Speech Recognition


    Mroueh, Youssef; Marcheret, Etienne; Goel, Vaibhava


    In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of 41% under clean conditions on the IBM large-vocabulary audio-visual studio datase...

  18. Developing a typology of humor in audiovisual media

    NARCIS (Netherlands)

    Buijzen, M.A.; Valkenburg, P.M.


    The main aim of this study was to develop and investigate a typology of humor in audiovisual media. We identified 41 humor techniques, drawing on Berger's (1976, 1993) typology of humor in narratives, audience research on humor preferences, and an inductive analysis of humorous commercials. We analy

  19. The Role of Audiovisual Mass Media News in Language Learning (United States)

    Bahrani, Taher; Sim, Tam Shu


    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  20. Media Literacy and Audiovisual Languages: A Case Study from Belgium (United States)

    Van Bauwel, Sofie


    This article examines the use of media in the construction of a "new" language for children. We studied how children acquire and use media literacy skills through their engagement in an educational art project. This media literacy project is rooted in the realm of audiovisual media, within which children's sound and visual worlds are the focus of…

  1. Audiovisual document. ''Introduction to the visit of Ganil''

    International Nuclear Information System (INIS)

    During 1985, an audiovisual document (a slide show, or "diaporama") about Ganil was produced. This slide show, which uses four slide projectors and lasts 12 minutes, is permanently installed in a conference room at Ganil. Two video sequences derived from this slide show have been produced: one on experimentation in nuclear physics and one on the operating principle of the cyclotrons

  2. Audio-Visual Equipment Depreciation. RDU-75-07. (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  3. Crossmodal and incremental perception of audiovisual cues to emotional speech

    NARCIS (Netherlands)

    Barkhuysen, Pashiera; Krahmer, E.J.; Swerts, M.G.J.


    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? B

  4. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech (United States)

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc


    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  5. Audiovisual Ethnography of Philippine Music: A Process-oriented Approach

    Directory of Open Access Journals (Sweden)

    Terada Yoshitaka


    Audiovisual documentation has been an important part of ethnomusicological endeavors, but until recently it was treated primarily as a tool of preservation and/or documentation that supplements written ethnography, albeit with a few notable exceptions. The proliferation of inexpensive video equipment has encouraged an unprecedented number of scholars and students in ethnomusicology to become involved in filmmaking, but its potential as a methodology has not been fully explored. As a small step toward redefining the application of audiovisual media, Dr. Usopay Cadar, my teacher in Philippine music, and I produced two films: one on Maranao kolintang music and the other on Maranao culture in general, based on the audiovisual footage we collected in 2008. This short essay describes how screenings of these films were organized in March 2013 for diverse audiences in the Philippines, and what types of reactions and interactions transpired during the screenings. These screenings were organized both to obtain feedback about the content of the films from the caretakers and stakeholders of the documented tradition and to create a venue for interactions and collaborations to discuss the potential of audiovisual ethnography. Drawing on the analysis of the current project, I propose to regard film not as a fixed product but as a living and organic site that is open to commentaries and critiques, where changes can be made throughout the process. In this perspective, ‘filmmaking’ refers to the entire process of research, filming, editing and post-production activities.

  6. Audiovisual y semiótica: el videoclip como texto


    Rodríguez-López, Jennifer


    In this article, the music video is read as an audiovisual text open to analysis through the study of the predominant language functions and of certain codes, following concepts drawn from semiotic theory...

  7. Kijkwijzer: The Dutch rating system for audiovisual productions

    NARCIS (Netherlands)

    Valkenburg, P.M.; Beentjes, J.W.J.; Nikken, P.; Tan, E.S.H.


    Kijkwijzer is the name of the new Dutch rating system in use since early 2001 to provide information about the possible harmful effects of movies, home videos and television programs on young people. The rating system is meant to provide audiovisual productions with both age-based and content-based

  8. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias


    signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...

  9. The Audiovisual Temporal Binding Window Narrows in Early Childhood (United States)

    Lewkowicz, David J.; Flom, Ross


    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  10. Context-specific effects of musical expertise on audiovisual integration

    Directory of Open Access Journals (Sweden)

    Laura Bishop


    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronisation. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinettists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronised. The range of asynchronies most often endorsed as synchronised was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well.

  11. Context-specific effects of musical expertise on audiovisual integration. (United States)

    Bishop, Laura; Goebl, Werner


    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819
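
    The "range of asynchronies most often endorsed as synchronized" can be illustrated with a toy estimator. The judgment proportions, offsets, and criterion below are invented for illustration and are a crude stand-in for the fitted synchrony windows used in the psychophysics literature:

    ```python
    # Hypothetical proportions of "synchronized" responses at each
    # audio-visual offset (ms; negative = video leads the audio).
    judgments = {
        -360: 0.05, -240: 0.20, -120: 0.70, -60: 0.90,
        0: 0.95, 60: 0.85, 120: 0.55, 240: 0.15, 360: 0.05,
    }

    def synchrony_window(data, criterion=0.5):
        """Width (ms) of the offset range endorsed as synchronized
        at or above `criterion`."""
        hits = [soa for soa, p in data.items() if p >= criterion]
        return max(hits) - min(hits)

    print(synchrony_window(judgments))  # -> 240
    ```

    A narrower window by this measure corresponds to higher sensitivity to asynchrony, which is how the expertise effect in the record is quantified.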

  12. Medical student's perceptions of different teaching aids from a tertiary care teaching institution

    Directory of Open Access Journals (Sweden)

    Inderjit Singh Bagga


    Conclusions: Students' preferences and feedback need to be taken into consideration when using multimedia modalities to present lectures. Feasible student suggestions must be implemented to further improve the use of audio-visual aids during didactic lectures and make the teaching-learning environment better. [Int J Res Med Sci 2016;4(7):2788-2791]

  13. Sistema audiovisual para reconocimiento de comandos / Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos


    We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. The audio signal was represented using Mel cepstral coefficients and their first and second order time derivatives. To characterize the video signal, a set of high-level visual features was tracked automatically throughout each sequence. Automatic initialization of the algorithm was performed using color transformations and active contour models based on Gradient Vector Flow ("GVF snakes") on the lip region, whereas tracking used similarity measures across neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of an automatic speech recognition system using audio information only (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then present the design of systems using video features only (VSR) and combined audio and video features (AVSR). Finally, the three systems are compared on an in-house database in Spanish and French, and the influence of acoustic noise is examined, showing that the AVSR system is more robust than ASR and VSR.
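
    The audio front end described — Mel cepstral coefficients plus first and second order time derivatives — commonly computes the derivatives with a regression ("delta") formula. A sketch of that step only (a generic delta computation on a toy feature matrix, not the authors' exact implementation):

    ```python
    import numpy as np

    def deltas(features, width=2):
        """First-order time derivatives of a (frames x coeffs) feature matrix,
        using the standard regression formula
            d_t = sum_k k * (c_{t+k} - c_{t-k}) / (2 * sum_k k^2).
        Edges are handled by repeating the first/last frame.
        """
        n = features.shape[0]
        padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
        denom = 2 * sum(k * k for k in range(1, width + 1))
        out = np.zeros_like(features, dtype=float)
        for k in range(1, width + 1):
            out += k * (padded[width + k : width + k + n] - padded[width - k : width - k + n])
        return out / denom

    mfcc = np.arange(12, dtype=float).reshape(6, 2)  # toy "MFCC" matrix (6 frames, 2 coeffs)
    d1 = deltas(mfcc)                 # first derivative
    d2 = deltas(d1)                   # second derivative ("delta-delta")
    feat = np.hstack([mfcc, d1, d2])  # combined feature vector per frame
    print(feat.shape)  # -> (6, 6)
    ```

    Stacking the static coefficients with their deltas and delta-deltas, as above, is the usual way such a feature vector is assembled before being passed to the HMMs.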

  14. A influência do ambiente audiovisual na legendação de filmes

    Directory of Open Access Journals (Sweden)

    Antonia Célia Ribeiro Nobre


    This article shows how subtitling is influenced by many factors within the audiovisual environment, due primarily to the audiovisual's communicative function and semiotic composition; the mechanics of subtitling; and the views and behavior of the people involved in the audiovisual's production, translation and distribution, the critics, and the public.

  15. Aid Effectiveness

    DEFF Research Database (Denmark)

    Arndt, Channing; Jones, Edward Samuel; Tarp, Finn

    Controversy over the aggregate impact of foreign aid has focused on reduced-form estimates of the aid-growth link. The causal chain through which aid affects developmental outcomes, including growth, has received much less attention. We address this gap by: (i) specifying a structural model of the...... main relationships; (ii) estimating the impact of aid on a range of final and intermediate outcomes; and (iii) quantifying a simplified representation of the full structural form, where aid impacts growth through key intermediate outcomes. A coherent picture emerges: aid stimulates growth and reduces...

  16. Academic e-learning experience in the enhancement of open access audiovisual and media education


    Pacholak, Anna; Sidor, Dorota


    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance the open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM). It is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal of a comprehensive and open programme for the audiovisual (me...

  17. Media and journalism as forms of knowledge: a methodology for critical reading of journalistic audiovisual narratives


    Beatriz Becker


    The work presents a methodology for the analysis of journalistic audiovisual narratives, an instrument for the critical reading of news content and formats that utilize audiovisual language and multimedia resources on TV and on the web. It is assumed that comprehension of the dynamic combinations of the elements which constitute the audiovisual text contributes to a better perception of the meanings of the news, and that uses of digital tools in a critical and creative way can collabora...

  18. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music


    Lee, Hweeling; Noppeney, Uta


    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. ...

  19. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.;


    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual...


    Directory of Open Access Journals (Sweden)

    Suâmi Abdalla-Santos


    Full Text Available This article briefly analyzes the benefits that can be gained from applying audiovisual technology to the teaching of Geography. Many devices with audio and video recording functions are now available at affordable prices. In this context it becomes possible for the geographer, researcher or teacher, to use these tools to capture material that can be used in the teaching of the discipline.

  1. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain


    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality, with special application to audiovisual English-Arabic translation. Ample, thoroughly analysed examples are provided to aid decision-making in audiovisual English-Arabic translation. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT). The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when a text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence is achieved when all cohesive devices are well accounted for pragmatically. Combined with a sound psycholinguistic component, this provides a text with optimal communicative value. Non-text is devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product. Keywords: Arabic audiovisual translation, coherence, cohesion, textuality

  2. Cross-modal cueing in audiovisual spatial attention


    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias


    Visual processing is most effective at the location of our attentional focus. It has long been known that various spatial cues can direct visuospatial attention and influence the detection of auditory targets. Cross-modal cueing, however, seems to depend on the type of the visual cue: facilitation effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signal...

  3. Compliments in Audiovisual Translation – issues in character identity


    Isabel Fernandes Silva; Jane Rodrigues Duarte


    Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies etc). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study will focus on both these areas from a multimodal and pragmatic perspective, emphasizing the links between the...

  4. Neural correlates of quality during perception of audiovisual stimuli

    CERN Document Server

    Arndt, Sebastian


    This book presents a new approach to examining the perceived quality of audiovisual sequences. It uses electroencephalography to understand how user quality judgments are formed within a test participant, and what the physiologically based implications of exposure to lower-quality media might be. The book redefines experimental paradigms for using EEG in the area of quality assessment so that they better suit the requirements of standard subjective quality testing; experimental protocols and stimuli are adjusted accordingly.

  5. Audiovisual correspondence between musical timbre and visual shapes.


    Mohammad Adeli; Stéphane Molotchnikoff


    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, features such as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, most studies have used simple stimuli, e.g. simple tones. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against col...

  6. Audiovisual correspondence between musical timbre and visual shapes


    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane


    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, features such as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, most studies have used simple stimuli, e.g., simple tones. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against co...

  7. The Role of Visual Spatial Attention in Audiovisual Speech Perception


    Andersen, Tobias; Tiippana, K.; Laarni, J.; Kojo, I.; Sams, M.


    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a cen...

  8. Model-based assessment of factors influencing categorical audiovisual perception


    Andersen, Tobias S.


    Information processing in the sensory modalities is not segregated but interacts strongly. The exact nature of this interaction is not known and might differ for different multisensory phenomena. Here, we investigate two cases of categorical audiovisual perception: speech perception and the perception of rapid flashes and beeps. It is known that multisensory interactions in general depend on physical factors, such as information reliability and modality appropriateness, but it is not known...

  9. Good practices in audiovisual diversity. Hype or hope?


    García Leiva, María Trinidad; Segovia, Ana I.


    Research conducted on the audiovisual industry within the context of the Convention on the Protection and Promotion of the Diversity of Cultural Expressions (UNESCO, 2005) lends weight to the idea that exclusively applying market logic to the field of culture poses a threat to its diversity. It is therefore necessary to identify and foster practices to implement from the public sphere. The question, then, is how to define such practices. The Convention uses the term 'best practice' within a s...

  10. Phase Synchronization in Human EEG During Audio-Visual Stimulation

    Czech Academy of Sciences Publication Activity Database

    Teplan, M.; Šušmáková, K.; Paluš, Milan; Vejmelka, Martin


    Vol. 28 (2009), pp. 80-84. ISSN 1536-8378. Other grant: bilateral project between the Slovak Academy of Sciences and the Czech Academy of Sciences (CZ-SK), "Modern methods for evaluation of electrophysiological signals". Source of funding: other public sources. Keywords: synchronization * EEG * wavelet * audio-visual stimulation. Subject RIV: FH - Neurology. Impact factor: 0.729, year: 2009

  11. Audiovisual distraction reduces pain perception during aural microsuction


    Choudhury, N.; Amer, I; Daniels, M; Wareing, MJ


    Introduction Aural microsuction is a common ear, nose and throat procedure used in the outpatient setting. Some patients, however, find it difficult to tolerate owing to discomfort, pain or noise. This study evaluated the effect of audiovisual distraction on patients’ pain perception and overall satisfaction. Methods A prospective study was conducted for patients attending our aural care clinic requiring aural toileting of bilateral mastoid cavities over a three-month period. All microsuction...

  12. Audiovisual Services in Korea : Market Development and Policies


    Song, Yeongkwan


    This paper reviews economic development and the regulatory environment of audiovisual services in the Republic of Korea (hereafter, Korea). The paper specifically examines motion pictures and broadcasting, and discusses what drives or hinders the sector’s trade potential. Korean motion pictures have benefited greatly from the elimination of government censorship, substantial investment capital, especially from the 1990s, and frequent invitations from prestigious international movie festi...

  13. The development of the perception of audiovisual simultaneity. (United States)

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne


    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date. PMID:26897264
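The "window of audiovisual simultaneity" measured in the record above is typically summarized by the point of subjective simultaneity (PSS) and the window's width, estimated from the proportion of "simultaneous" responses at each SOA. A rough, moment-based sketch of that summary is shown below; this is an illustrative assumption, as published studies (including this one) usually fit a Gaussian psychometric function instead.

```python
import numpy as np

def simultaneity_window(soas_ms, p_simultaneous):
    """Moment-based estimate of the point of subjective simultaneity (PSS)
    and the width (SD) of the simultaneity window, from the proportion of
    'simultaneous' responses observed at each SOA."""
    soas = np.asarray(soas_ms, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    pss = np.sum(p * soas) / np.sum(p)                       # weighted mean SOA
    width = np.sqrt(np.sum(p * (soas - pss) ** 2) / np.sum(p))  # weighted SD
    return pss, width

# Synthetic observer: Gaussian window centered at +40 ms (visual-leading),
# SD 150 ms, probed at SOAs from -500 to +500 ms in 100 ms steps
soas = np.arange(-500, 501, 100)
p = np.exp(-(soas - 40.0) ** 2 / (2 * 150.0 ** 2))
pss, width = simultaneity_window(soas, p)
print(round(pss), round(width))
```

Under this sketch, a developmental narrowing of the window (as reported for 5- to 9-year-olds) would show up as a decreasing `width` with age at a roughly constant, visual-leading `pss`.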

  14. Audiovisual integration of speech falters under high attention demands. (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador


    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands. PMID:15886102

  15. Temporal structure and complexity affect audio-visual correspondence detection

    Directory of Open Access Journals (Sweden)

    Rachel N Denison


    Full Text Available Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration.
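The idea that shared temporal structure (rather than exact coincidence) can reveal audiovisual relatedness can be illustrated with a small sketch: represent each stream as a binned train of events and find the lag at which the two trains are most correlated. The 10 ms bins and ±200 ms search range are illustrative assumptions echoing the lags reported above, not the authors' analysis.

```python
import numpy as np

def best_lag(audio_events: np.ndarray, visual_events: np.ndarray,
             bin_ms: int = 10, max_lag_ms: int = 200) -> int:
    """Lag (ms, positive = visual trails audio) that maximizes the
    correlation of two binary event trains, searched within +/- max_lag_ms."""
    max_lag = max_lag_ms // bin_ms
    lags = list(range(-max_lag, max_lag + 1))
    n = len(audio_events)
    def corr_at(lag):
        if lag >= 0:
            return float(np.dot(audio_events[:n - lag], visual_events[lag:]))
        return float(np.dot(audio_events[-lag:], visual_events[:n + lag]))
    scores = [corr_at(lag) for lag in lags]
    return lags[int(np.argmax(scores))] * bin_ms

# A stochastic event stream and a copy delayed by 5 bins (= 50 ms):
# shared temporal structure identifies the correspondence despite the lag.
rng = np.random.default_rng(0)
audio = (rng.random(500) < 0.2).astype(float)
visual = np.roll(audio, 5)
print(best_lag(audio, visual))  # 50
```

An irregular (stochastic) train like this carries more pattern information than a perfectly rhythmic one, whose autocorrelation peaks at every period and makes the matching lag ambiguous, consistent with the sensitivity advantage the record reports.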

  16. Musical expertise induces audiovisual integration of abstract congruency rules. (United States)

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo


    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is not generated due to incongruency of the unisensory physical characteristics of the stimulation but from the violation of an abstract congruency rule. The chosen rule-"the higher the pitch of the tone, the higher the position of the circle"-was comparable to musical reading. In parallel, plasticity effects due to long-term musical training on this response were investigated by comparing musicians to non-musicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses. PMID:23238733

  17. AIDS (image) (United States)

    AIDS (acquired immune deficiency syndrome) is caused by HIV (human immunodeficiency virus), and is a syndrome that ... life-threatening illnesses. There is no cure for AIDS, but treatment with antiviral medication can suppress symptoms. ...

  18. Hearing Aids (United States)

    ... more in both quiet and noisy situations. Hearing aids help people who have hearing loss from damage ... your doctor. There are different kinds of hearing aids. They differ by size, their placement on or ...

  19. Hearing Aids (United States)

    ... electrical nerve impulses and send them to the auditory nerve, which connects the inner ear to the ... prefer. Cleaning makes a difference in hearing aid comfort. A perfectly comfortable hearing aid can become pretty ...

  20. Foreign aid

    DEFF Research Database (Denmark)

    Tarp, Finn


    Foreign aid has evolved significantly since the Second World War in response to a dramatically changing global political and economic context. This article (a) reviews this process and associated trends in the volume and distribution of foreign aid; (b) reviews the goals, principles and institutions of the aid system; and (c) discusses whether aid has been effective. While much of the original optimism about the impact of foreign aid needed modification, there is solid evidence that aid has indeed helped further growth and poverty reduction

  1. Application and design of audio-visual aids in stomatology teaching of orthodontics to non-stomatology students

    Institute of Scientific and Technical Information of China (English)

    李若萱; 吕亚林; 王晓庚


    Objective To discuss the effects of audio-visual aids in stomatology teaching during a two-credit-hour undergraduate orthodontic course for students majoring in preventive medicine. Methods We selected 85 students from the 2007 and 2008 matriculating classes of the preventive medicine department of Capital Medical University. Using the eight-year orthodontic textbook as our reference, we taught the theory through multimedia in the first class hour, and implemented teaching by situational role-play in the trainee class hour. A follow-up survey was carried out to obtain students' feedback on the combined teaching method. Results Our survey showed that the majority of students understood the goal of the method and believed their interest in learning orthodontics was significantly enhanced; in fact, they became fascinated by orthodontics within the limited time of the study. Conclusions The integration of object teaching with situational teaching is of great assistance in orthodontic training; however, the integration must be carefully prepared to ensure student participation, maximize the benefits of integration and improve the course through direct feedback.

  2. Something for Everyone? An Evaluation of the Use of Audio-Visual Resources in Geographical Learning in the UK. (United States)

    McKendrick, John H.; Bowden, Annabel


    Reports from a survey of geographers that canvassed experiences using audio-visual resources to support teaching. Suggests that geographical learning has embraced audio-visual resources and that they are employed effectively. Concludes that integration of audio-visual resources into mainstream curriculum is essential to ensure effective and…

  3. Audio-Visual Education in Primary Schools: A Curriculum Project in the Netherlands. (United States)

    Ketzer, Jan W.


    A media education curriculum developed in the Netherlands is designed to increase the media literacy of children aged 4-12 years by helping them to acquire information and insights into the meaning of mass media; teaching them to produce and use audiovisual materials as a method of expression; and using audiovisual equipment in the classroom. (LRW)

  4. Propuesta para la creación de una empresa del sector audiovisual


    Muniozguren Gomez, Angela


    The Salón Olimpia project presents a theoretical approach to the creation of a company in the audiovisual sector. The company is devoted to promoting amateur videos and providing information on grants and audiovisual festivals, and maintains a privately managed job board.

  5. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection. (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica


    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration. PMID:25848682

  6. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra


    Full Text Available Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  7. Audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar


    Full Text Available The article analyzes the effectiveness of audio-visual regulation and assesses the arguments for and against the existence of broadcasting authorities at the state level. The debate over the need for such a body in Spain is still active. Most European countries have created competent authorities in this area, such as OFCOM in the United Kingdom and the CSA in France. In Spain, broadcasting regulation is limited to regional bodies, such as the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also examined in this article.

  8. Development of sensitivity to audiovisual temporal asynchrony during midchildhood. (United States)

    Kaganovich, Natalya


    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7- to 8-year-olds, 10- to 11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether nonverbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2-kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), and in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction time (RT) at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10- to 11-year-olds outperforming 7- to 8-year-olds at the 300- to 500-ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function-such as autism, specific language impairment, and dyslexia-may be compared. PMID:26569563

  9. Visual Mislocalization of Moving Objects in an Audiovisual Event.

    Directory of Open Access Journals (Sweden)

    Yousuke Kawachi

    Full Text Available The present study investigated the influence of an auditory tone on the localization of visual objects in the stream/bounce display (SBD). In this display, two identical visual objects move toward each other, overlap, and then return to their original positions. These objects can be perceived as either streaming through or bouncing off each other. In this study, the closest distance between object centers on opposing trajectories and the tone presentation timing (none, 0 ms, ±90 ms, and ±390 ms relative to the instant of the closest distance) were manipulated. Observers were asked to judge whether the two objects overlapped with each other and whether the objects appeared to stream through, bounce off each other, or reverse their direction of motion. A tone presented at or around the instant of the objects' closest distance biased judgments toward "non-overlapping," and observers overestimated the physical distance between objects. A similar bias toward direction-change judgments (bounce and reverse, not stream judgments) was also observed, which was always stronger than the non-overlapping bias. Thus, these two types of judgments were not always identical. Moreover, another experiment showed that it was unlikely that this observed mislocalization could be explained by other previously known mislocalization phenomena (i.e., representational momentum, the Fröhlich effect, and a turn-point shift). These findings indicate a new example of crossmodal mislocalization, which can be obtained without temporal offsets between audiovisual stimuli. The mislocalization effect is also specific to a more complex stimulus configuration of objects on opposing trajectories, with a tone that is presented simultaneously. The present study promotes an understanding of relatively complex audiovisual interactions beyond the simple one-to-one audiovisual stimuli used in previous studies.

  10. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
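    The core idea of Bayesian calibration can be sketched in a few lines. This is an illustration, not the authors' model: a noisy sensory measurement of the audiovisual lag is combined with a Gaussian prior favoring simultaneity (or an adapted lag), and the perceived lag is the precision-weighted compromise. All parameter values below are hypothetical.

```python
def perceived_lag(measured_lag_ms, sensory_sd=60.0, prior_mean=0.0, prior_sd=40.0):
    """MAP estimate of an audiovisual lag under a Gaussian likelihood
    (the noisy measurement) times a Gaussian prior (expected lag).
    Positive lags denote light-first, negative sound-first (convention only)."""
    w_sens = 1.0 / sensory_sd ** 2    # precision of the sensory measurement
    w_prior = 1.0 / prior_sd ** 2     # precision of the simultaneity prior
    return (w_sens * measured_lag_ms + w_prior * prior_mean) / (w_sens + w_prior)

# A 100 ms physical lag is perceived as compressed toward simultaneity:
print(perceived_lag(100.0))
# Shifting the prior mean (e.g., after exposure to sound-first pairs)
# pulls the same physical lag toward the adapted value:
print(perceived_lag(100.0, prior_mean=-50.0))
```

    The contrast with lag adaptation is that here repeated exposure moves the prior mean, so perceived lags shift toward the frequent order rather than away from it.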

  11. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes. (United States)

    Desantis, Andrea; Haggard, Patrick


    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether or not they were simultaneous. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased relative to a fixed action-outcome delay. This suggests that participants learn action-based predictions of audiovisual outcomes, and adapt their temporal perception of outcome events based on such predictions. PMID:27131076

  12. Aula virtual y presencial en aprendizaje de comunicación audiovisual y educación Virtual and Real Classroom in Learning Audiovisual Communication and Education

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla


    The blended teaching-learning model seeks to use information and communication technologies (ICT) to guarantee an education better adjusted to the European Higher Education Area (EHEA). The following research objectives were formulated: to find out how teacher-training students rate the WebCT virtual classroom as a support for face-to-face teaching, and to identify the advantages of the students' use of WebCT and ICT in the case study «Valores y contravalores transmitidos por series televisivas visionadas por niños y adolescentes» (values and counter-values transmitted by television series watched by children and adolescents). The research was carried out with a sample of 205 students of the Universidad de La Rioja enrolled in the course «Tecnologías aplicadas a la Educación». Qualitative and quantitative content analysis was used for the objective, systematic and quantitative description of the manifest content of the documents. The results show that the communication, content and assessment tools are rated favourably by the students. The conclusion is that WebCT and ICT support the methodological innovation of the EHEA based on student-centred learning. The students demonstrate their audiovisual competence both in the analysis of values and in expression through audiovisual documents in multimedia formats, and they bring an innovative and creative sense to the educational use of television series.

  13. Search for Structure in Audiovisual Recordings of Lectures and Conferences

    Czech Academy of Sciences Publication Activity Database

    Kopp, M.; Pulc, P.; Holeňa, Martin

    Aachen & Charleston: Technical University & CreateSpace Independent Publishing Platform, 2015 - (Yaghob, J.), s. 150-158. (CEUR Workshop Proceedings. V-1422). ISBN 978-1-5151-2065-0. ISSN 1613-0073. [ITAT 2015. Conference on Theory and Practice of Information Technologies /15./. Slovenský Raj (SK), 17.09.2015-21.09.2015] R&D Projects: GA ČR GA13-17187S Grant ostatní: GA MŠk(CZ) LM2010005 Institutional support: RVO:67985807 Keywords : multimedial data * audiovisual recordings * self-organizing map * hierarchical clustering * cluster size Subject RIV: IN - Informatics, Computer Science
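    The record's keywords (self-organizing map, hierarchical clustering, cluster size) suggest grouping per-segment feature vectors of a recording to expose its structure. As a hedged illustration only, not the paper's code, the hierarchical-clustering step can be sketched with a minimal single-linkage agglomerative procedure over synthetic 2-D features; the feature values and the target cluster count are made-up placeholders.

```python
import math

def agglomerate(points, k):
    """Single-linkage agglomerative clustering: repeatedly merge the two
    closest clusters until only k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(math.dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))   # j > i, so index i stays valid
    return clusters

# Two obvious groups of "segments" (say, slide shots vs. speaker shots):
segments = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9)]
groups = agglomerate(segments, 2)
print(sorted(len(g) for g in groups))   # [2, 3]
```

    In a real pipeline the points would be descriptors produced upstream (e.g., by a self-organizing map), and the dendrogram rather than a fixed k would be inspected for cluster sizes.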

  14. Faculty attitudes toward the use of audiovisuals in continuing education. (United States)

    Schindler, M K; Port, J


    A study was undertaken in planning for a project involving library support for formal continuing education programs. A questionnaire survey assessed faculty attitudes toward continuing education activities, self-instructional AV programs for continuing education, and self-instructional AV programs for undergraduate medical education. Actual use of AV programs in both undergraduate and postgraduate classroom teaching was also investigated. The results indicated generally positive attitudes regarding a high level of classroom use of AV programs, but little assignment of audiovisuals for self-instruction. PMID:6162840

  15. Proyecto educativo : herramientas de educación audiovisual


    Boza Osuna, Luis


    The aim of this work is to examine the need to inform and train families, students and teachers in audiovisual education. Since 1999, Telespectadores Asociados de Cataluña (TAC) has made a decisive commitment to engaging with the world of education, in response to the evident need of educational institutions to confront the negative effects of television on students. School administrators and teaching professionals are perfectly aware of the competition...

  16. Sustainable Archiving and Storage Management of Audiovisual Digital Assets


    Addis, Matthew; Norlund, Charlotte; Beales, Richard; Lowe, Richard; Middleton, Lee; Zlatev, Zlatko


    With the advent of end-to-end tapeless production and distribution, the whole concept of what it means to archive audiovisual content is being challenged. The traditional role of the archive as a repository for material after broadcast is changing because of digital file-based technologies and high speed networking. Rather than being at the end of the production chain, the archive is becoming an integral part of the production process and as a result is being absorbed into wider digital stora...

  17. A comparative study of approaches to audiovisual translation


    Aldea, Silvia


    For those who are not new to the world of Japanese animation, known mainly as anime, the debate of "dub vs. sub" is by no means anything out of the ordinary, but rather a very heated argument amongst fans. The study will focus on the differences in the US English version between the two approaches of translating audio-visual media, namely subtitling (official subtitles and fanmade subtitles) and dubbing, in a qualitative context. More precisely, which of the two approaches can store the most ...

  18. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos


    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on modeling the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post-decision scheme. The Mel-Frequency Cepstral Coefficients and the vertical mouth opening are the chosen audio and visual features respectively, both augmented with their first-order derivatives. The proposed system is assessed using far-field recordings from four different speakers and under various levels of...
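    The post-decision fusion idea can be sketched briefly. This is a hedged illustration, not the paper's trained system: each unimodal detector is assumed to emit a log-likelihood ratio log P(features | speech) / P(features | silence) from its HMMs, and a weighted sum of the two decides voice activity. The weights, scores, and threshold below are hypothetical.

```python
def fuse_vad(llr_audio, llr_visual, w_audio=0.7, w_visual=0.3, threshold=0.0):
    """Late fusion of two unimodal voice-activity scores.
    Returns True when the weighted evidence favors speech."""
    score = w_audio * llr_audio + w_visual * llr_visual
    return score > threshold

# Audio strongly favors speech while the mouth-opening cue mildly
# disagrees; the fused decision still flags speech:
print(fuse_vad(2.0, -0.5))   # True
```

    Weighting the audio stream more heavily reflects the usual case in which acoustic features are the more reliable cue, with the visual stream mainly helping under far-field noise.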

  19. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    ...observers might only have been motivated to look at the face when informed, and audio and video thus seemed related. Since Tuomainen et al. did not control for this, the influence of motivation is unknown. The current experiment repeated the original methods while controlling eye movements. 4 observers... observers did look near the mouth. We conclude that eye movements did not influence the results of Tuomainen et al. and that their results can thus be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  20. Sistemas de Registro Audiovisual del Patrimonio Urbano (SRAPU)


    Conles, Liliana Eva


    The SRAPU system is a film-based survey method designed to build an interactive database of the urban landscape. On this basis, it pursues the formulation of criteria ordered in terms of flexibility and economic efficiency, efficient data handling, and democratization of information. SRAPU is conceived as an audiovisual record of tangible and intangible heritage, both in its singularity and as a historical and natural ensemble. Its conception involves the pro...

  1. Contribució de continguts audiovisuals per IP


    Lopez Perez, Aron


    This project on the "Distribution of Audiovisual Content over IP" aims to replace the current video infrastructures (content distribution via satellite) with the efficient use of IP-based networks, providing an integral solution for the secure transport of professional multimedia content. The project was carried out by a group of television broadcasters from Catalonia, Valencia and the Balearic Islands. To achieve these objectives,...

  2. Hearing Aids (United States)

    ... prefer the open-fit hearing aid because their perception of their voice does not sound “plugged up.” ... My voice sounds too loud. The “plugged-up” sensation that causes a hearing aid user’s voice to ...

  3. Brand Aid

    DEFF Research Database (Denmark)

    Richey, Lisa Ann; Ponte, Stefano

    A critical account of the rise of celebrity-driven “compassionate consumption” Cofounded by the rock star Bono in 2006, Product RED exemplifies a new trend in celebrity-driven international aid and development, one explicitly linked to commerce, not philanthropy. Brand Aid offers a deeply informed...

  4. Claves para reconocer los niveles de lectura crítica audiovisual en el niño Keys to Recognizing the Levels of Critical Audiovisual Reading in Children

    Directory of Open Access Journals (Sweden)

    Jacqueline Sánchez Carrero


    Several studies with children and adolescents have shown that the greater their knowledge of how audiovisual messages are produced and transmitted, the greater their capacity to form their own judgment about what appears on screen. This article brings together three media education experiences carried out in Venezuela, Colombia and Spain from a critical reception approach. It provides the indicators used to determine the levels of critical audiovisual reading in children aged 8 to 12, built from intervention processes through media literacy workshops. The groups were instructed in the audiovisual universe, learning how audiovisual content is created and how to analyze, deconstruct and recreate it. First, the article reviews the evolving concept of media education; it then describes the experiences shared by the three countries, and examines the indicators that allow the level of critical reading to be measured; finally, it reflects on the need for media education in the age of multiple literacies. Studies revealing the keys to recognizing how critical a child is when viewing content on different digital media are rare. Yet the issue is fundamental, since it makes it possible to know what level of comprehension children have, and what level they acquire after a media education training process.

  5. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva


    Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies, etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study focuses on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach evidences the polysemiotic nature of the translation process. In audiovisual translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, the focus being on subtitles which did not successfully convey the compliment expressed in the source text, and on the reasons for this, namely differences in register, culture-specific items and repetitions. These differences lead to a different portrayal/identity/perception of the main character in the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  6. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis


    Television can be a meeting place for sound and image, a device that enables the image-rhythm, extending Gilles Deleuze's theory of the image, originally proposed for cinema. The image-rhythm would simultaneously combine characteristics of the movement-image and the time-image, embodied in the construction of postmodern images in audiovisual products that are not necessarily narrative, yet popular. Films, video games, music videos and vignettes in which the music drives the images allow a more sensorial reading. The audiovisual as image-music thus opens onto a new form of perception beyond the traditional textual one, the product of the interaction between rhythm, text and device. The time of moving images in the audiovisual is inevitably and primarily tied to sound, adding non-narrative possibilities that are realized, most often, through the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Sem Destino (1969), Assassinos por Natureza (1994) and Corra Lola Corra (1998).

  7. Anatomical Instruction: Curriculum Development and the Efficient Use of Audio-Visual Aids (United States)

    Sistek, Vladimir; Harrison, John


    Argues that the proper use of the tools of instructional technology, and familiarity with the principles governing their use, are prerequisites for professionalism in teaching. The development of a pilot series of conceptual multi-media modules in anatomy is described. (VT)

  8. The Development of Multi-Level Audio-Visual Teaching Aids for Earth Science. (United States)

    Pitt, William D.

    The project consisted of making a multi-level teaching film titled "Rocks and Minerals of the Ouachita Mountains," which runs for 25 minutes and is in color. The film was designed to be interesting to earth science students from junior high to college, and consists of dialogue combined with motion pictures of charts, sequential diagrams, outcrops,…

  9. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical, synchronous, or variously asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis of the audiovisual asynchronous response revealed three clusters of activation, including the ACC and the SFG, and two bilaterally located activations in the IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region covering the STG, the insula and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results strongly indicate that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  10. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)



    Seeing articulatory movements influences the perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other task, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on the syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences the early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  11. The audiovisual editing narrative as a basis for the interactive documentary film: new studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó


    This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual editing language for the interactive documentary film", which defends the thesis that there are interactive features in the audio and video editing of film, even as an agent that produces interactivity. The search for interactive audiovisual formats is present in international research, although under a technological gaze. The contribution of this paper is to propose possible formats for interactive audiovisual production in film, video, television, computers and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  12. Hearing Aid (United States)

    ... and Food and Drug Administration Staff FDA permits marketing of new laser-based hearing aid with potential ...

  13. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously and the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  14. Artimate: an articulatory animation framework for audiovisual speech synthesis

    CERN Document Server

    Steiner, Ingmar


    We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data is applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated in an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application to illustrate its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.

  15. Gaze-direction-based MEG averaging during audiovisual speech perception

    Directory of Open Access Journals (Sweden)

    Satu Lamminmäki


    To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and the subject's gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged into two categories according to gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m') was a fifth smaller to incongruent than congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.

  16. Increasing observer objectivity with audio-visual technology: the Sphygmocorder. (United States)

    Atkins; O'Brien; Wesseling; Guelen


    The most fallible component of blood pressure measurement is the human observer. The traditional technique of measuring blood pressure does not allow the result of the measurement to be checked by independent observers, thereby leaving the method open to bias. In the Sphygmocorder, several components used to measure blood pressure have been combined innovatively with audio-visual recording technology to produce a system consisting of a mercury sphygmomanometer, an occluding cuff, an automatic inflation-deflation source, a stethoscope, a microphone capable of detecting Korotkoff sounds, a camcorder and a display screen. The accuracy of the Sphygmocorder against the trained human observer has been confirmed previously using the protocol of the British Hypertension Society and in this article the updated system incorporating a number of innovations is described. PMID:10234128

  17. A Joint Audio-Visual Approach to Audio Localization

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll


    Localization of audio sources is an important research problem, e.g., to facilitate noise reduction. In recent years, the problem has been tackled using distributed microphone arrays (DMA). A common approach is to apply direction-of-arrival (DOA) estimation on each array (denoted as nodes), and then map the DOA estimates to a location. In practice, however, the individual nodes contain few microphones, limiting the DOA estimation accuracy and, thereby, also the localization performance. We investigate a new approach, where range estimates are also obtained and utilized from each node, e.g., using time-of-flight cameras. Moreover, we propose an optimal method for weighting such DOA and range information for audio localization. Our experiments on both synthetic and real data show that there is a clear, potential advantage of using the joint audiovisual localization framework.
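    The combination of DOA and range cues can be illustrated with a minimal sketch. This is not the paper's estimator: here each node at a known position reports a DOA angle and a range, each pair is converted to a Cartesian position estimate, and the per-node estimates are fused by inverse-variance weighting. Node positions, measurements, and variances below are made up for illustration.

```python
import math

def node_estimate(node_xy, doa_rad, range_m):
    """Convert one node's DOA + range into a 2-D source position estimate."""
    return (node_xy[0] + range_m * math.cos(doa_rad),
            node_xy[1] + range_m * math.sin(doa_rad))

def fuse(estimates, variances):
    """Inverse-variance weighted average of per-node position estimates."""
    w = [1.0 / v for v in variances]
    wsum = sum(w)
    x = sum(wi * e[0] for wi, e in zip(w, estimates)) / wsum
    y = sum(wi * e[1] for wi, e in zip(w, estimates)) / wsum
    return x, y

# Two nodes observing a source at (1, 1):
e1 = node_estimate((0.0, 0.0), math.atan2(1.0, 1.0), math.hypot(1.0, 1.0))
e2 = node_estimate((2.0, 0.0), math.atan2(1.0, -1.0), math.hypot(1.0, 1.0))
x, y = fuse([e1, e2], [0.1, 0.2])
print(round(x, 3), round(y, 3))   # 1.0 1.0
```

    Weighting by inverse variance means a node with a reliable time-of-flight range contributes more than a node whose DOA is smeared by having few microphones, which is the intuition behind weighting the two cue types at all.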

  18. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira


    This work shows the development of audiovisual webjournalism on the Brazilian Internet. This paper, based on the analysis of UOL News on UOL TV (a pioneering format in commercial web television) and of UERJ Online TV (the first online university television in Brazil), investigates the changes in the gathering, production and dissemination processes of audiovisual news when it starts to be transmitted through the web. Reflections of authors such as Herreros (2003), Manovich (2001) and Gosciola (2003) are used to discuss the construction of audiovisual narrative on the web. To comprehend the current changes in today's webjournalism, we draw on the concepts developed by Fidler (1997), Bolter and Grusin (1998), Machado (2000), Mattos (2002) and Palacios (2003). We may conclude that the organization of narrative elements in cyberspace makes for the efficiency of journalistic messages, while establishing the basis of a particular language for audiovisual news on the Internet.

  19. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or may apply to all stimuli in general. To investigate this... audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase... noise were measured for naïve and informed participants. We found that the threshold for detecting speech in audiovisual stimuli was lower than for auditory-only stimuli. But there was no detection advantage for observers informed of the speech nature of the auditory signal. This may indicate that...

  20. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez


    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass media omnipresence. Neither the education system, nor civic associations, nor the media themselves have promoted the audiovisual skills needed to make people critically competent viewers. This study aims to provide an updated conceptualization of the "audiovisual skill" in this digital environment and to transpose it onto a specific interventional environment, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments, and devise training programmes for the various sectors involved.

  1. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari


Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing perception. In co...

  2. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah eJessen


Full Text Available Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others’ emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory one. This visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  3. Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony. (United States)

    Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J


    Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. PMID:25505102

  4. Audiovisual emotional processing and neurocognitive functioning in patients with depression

    Directory of Open Access Journals (Sweden)

    Sophie eDoose-Grünefeld


Full Text Available Alterations in the processing of emotional stimuli (e.g. facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have only assessed impairments in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than just one sensory channel. The current study therefore aimed at investigating multi-modal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. 41 patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli as they rated faces significantly more fearful when presented with happy than with neutral sounds. Such an effect was absent in controls. Findings in emotional processing in patients did not correlate with BDI-scores. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments going along with depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multi-modal setting. The mechanisms of impairments therefore might be far more complex than previously thought. Our findings furthermore contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning.

  5. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality. (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor


    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. PMID:27003546

  6. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream


    Sadlier, David A.


Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...



    BUDACIA, Andreea


The global market is a challenge which requires a certain attitude from its economic agents, a proactive behavior meant to ensure advantageous positions in certain domains of activity. In the audiovisual domain, major enterprises have a precise and competitive strategy. Marketing strategies represent “the path chosen by the enterprise in order to achieve certain goals”, which are of two types: market strategies and mix strategies. Market strategies typical of audiovisual services have the f...

  8. Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks


    F. Eyben; Petridis, S.; Schuller, Björn; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja


We investigate classification of non-linguistic vocalisations with a novel audiovisual approach and Long Short-Term Memory (LSTM) Recurrent Neural Networks as highly successful dynamic sequence classifiers. This year's Paralinguistic Challenge's Audiovisual Interest Corpus of human-to-human natural conversation serves as the evaluation database. For video-based analysis we compare shape-based and appearance-based features. These are fused at an early stage with typical audio descriptors. The result...
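The early (feature-level) fusion mentioned in this abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the feature names, dimensions, and values are hypothetical, and the real system would feed the fused vectors to an LSTM classifier rather than print them.

```python
# Hypothetical sketch of early (feature-level) audiovisual fusion:
# synchronized per-frame audio descriptors and visual features are
# concatenated into one feature vector per frame, which a sequence
# classifier (an LSTM in the paper) would then consume.

def early_fusion(audio_frames, video_frames):
    """Concatenate time-aligned audio and visual feature vectors frame by frame."""
    assert len(audio_frames) == len(video_frames), "streams must be time-aligned"
    return [a + v for a, v in zip(audio_frames, video_frames)]

# Toy example: 3 frames, 2 audio descriptors (e.g. energy, pitch) and
# 2 visual descriptors (e.g. mouth width, mouth height) per frame.
audio = [[0.1, 120.0], [0.3, 180.0], [0.2, 150.0]]
video = [[0.5, 0.2], [0.8, 0.6], [0.6, 0.4]]

fused = early_fusion(audio, video)
print(fused[0])  # -> [0.1, 120.0, 0.5, 0.2]
```

Early fusion contrasts with late (decision-level) fusion, where separate audio and video classifiers are trained and only their outputs are combined.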

  9. El documentalista audiovisual als serveis de documentació de les televisions locals


    Martínez, Virginia


The irruption of digital systems into television has opened a new front for audiovisual documentalists working in a television documentation centre. To the traditional tasks, such as cataloguing and storage, new tasks common to digital content management have been added, such as metadata generation and management, or control of the information flow between servers and archives. This poster is based on the figure of the audiovisual documentalist in local television, and it shows the environment wher...

  10. Psychophysical Responses Comparison in Spatial Visual, Audiovisual, and Auditory BCI-Spelling Paradigms


    Chang, Moonjeong; Nishikawa, Nozomu; Cai, Zhenyu; Makino, Shoji; Rutkowski, Tomasz M.


The paper presents a pilot study conducted with spatial visual, audiovisual and auditory brain-computer-interface (BCI) based speller paradigms. The psychophysical experiments were conducted with healthy subjects in order to evaluate task difficulty and possible variability in response accuracy. We also present preliminary EEG results in offline BCI mode. The obtained results validate the thesis that the spatial auditory-only paradigm performs as well as the traditional visual and audiovisual speller B...

  11. Teacher's uses of educational audiovisual at the digital age : case study


    Marty, Frédéric


This research focuses on teachers' uses of educational audiovisual media. It falls within the theoretical framework of the sociology of uses; however, the research object at the heart of this work is at the intersection of audiovisual and educational issues. Therefore, it aims to develop a communicational approach to educational tools and media, as defined by Pierre Moeglin (2005). The first stage of this work tries to highlight the elements that configure the use, regarding the side of acto...

  12. Superior temporal activation in response to dynamic audio-visual emotional cues


    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.


    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audiovisual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual cues. Emotion perception research has focused on static facial cues; however, dynamic audiovisual (AV) cues mimic real-world social cues more accura...

  13. PRESTOPRIME: Deliverable D3.1: Design and specification of the audiovisual preservation toolkit


    Phillips, Stephen


    This deliverable is a specification and design document for an audiovisual content preservation environment. It contains a review of current digital preservation support for AV files, a survey of File-Corruption Detection methods and technologies, a review of threats to data integrity from use of large-scale data management environments; SLA schemas and QoS parameters for using online storage services in audiovisual preservation, the design of a system for preservation using both migration an...

  14. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas


Our (co-written with Thomas van den Berg) media-rich, open access Scalar e-book on the Audiovisual Essay practice is available online: Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere au...

  15. Types of Hearing Aids (United States)

What are hearing aids? Hearing aids are sound-amplifying devices designed to ...

  16. Children with a history of SLI show reduced sensitivity to audiovisual temporal asynchrony: An ERP Study (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle


    Purpose We examined whether school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method 15 H-SLI children, 15 TD children, and 15 adults judged whether a flashed explosion-shaped figure and a 2 kHz pure tone occurred simultaneously. The stimuli were presented at 0, 100, 200, 300, 400, and 500 ms temporal offsets. This task was combined with EEG recordings. Results H-SLI children were profoundly less sensitive to temporal separations between auditory and visual modalities compared to their TD peers. Those H-SLI children who performed better at simultaneity judgment also had higher language aptitude. TD children were less accurate than adults, revealing a remarkably prolonged developmental course of the audiovisual temporal discrimination. Analysis of early ERP components suggested that poor sensory encoding was not a key factor in H-SLI children’s reduced sensitivity to audiovisual asynchrony. Conclusions Audiovisual temporal discrimination is impaired in H-SLI children and is still immature during mid-childhood in TD children. The present findings highlight the need for further evaluation of the role of atypical audiovisual processing in the development of SLI. PMID:24686922

  17. The development and use of audio-visual technology in terms of economy and socio-economic trends in society


    Mikšík, Jan


The aim of this work is to describe the history of audio-visual technology and to analyse the influence of digitalization. The text describes the history of cinematography and television, as well as the introduction of audio-visual technology into people's homes. It contains information on the present situation, on new trends, and on the influence of the Internet on audio-visual production. There is a comparison of past and present technologies. The new technologies are accessible even to amateur creators wh...

  18. Brand Aid

    DEFF Research Database (Denmark)

    Richey, Lisa Ann; Ponte, Stefano


    activists, scholars and venture capitalists, discusses the pros and cons of changing the world by ‘voting with your dollars’. Lisa Ann Richey and Stefano Ponte (Professor at Roskilde University and Senior Researcher at DIIS respectively), authors of Brand Aid: Shopping Well to Save the World, highlight how...

  19. Negotiating Aid

    DEFF Research Database (Denmark)

    Whitfield, Lindsay; Fraser, Alastair


    This article presents a new analytical approach to the study of aid negotiations. Building on existing approaches but trying to overcome their limitations, it argues that factors outside of individual negotiations (or the `game' in game-theoretic approaches) significantly affect the preferences of...

  20. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study. (United States)

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning


The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. PMID:27392755

  1. Can personality traits predict pathological responses to audiovisual stimulation? (United States)

    Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi


pathophysiological reaction to the audiovisual stimulations. As for photosensitive epilepsy, it was reported in only 5-10% of all patients. Therefore, in 90% or more of the patients who showed a morbid response, the cause could not be determined. The results of this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such tendencies, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand. PMID:14572681

  2. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating

  3. Tactile Aids

    Directory of Open Access Journals (Sweden)

    Mohtaramossadat Homayuni


Full Text Available Tactile aids, which translate sound waves into vibrations that can be felt by the skin, have been used for decades by people with severe/profound hearing loss to enhance speech/language development and improve speechreading. The development of tactile aids dates from the efforts of Goults and his co-workers in the 1920s. However, the power supply was so bulky and heavy, and so difficult for children in particular to carry, that the device could not be taken outside the laboratory, and its application was restricted to experimental use. Nowadays great advances have been made in producing this instrument, and its numerous models are available in markets around the world.

  4. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad


Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared while considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
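The kind of Bayes-inference fusion this record builds on can be illustrated with a minimal sketch, assuming a discrete set of candidate directions and conditionally independent audio and visual cues; the likelihood values below are invented for illustration and are not from the paper.

```python
# Minimal sketch of Bayesian audiovisual fusion over candidate directions.
# Assumes the audio and visual cues are conditionally independent given the
# true direction, so their likelihoods multiply under Bayes' rule.

def fuse(prior, audio_lik, visual_lik):
    """Posterior over candidate directions given both cues."""
    unnorm = [p * a * v for p, a, v in zip(prior, audio_lik, visual_lik)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three candidate directions: left, centre, right (uniform prior).
prior = [1 / 3, 1 / 3, 1 / 3]
audio_lik = [0.2, 0.5, 0.3]   # sound localisation is noisy
visual_lik = [0.1, 0.8, 0.1]  # face detection strongly favours centre

posterior = fuse(prior, audio_lik, visual_lik)
best = max(range(3), key=lambda i: posterior[i])
print(best)  # -> 1 (centre)
```

Fusing the two cues sharpens the estimate beyond either cue alone: here the posterior for the centre direction rises to about 0.89, higher than either the audio likelihood (0.5) or the visual likelihood (0.8) suggests in isolation.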

  5. Audio-visual assistance in co-creating transition knowledge (United States)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.


Earth system and climate impact research results point to the tremendous ecologic, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes of our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition is rather reliant on pioneers that define new role models, on change agents that mainstream the concept of sufficiency and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge is to be co-created by social and natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodology, particular language and knowledge level of those involved are not the same, we develop new entertaining formats on the basis of a "complexity on demand" approach. They present scientific information in an integrated and entertaining way with different levels of detail that provide entry points to users with different requirements. Two examples shall illustrate the advantages and restrictions of the approach.

  6. Audiovisual education and breastfeeding practices: A preliminary report

    Directory of Open Access Journals (Sweden)

    V. C. Nikodem


Full Text Available A randomized control trial was conducted at the Coronation Hospital to evaluate the effect of audiovisual breastfeeding education. Within 72 hours after delivery, 340 women who agreed to participate were allocated randomly to view one of two video programmes, one of which dealt with breastfeeding. To determine the effect of the programme on infant feeding, a structured questionnaire was administered to 108 women who attended the six-week postnatal check-up. Alternative methods, such as telephonic interviews (24) and home visits (30), were used to obtain information from subjects who did not attend the postnatal clinic. Comparisons of mother-infant relationships and postpartum depression showed no significant differences. Similar proportions of each group reported that their baby was easy to manage, and that they felt close to and could communicate well with it. While the overall number of mothers who breast-fed was not significantly different between the two groups, there was a trend towards fewer mothers in the study group supplementing with bottle feeding. It was concluded that the effectiveness of audiovisual education alone is limited, and attention should be directed towards personal follow-up and support for breastfeeding mothers.

  7. Audio-visual perception system for a humanoid robotic head. (United States)

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro


One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared while considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593

  8. The influence of task on gaze during audiovisual speech perception (United States)

    Buchan, Julie; Paré, Martin; Yurick, Micheal; Munhall, Kevin


    In natural conversation, visual and auditory information about speech not only provide linguistic information but also provide information about the identity and the emotional state of the speaker. Thus, listeners must process a wide range of information in parallel to understand the full meaning in a message. In this series of studies, we examined how different types of visual information conveyed by a speaker's face are processed by measuring the gaze patterns exhibited by subjects watching audiovisual recordings of spoken sentences. In three experiments, subjects were asked to judge the emotion and the identity of the speaker, and to report the words that they heard under different auditory conditions. As in previous studies, eye and mouth regions dominated the distribution of the gaze fixations. It was hypothesized that the eyes would attract more fixations for more social judgment tasks, rather than tasks which rely more on verbal comprehension. Our results support this hypothesis. In addition, the location of gaze on the face did not influence the accuracy of the perception of speech in noise.

  9. Linguagem Audiovisual no Ensino de Química

    Directory of Open Access Journals (Sweden)

    T. A. Almeida


Full Text Available The sweeping technology-related changes in the educational landscape raise the question of how teaching is delivered, and of how to hold students' attention in this digital era. In this context, audiovisual media have become a major contributor to science teaching. The basic idea of the project was to use audiovisual language (videos) as an alternative methodology for the teaching and learning of Chemistry. Through observation, the creation and presentation of a video, and questionnaires (administered before and after the video was shown), it was possible to visualize and explore what students know about atomic theory and its history, and to correct and modify the view that students hold of the atom. The questionnaire responses show that students have a fragmented view of atomic models; however, with the video and the discussions it was possible to remedy some of the conceptual errors identified.

  10. Neurofunctional underpinnings of audiovisual emotion processing in teens with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Krissy A.R. Doyle-Thomas


Full Text Available Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n=18) and typically developing controls (n=16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviours, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that during audiovisual emotion matching individuals with ASD may rely on a parietofrontal network to compensate for atypical brain activity elsewhere.

  11. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing. (United States)

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T


    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. PMID:26402725

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg


    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  13. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection. (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing


    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  14. Neurological Complications of AIDS (United States)

    A fact sheet from the National Institute of Neurological Disorders and Stroke (NINDS) on the neurological complications of AIDS (acquired immune deficiency syndrome), covering how the condition affects the nervous system and where to get more information.

  15. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás


    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  16. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus Alm


    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. By contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood the recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  17. Omnidirectional Audio-Visual Talker Localization Based on Dynamic Fusion of Audio-Visual Features Using Validity and Reliability Criteria (United States)

    Denda, Yuki; Nishiura, Takanobu; Yamashita, Yoichi

    This paper proposes a robust omnidirectional audio-visual (AV) talker localizer for AV applications. The proposed localizer consists of two innovations. One is a set of robust omnidirectional audio and visual features: direction-of-arrival (DOA) estimation using an equilateral triangular microphone array and human position estimation using an omnidirectional video camera extract the AV features. The other is a dynamic fusion of the AV features. The validity criterion, called the audio- or visual-localization counter, validates each audio or visual feature. The reliability criterion, called the speech-arriving evaluator, acts as a dynamic weight to eliminate any prior statistical properties from the fusion procedure. The proposed localizer can achieve both talker localization during speech activity and user localization during non-speech activity under an identical fusion rule. Talker localization experiments were conducted in an actual room to evaluate the effectiveness of the proposed localizer. The results confirmed that the talker localization performance of the proposed AV localizer using the validity and reliability criteria is superior to that of conventional localizers.
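The general idea of validity- and reliability-driven fusion described above can be sketched in a few lines. This is a hedged illustration, not the paper's algorithm: the function name, the scalar angle representation, and the use of a speech probability as the reliability weight are our assumptions.

```python
# Toy sketch of dynamic audio-visual fusion (names and representation are
# ours, not the paper's): a validity check discards an absent modality, and
# a reliability weight (here, a speech probability) blends the two direction
# estimates. Circular wrap-around of angles is ignored for simplicity.

def fuse_direction(audio_deg, visual_deg, speech_prob):
    """Fuse audio and visual direction estimates (degrees).

    speech_prob in [0, 1] acts as the dynamic reliability weight for audio.
    """
    if audio_deg is None or speech_prob == 0.0:  # validity: no usable audio
        return visual_deg
    if visual_deg is None:                       # validity: no usable video
        return audio_deg
    return speech_prob * audio_deg + (1.0 - speech_prob) * visual_deg

print(fuse_direction(90.0, 100.0, 0.8))  # close to 92 (audio-dominated)
print(fuse_direction(80.0, 60.0, 0.0))   # falls back to the visual estimate
```

During non-speech activity the weight drops to zero, so the same rule degrades gracefully to visual-only user localization, which mirrors the compatibility property the abstract claims.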

  18. AIDS Epidemiyolojisi




    AIDS was first defined in the United States in 1981. It spread to nearly all countries of the world with great speed and can infect everybody without any differentiation. The infection results in death, and there is no cure or vaccine for it yet. According to data reported to the World Health Organization up to July 1994, it is estimated that there are about 1 million patients and about 22 million HIV-positive persons in the world. Sixty percent of HIV-positive persons are men and 40% are women. The di...

  19. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle


    Full Text Available Abstract We propose a novel approach to video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that characterize its content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches to audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents, and the results show the efficiency of our proposals.
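The abstract does not spell out how a TRM is built, but the underlying notion of pairwise temporal relations between segmented events can be sketched as follows. Everything here is an assumption for illustration: events as (start, end) intervals and a coarse three-way relation vocabulary in place of the paper's actual definition.

```python
from itertools import product

# Hypothetical sketch of a Temporal Relation Matrix: events of two classes
# are (start, end) segments, and we count each coarse Allen-style relation
# over all cross-class pairs. The real TRM definition may differ.

def relation(a, b):
    """Coarse temporal relation between intervals a=(s, e) and b=(s, e)."""
    if a[1] <= b[0]:
        return "before"
    if b[1] <= a[0]:
        return "after"
    return "overlaps"

def trm(events_a, events_b):
    """Count each relation over all pairs of events from the two classes."""
    counts = {"before": 0, "overlaps": 0, "after": 0}
    for a, b in product(events_a, events_b):
        counts[relation(a, b)] += 1
    return counts

speech = [(0.0, 2.0), (5.0, 6.0)]  # e.g. speech segments (start, end), in s
music = [(1.5, 4.0)]               # e.g. music segments
print(trm(speech, music))  # {'before': 0, 'overlaps': 1, 'after': 1}
```

Comparing such count matrices between two documents would then give one plausible basis for the similarity measure the abstract mentions.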

  20. Indexing method of digital audiovisual medical resources with semantic Web integration. (United States)

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre


    Digitization of audiovisual resources and network capability offer many possibilities that are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Moving Picture Experts Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to provide a rich set of standardized tools enabling efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform that enables encoding and gives access to audiovisual resources in streaming mode. PMID:15694622
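To make the Dublin Core approach concrete, here is an illustrative record for a medical video, with MeSH headings carried in `dc:subject` so that concept-based navigation becomes a simple lookup. The record's values and the `matches` helper are hypothetical, not taken from the paper; the element names are the standard Dublin Core ones.

```python
# Illustrative sketch (not the paper's actual schema): a minimal Dublin Core
# record for a medical video. MeSH headings are carried in dc:subject so a
# UMLS-aware browser could navigate between conceptually related resources.
record = {
    "dc:title": "Laparoscopic cholecystectomy, step by step",   # hypothetical
    "dc:creator": "University Hospital, Dept. of Surgery",      # hypothetical
    "dc:subject": ["Cholecystectomy, Laparoscopic", "Gallstones"],  # MeSH
    "dc:type": "MovingImage",
    "dc:format": "video/mpeg",
    "dc:language": "fr",
    "dc:date": "2004-11-15",
}

def matches(rec, mesh_term):
    """Toy retrieval: does this record carry a given MeSH heading?"""
    return mesh_term in rec["dc:subject"]

print(matches(record, "Gallstones"))  # True
```

A fuller system would expand `dc:subject` terms through UMLS relations rather than require exact string matches, which is what "conceptual navigation" implies.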

  1. Media and journalism as forms of knowledge: a methodology for critical reading of journalistic audiovisual narratives

    Directory of Open Access Journals (Sweden)

    Beatriz Becker


    Full Text Available This work presents a methodology for the analysis of journalistic audiovisual narratives, an instrument for the critical reading of news content and formats that use audiovisual language and multimedia resources on TV and on the web. It is assumed that understanding the dynamic combinations of the elements constituting the audiovisual text contributes to a better perception of the meanings of the news, and that critical and creative uses of digital tools can support the practice of citizenship and the improvement of current journalistic practice, highlighting the importance of training future professionals. The methodology proposed here is supported by theoretical references established through dialogue between research in the journalism field itself and contributions from Media Literacy, Televisual Analysis, Cultural Studies, and Discourse Analysis.

  2. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim


    vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal...

  3. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
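The abstract does not give the formula for the reproducibility index, but a common proxy for within-class pattern similarity is the mean pairwise Pearson correlation between the activity patterns evoked by repetitions of the same category. The sketch below works on toy voxel vectors under that assumption; it is not the study's exact computation.

```python
import math

# Hedged sketch: within-class reproducibility as the mean pairwise Pearson
# correlation between repeated (toy) voxel patterns of one semantic category.
# The study's actual index may be defined differently.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reproducibility(patterns):
    """Mean correlation over all pairs of within-class patterns."""
    pairs = [(i, j) for i in range(len(patterns))
             for j in range(i + 1, len(patterns))]
    return sum(pearson(patterns[i], patterns[j]) for i, j in pairs) / len(pairs)

# Three repetitions of a toy 5-voxel pattern for one category:
reps = [[1.0, 2.0, 3.0, 2.0, 1.0],
        [1.1, 2.2, 2.9, 1.8, 0.9],
        [0.9, 1.9, 3.2, 2.1, 1.2]]
print(round(reproducibility(reps), 2))  # close to 1 for consistent patterns
```

Under this reading, the congruent-audiovisual condition would simply yield patterns whose pairwise correlations, and hence this index, are higher than in the unimodal conditions.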

  4. Automatic Identification used in Audio-Visual indexing and Analysis

    Directory of Open Access Journals (Sweden)

    A. Satish Chowdary


    Full Text Available Locating a video clip in large collections is very important for retrieval applications, especially for digital rights management. We attempt to provide a comprehensive, high-level review of audiovisual features that can be extracted from standard compressed domains, such as MPEG-1 and MPEG-2. This paper presents a graph transformation and matching approach to identify copies of a video that may differ in ordering or length due to content editing. With a novel batch query algorithm to retrieve similar frames, the mapping relationship between the query and database video is first represented by a bipartite graph. The densely matched parts along the long sequence are then extracted, followed by a filter-and-refine search strategy to prune irrelevant subsequences. During the filtering stage, Maximum Size Matching is deployed for each subgraph constructed from the query and a candidate subsequence to obtain a smaller set of candidates. During the refinement stage, Sub-Maximum Similarity Matching is devised to identify the subsequence with the highest aggregate score from all candidates, according to a robust video similarity model that incorporates visual content, temporal order, and frame alignment information. This algorithm is based on dynamic programming that fully uses the temporal dimension to measure the similarity between two video sequences. A normalized chromaticity histogram, which is illumination invariant, is used as a feature. Dynamic programming is applied at the shot level to find the optimal nonlinear mapping between video sequences. Two new normalized distance measures are presented for video sequence matching: one is based on the normalization of the optimal path found by dynamic programming; the other combines both the visual features and the temporal information. The proposed distance measures are suitable for variable-length comparisons.
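The dynamic-programming alignment the abstract describes can be sketched with a classic edit-distance-style table over shot-level features. This is an illustration under stated assumptions, not the paper's algorithm: the L1 histogram distance, the fixed gap penalty, and all names are ours.

```python
# Sketch: align two sequences of (already normalized) chromaticity histograms
# with a DP table. Local cost is half the L1 distance between histograms
# (in [0, 1]); skipping a shot in either sequence costs a fixed gap penalty.

def hist_dist(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2)) / 2.0

def align_cost(seq_a, seq_b, gap=0.5):
    """Minimal cumulative cost of a nonlinear mapping between two sequences."""
    n, m = len(seq_a), len(seq_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * gap
    for j in range(1, m + 1):
        d[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + hist_dist(seq_a[i - 1], seq_b[j - 1]),
                d[i - 1][j] + gap,   # skip a shot in sequence A
                d[i][j - 1] + gap)   # skip a shot in sequence B
    return d[n][m]

a = [[0.5, 0.5], [0.9, 0.1]]
b = [[0.5, 0.5], [0.9, 0.1]]
print(align_cost(a, b))  # 0.0 for identical sequences
```

Normalizing this cost by the alignment path length would give a variable-length distance in the spirit of the measures the abstract proposes.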

  5. Audiovisual correspondence between musical timbre and visual shapes.

    Directory of Open Access Journals (Sweden)

    Mohammad Adeli


    Full Text Available This article investigates the cross-modal correspondences between musical timbre and shapes. Previous studies of audio-visual correspondences have mostly used features such as pitch, loudness, light intensity, visual size, and color characteristics, and most have employed simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes, as well as revisiting some previous findings with more complex stimuli. 119 subjects (31 females and 88 males) participated in the online experiment, including 36 self-reported professional musicians, 47 self-reported amateur musicians, and 36 self-reported non-musicians; 31 subjects also claimed to have synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green, or light-gray rounded shapes; harsh timbres with red, yellow, or dark-gray sharp angular shapes; and timbres combining elements of softness and harshness with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale, or color. The significant correspondence between timbre and shape revealed by the present work allows the design of substitution systems that might help the blind perceive shapes through timbre.

  6. El lenguaje WOW de las 6W. Tratamiento audiovisual de la información televisada


    Santiago Martínez, Pablo


    The current television landscape is characterized by a hybridization of genres and functions driven by the rise of infotainment, an unstoppable macro-genre that combines the classic informative function with the pursuit of emotional engagement. To combine both functions, the use of audiovisual language takes on new relevance. The main objective of this dissertation is to determine and analyze the role of those elements of audiovisual language that influence the quality and compr...

  7. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy


    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun


    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matc...

  8. Els estudis universitaris de comunicació audiovisual als Països Catalans


    Martí, Josep Maria, 1950-; Bonet Bonet Bagant, Montse; Montagut Calvo, Marta; Pérez-Portabella Lopez, Antoni


    The audiovisual sector in the Catalan Countries has experienced significant growth in recent years, along with a remarkable diversification as a consequence of the appearance of new media and new technologies. Over the same period, the number of faculties and training centres offering degrees in audiovisual communication has increased. There are objective difficulties in getting curricula to reflect this ...

  9. Los nuevos modelos de producción audiovisual aplicados a contenidos deportivos


    Marín Montín, Joaquín


    The audiovisual sector is currently immersed in a process of transformation shaped by digital technology. We are witnessing the emergence of new multimedia platforms for the distribution of audiovisual content, most notably the Internet and various mobile devices. As a consequence, audiovisual production has been restructuring itself, adapting to a new kind of market no longer exclusive to television. In addition, digitization ...

  10. Grafismo audiovisual: el lenguaje efímero. Recursos y estrategias.


    Herráiz Zornoza, Beatriz


    This doctoral thesis takes audiovisual graphic design (motion graphics) as its object of study. Motion graphics are an inseparable element of today's audiovisual systems, both in conventional media, film and television, and in many other multi-platform systems, such as large events or peripheral devices, since they form part of the communication and constitute the ideal vehicle for transmitting certain parts of the message. ...

  11. Comunicación audiovisual, una experiencia basada en el blended learning en la universidad

    Directory of Open Access Journals (Sweden)

    Mariona Grané Oró


    Full Text Available In the Audiovisual Communication degree programme at the Universitat de Barcelona, within a blended-learning approach, different media and resources are made available for the work of students and teachers. But access to different media does not in itself guarantee quality in the teaching and learning processes. Knowing the available resources, planning the process, and organizing their use is the key to the training of Audiovisual Communication students.

  12. El audiovisual español como factor coadyuvante de la marca España


    Tribaldos Macia, Enrique


    The aim of this work is to determine the impact of the Spanish audiovisual sector, understood from three distinct perspectives: first, as a universal alphabet at the base of the digital multimedia ecosystem; second, as creative content of artistic value; and third, as a prominent, competitive and prosperous industry with enormous potential in the international market. We will therefore investigate whether the audiovisual sector meets sufficient requirements in the...

  13. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.


    Anna Matamala


    In this article we address the relationship between audiovisual translation and new technologies and describe the characteristics of the audiovisual translator's workstation, especially in the case of dubbing and voice-over. After presenting the tools translators need to carry out their task satisfactorily and pointing out future directions, we present a list of the resources they usually consult to solve translation problems, with particular emphasis on ...

  14. Videojuegos y Televisión. Influencias en el tratamiento audiovisual de contenidos deportivos


    Marín Montín, Joaquín


    Many video games try to playfully emulate real worlds through similar virtual experiences. In sports titles, it is common for some games to reproduce, narratively, an audiovisual model as if it were a live television broadcast. Moreover, video games have experimented with new points of view and other elements that television language later adopted in the audiovisual coverage of many sports. Likewise, ...

  16. La música en la narrativa publicitaria audiovisual. El caso de Coca-Cola


    Sánchez Porras, María José


    This research presents an in-depth study of music in audiovisual advertising and its relationship to other sound and visual aspects of advertising. To carry it out, a specific brand, Coca-Cola, was selected because of its global reach and recognition. A new perspective of musical analysis in audiovisual advertising has been adopted, addressing the different elements of musical structure through the screening of the commercials. ...

  17. Músicas para persuadir: apropiaciones musicales e hibridaciones genéricas en la publicidad audiovisual


    Fraile Prieto, Teresa


    Audiovisual advertising brings together the latest creative currents and social trends circulating in the media. Contemporary spots are therefore a magnificent example of the current dissolution of boundaries between audiovisual formats. Although music is a constant presence in advertising thanks to its emotional effectiveness, we are nowadays witnessing advertising's appropriation of the uses of music in contemporary audiovisual media. This ...


    Directory of Open Access Journals (Sweden)

    Sarah Prasasti


    Full Text Available The method presented here provides the basis for a course in American prose for EFL students. Understanding and appreciating American prose is a difficult task for these students because they come into contact with works that are full of cultural baggage and far removed from their own world. Audiovisual aids are one alternative for sensitizing students to the topic and its cultural background. Instead of providing ready-made audiovisual aids, teachers can involve students in a more task-oriented audiovisual project, encouraging them to create their own aids using colors, pictures, sound, and gestures as a starting point for further discussion. Students can use color, which has become a strong element of fiction, to help them call up a forceful visual representation. Pictures can also stimulate students to build mental images. Sound and silence, which are part of the fabric of literature, may also help increase the emotional impact.

  19. Training methods, tools and aids

    International Nuclear Information System (INIS)

    The training programme, training methods, tools and aids necessary for staffing nuclear power plants depend very much on the overall contractual provisions. The basis for training programmes and methods is the definition of the plant organization and the prequalification of the personnel. Preselection tests are tailored to the different educational levels and precede the training programme, where emphasis is put on practical on-the-job training. Technical basic and introductory courses follow language training and give a broad but basic spectrum of power plant technology. Plant-related theoretical training consists of reactor technology training combined with practical work in laboratories and on a test reactor, and of the nuclear power plant course on design philosophy and operation. Classroom instruction, together with the video tapes and other audiovisual material used during this phase, is described, as are the various special courses for the different specialists. The first step of on-the-job training is a practical observation phase in an operating nuclear power plant, where the participants are assigned to shift work or to the different special departments, depending on their future assignment. Training in manufacturers' workshops, in laboratories or in engineering departments necessitates other training methods. The simulator training for operating personnel, for key personnel and, to some extent, also for maintenance personnel and specialists gives a practical feeling for nuclear power plant behaviour during normal and abnormal conditions. During the commissioning phase of their own nuclear power plant, which is the most important practical training, the participants are integrated into the commissioning staff and are assisted in their process of practical learning on the job by special instructors. Personnel training also includes the training of instructors and assistance in building up special training programmes and material as well

  20. Audiovisual Material as Educational Innovation Strategy to Reduce Anxiety Response in Students of Human Anatomy (United States)

    Casado, Maria Isabel; Castano, Gloria; Arraez-Aybar, Luis Alfonso


    This study presents the design, effect and utility of using audiovisual material containing real images of dissected human cadavers as an innovative educational strategy (IES) in the teaching of Human Anatomy. The goal is to familiarize students with the practice of dissection and to transmit the importance and necessity of this discipline, while…

  1. Strategies for Media Literacy: Audiovisual Skills and the Citizenship in Andalusia (United States)

    Aguaded-Gomez, Ignacio; Perez-Rodriguez, M. Amor


    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…

  2. Audiovisual semantic interference and attention : Evidence from the attentional blink paradigm

    NARCIS (Netherlands)

    Van der Burg, Erik; Brederoo, Sanne G.; Nieuwenstein, Mark R.; Theeuwes, Jan; Olivers, Christian N. L.


    In the present study we investigate the role of attention in audiovisual semantic interference, by using an attentional blink paradigm. Participants were asked to make an unspeeded response to the identity of a visual target letter. This target letter was preceded at various SOAs by a synchronized a

  4. Nutrition Education Materials and Audiovisuals for Grades Preschool through 6. Special Reference Briefs Series. (United States)

    Evans, Shirley King, Comp.

    This bibliography was prepared for educators interested in nutrition education materials, audiovisuals, and resources for classroom use. Items listed cover a range of topics including general nutrition, food preparation, food science, and dietary management. Teaching materials listed include food models, games, kits, videocassettes, and lesson…

  5. Automated Apprenticeship Training (AAT). A Systematized Audio-Visual Approach to Self-Paced Job Training. (United States)

    Pieper, William J.; And Others

    Two Automated Apprenticeship Training (AAT) courses were developed for Air Force Security Police Law Enforcement and Security specialists. The AAT was a systematized audio-visual approach to self-paced job training employing an easily operated teaching device. AAT courses were job specific and based on a behavioral task analysis of the two…

  6. A comparative study on automatic audio-visual fusion for aggression detection using meta-information

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.J.M.; Burghouts, G.J.


    Multimodal fusion is a complex topic. For surveillance applications audio-visual fusion is very promising given the complementary nature of the two streams. However, drawing the correct conclusion from multi-sensor data is not straightforward. In previous work we have analysed a database with audio-

  7. Automatic audio-visual fusion for aggression detection using meta-information

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.


    We propose a new method for audio-visual sensor fusion and apply it to automatic aggression detection. While a variety of definitions of aggression exist, in this paper we see it as any kind of behavior that has a disturbing effect on others. We have collected multi- and unimodal assessments by huma
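    The late-fusion idea behind abstracts 6 and 7 can be illustrated with a minimal sketch. This is an assumption for illustration only, not the authors' actual method: each modality produces an aggression score in [0, 1], and a weight (standing in for the kind of meta-information the papers discuss) controls how much the audio stream contributes to the fused decision.

    ```python
    # Hypothetical late-fusion sketch for audio-visual aggression detection.
    # The function name and the fixed weighting scheme are illustrative
    # assumptions, not taken from the cited papers.

    def fuse_scores(audio_score: float, video_score: float,
                    audio_weight: float = 0.5) -> float:
        """Weighted late fusion of two per-modality scores in [0, 1]."""
        if not 0.0 <= audio_weight <= 1.0:
            raise ValueError("audio_weight must lie in [0, 1]")
        return audio_weight * audio_score + (1.0 - audio_weight) * video_score

    # A clip with high vocal aggression but calm visuals: weighting the
    # audio stream more heavily pushes the fused score toward "aggressive".
    fused = fuse_scores(audio_score=0.9, video_score=0.2, audio_weight=0.7)
    ```

    In practice such weights would be learned or adapted per situation; the point of the sketch is only that the two streams are complementary, so a disagreement between them (loud voice, calm video) is exactly where the fusion rule matters.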

  8. Catalogo de peliculas educativas y otros materiales audiovisuales (Catalogue of Educational Films and other Audiovisual Materials). (United States)

    Encyclopaedia Britannica, Inc., Chicago, IL.

    This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…

  9. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.


    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  10. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions (United States)

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo


    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…
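    Varying the signal-to-noise ratio, as in study 10, amounts to scaling a noise track against the acoustic speech so their power ratio hits a target value in dB. A minimal sketch of that mixing step, with illustrative names and synthetic signals (not the study's materials or procedure):

    ```python
    import numpy as np

    # Hypothetical sketch: mix acoustic speech with noise at a target SNR (dB).
    # Names and signals are illustrative, not from the cited study.

    def mix_at_snr(speech: np.ndarray, noise: np.ndarray,
                   snr_db: float) -> np.ndarray:
        """Scale the noise so the mixture has the requested SNR in dB."""
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        # Target: p_speech / (scale**2 * p_noise) == 10 ** (snr_db / 10)
        scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
        return speech + scale * noise

    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)   # stand-in for one second of speech
    noise = rng.standard_normal(16000)    # stand-in for masking noise
    mixed = mix_at_snr(speech, noise, 0.0)  # 0 dB: equal speech and noise power
    ```

    Lower (more negative) `snr_db` values yield harder listening conditions, which is where visual speech information, and effects like the McGurk illusion, become most influential.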