WorldWideScience

Sample records for audiovisual materials

  1. On-line repository of audiovisual material on feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  2. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create...

  3. The Iroquois, a Bibliography of Audio-Visual Materials--with Supplement. (Title Supplied).

    Science.gov (United States)

    Kellerhouse, Kenneth; and Others

    Approximately 25 sources of audiovisual materials pertaining to the Iroquois and other northeastern American Indian tribes are listed according to type of audiovisual medium. Among the less-common media are recordings of Iroquois music and do-it-yourself reproductions of Iroquois artifacts. Prices are given where applicable. (BR)

  4. Teleconferences and Audiovisual Materials in Earth Science Education

    Science.gov (United States)

    Cortina, L. M.

    2007-05-01

    Unidad de Educación Continua y a Distancia, Universidad Nacional Autónoma de México, Coyoacán 04510, Mexico. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. In some cases, however, these resources go largely unused, for reasons such as logistic problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audiovisual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses by teleconference demand student and teacher effort without physical contact, but participants have access to multimedia that supports the presentation. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of natural phenomena integral to the Earth sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We present specific examples of our experiences at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  5. Evaluation of Modular EFL Educational Program (Audio-Visual Materials Translation & Translation of Deeds & Documents)

    Science.gov (United States)

    Imani, Sahar Sadat Afshar

    2013-01-01

    Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced on both internal and external validity measures as well as the extent of compatibility of both courses with the…

  6. Catalogo de peliculas educativas y otros materiales audiovisuales (Catalogue of Educational Films and other Audiovisual Materials).

    Science.gov (United States)

    Encyclopaedia Britannica, Inc., Chicago, IL.

    This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…

  7. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    Science.gov (United States)

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  8. Anglo-American Cataloging Rules. Chapter Twelve, Revised. Audiovisual Media and Special Instructional Materials.

    Science.gov (United States)

    American Library Association, Chicago, IL.

    Chapter 12 of the Anglo-American Cataloging Rules has been revised to provide rules for works in the principal audiovisual media (motion pictures, filmstrips, videorecordings, slides, and transparencies) as well as instructional aids (charts, dioramas, flash cards, games, kits, microscope slides, models, and realia). The rules for main and added…

  9. Audio-Visual Aids: Historians in Blunderland.

    Science.gov (United States)

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  10. Nutrition Education Printed Materials and Audiovisuals: Grades 7-12, January 1979-May 1990. Quick Bibliography Series: QB 90-80.

    Science.gov (United States)

    Evans, Shirley King

    This annotated bibliography contains 203 citations from AGRICOLA, the U.S. Department of Agriculture database, dating from January 1979 through May 1990. The bibliography cites books, print materials, and audiovisual materials on the subject of nutrition education for grades 7-12. Each citation contains complete bibliographic information,…

  11. Relationship between Audio-Visual Materials and Environmental Factors on Students Academic Performance in Senior Secondary Schools in Borno State: Implications for Counselling

    Science.gov (United States)

    Bello, S.; Goni, Umar

    2016-01-01

    This is a survey study, designed to determine the relationship between audio-visual materials and environmental factors on students' academic performance in Senior Secondary Schools in Borno State: Implications for Counselling. The study set two research objectives, and tested two research hypotheses. The population of this study is 1,987 students…

  12. 音像资料联合编目规则探讨%Discussion on Cooperative Cataloging Rules of Audio-Visual Materials

    Institute of Scientific and Technical Information of China (English)

    胡大琴

    2012-01-01

    Through a comparative study of the cataloging data for audiovisual materials in several representative member libraries of the China Regional Libraries Network (CRLNet), we find that each library currently catalogs audiovisual materials independently and that the cataloging data differ widely across domestic libraries. The key reason is the lack of uniform cataloging rules. To unify the cataloging rules for audiovisual materials across CRLNet, Shenzhen Library solicited the views of member libraries and, combining them with its own cataloging practice, reached a common view: the cataloging of audiovisual materials may refer to the standards for certain fields of electronic resources, but in principle it should follow the cataloging rules for audio and video materials.
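A shared rule set of the kind CRLNet seeks can be enforced mechanically by validating each member library's records against one common field list. A minimal illustrative sketch; the field names below are invented for the example, not an actual CRLNet schema:

```python
# Validate raw catalog records against a shared field list, so that every
# member library emits the same minimal set of fields for an AV item.
REQUIRED_FIELDS = ("title", "carrier", "duration_minutes", "publisher")

def missing_fields(record: dict) -> list:
    """Return the required fields that are absent or empty in a raw record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# A record missing two of the agreed-upon fields:
record = {"title": "Introductory Lectures", "carrier": "DVD"}
print(missing_fields(record))  # ['duration_minutes', 'publisher']
```

A union catalog built this way can reject or flag nonconforming records at ingest time instead of reconciling divergent data later.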

  13. 医学视听教材制作及应用的新趋势%New Trend in Production and Application of Medical Audiovisual Teaching Materials

    Institute of Scientific and Technical Information of China (English)

    初万江; 白灿明; 许友松; 郝立宏; 耿成燕

    2011-01-01

    The trend in the production and application of medical audiovisual teaching materials reveals the substance of innovation in medical teaching methods. In today's reform of medical education, medical audiovisual teaching materials play an increasingly important role in raising the quality of medical teaching. To accommodate this reform, their development shows three trends: revitalizing traditional medical teaching resources, developing digital clinical diagnosis-and-treatment teaching resources, and producing series of medical audiovisual teaching materials. Applying the principle of "breaking up the whole into parts" to medical audiovisual teaching materials in educational practice is an important characteristic of the current innovation in medical teaching methods.

  14. 高校电教音像资料管理的探讨%Discussion on Electronic Management of College Teaching Audiovisual Materials

    Institute of Scientific and Technical Information of China (English)

    郭红兵

    2011-01-01

    With the rapid development of computer multimedia technology and the expansion of educational-technology applications, audiovisual teaching materials have changed greatly in both carrier media and resource volume, and traditional methods of managing college audiovisual materials can no longer meet current needs. The paper analyzes the characteristics of these materials and the current state and main problems of their management. On this basis, it discusses data sharing of audiovisual materials, improvement of the management system, optimization of management processes, and strengthening of system support capability.

  15. Audiovisual materials applied in medical multimedia courseware%谈音视频素材在医学多媒体课件中的应用

    Institute of Scientific and Technical Information of China (English)

    徐振亚; 张雅茹

    2012-01-01

    The paper first analyzes the characteristics and basic application requirements of audiovisual materials. Based on production and application practice, it then points out that audiovisual materials are an important part of medical multimedia courseware: used properly, they let courseware impart knowledge and inspire wisdom through vivid images and engaging sound.

  16. Audiovisual Interaction

    Science.gov (United States)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  17. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    Full Text Available This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media have progressively gained a greater role. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language, and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  18. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the histo

  19. 基于网络平台的汉语视听教材设计%Chinese Audio-visual Teaching Material Design Based on Internet Platform

    Institute of Scientific and Technical Information of China (English)

    徐文婷

    2012-01-01

    Over the past twenty years, teaching Chinese as a foreign language has developed rapidly, and the international promotion of Chinese has become one of the country's most important strategies for peaceful development. Although the theory and practice of teaching Chinese as a foreign language have achieved a great deal, few scholars have paid close attention to the educational and teaching ideas behind audio-visual teaching materials, which is a pity. The present paper therefore focuses on the design principles of web-based Chinese audio-visual teaching materials.

  20. 电教教材建设的实践与思考%Practice and thinking of audio-visual teaching materials construction

    Institute of Scientific and Technical Information of China (English)

    杨宝强; 刘守东; 王莹

    2013-01-01

    Audio-visual teaching materials have become an important means of improving teaching quality and implementing quality-oriented education, and an important way to cultivate students' innovative spirit and practical ability. This article analyzes the theory and practice of audio-visual teaching material construction at Air Force Engineering University and puts forward the following construction strategies: focus on training compound-type talents and improve the construction system; draw on the advantages of talent and technology to form joint construction strength; carry out fine-grained whole-process management to ensure construction quality; and promote co-construction and sharing of digital resources to enhance usage benefits. The study offers a useful reference for institutions of higher education exploring information-age talent training and the production of high-quality audio-visual teaching materials.

  1. El fénix quiere vivir : algunas consideraciones sobre la documentación audiovisual

    OpenAIRE

    2003-01-01

    The paper presents an overview of audio-visual documents, with a retrospective study and the different points of view of national and foreign authors on the importance of audio-visual materials and their organization, preservation and diffusion.

  2. Audiovisual Media and the Disabled. AV in Action 1.

    Science.gov (United States)

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2)…

  3. Collection of Digital Audio-visual Material Preservation and Backup Data Transfer%典藏音像资料保存与数字化备份转移

    Institute of Scientific and Technical Information of China (English)

    李浚

    2011-01-01

    The paper classifies collection audiovisual materials by carrier form and by the technical characteristics of their storage media, and proposes a preservation method suited to each type. Because carrier media cannot be preserved indefinitely, and because the playback equipment for many early recordings is about to be phased out, many valuable audiovisual materials face becoming unusable; the paper therefore argues that the digitization of audiovisual materials is urgent. Finally, it provides detailed methods for the digital transfer of audiovisual data.
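The digital-transfer step described above hinges on verifying that a backup copy of a digitized recording is bit-identical to its source. A minimal sketch of such a verification using checksums; this is an illustrative assumption, not the paper's own method, and the file paths and function names are hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    large digitized audio/video files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source: Path, backup: Path) -> bool:
    """A backup transfer is sound only if the copy is bit-identical."""
    return sha256_of(source) == sha256_of(backup)
```

In an archival workflow the source digest would typically be stored alongside the file, so later copies can be re-verified without the original carrier.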

  4. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how it can be used in specific (social, pedagogical, etc.) contexts and what its potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from theoretical, methodological, technical and practical perspectives.

  5. Teaching Materials for French. Recorded and Audio-Visual Courses; Recorded and Audio-Visual Supplementary Material; Books for Conversation-Comprehension-Composition-Translation; Pictorial Readers-Classroom Magazines, Books with Games & Puzzles-Playlets-Songs; Primary School French.

    Science.gov (United States)

    Centre for Information on Language Teaching, London (England).

    These five lists form an annotated bibliography of instructional materials for use in teaching French, classified according to the age and level of instruction for which they were intended. Each list treats a separate category of materials. There is a title index, as well as an index of authors, editors, compilers, and adaptors, with each list.…

  6. Plan empresa productora de audiovisuales : La Central Audiovisual y Publicidad

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    This document is the business-creation plan for La Central Publicidad y Audiovisual, a company dedicated to the pre-production, production and post-production of audiovisual material. The company will be located in the city of Cali, and its target market comprises the city's various companies, among them small, medium-sized and large enterprises.

  7. Audiovisual integration of stimulus transients

    DEFF Research Database (Denmark)

    Andersen, Tobias; Mamassian, Pascal

    2008-01-01

    leaving only unsigned stimulus transients as the basis for audiovisual integration. Facilitation of luminance detection occurred even with varying audiovisual stimulus onset asynchrony and even when the sound lagged behind the luminance change by 75 ms supporting the interpretation that perceptual...

  8. The Audio-Visual Man.

    Science.gov (United States)

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  9. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Science.gov (United States)

    2010-07-01

    ... transfer of USIA audiovisual records to the National Archives of the United States? 1256.96 Section 1256.96... Information Agency Audiovisual Materials in the National Archives of the United States § 1256.96 What provisions apply to the transfer of USIA audiovisual records to the National Archives of the United...

  10. Search behavior of media professionals at an audiovisual archive: A transaction log analysis

    NARCIS (Netherlands)

    Huurnink, B.; Hollink, L.; van den Heuvel, W.; de Rijke, M.

    2010-01-01

    Finding audiovisual material for reuse in new programs is an important activity for news producers, documentary makers, and other media professionals. Such professionals are typically served by an audiovisual broadcast archive. We report on a study of the transaction logs of one such archive. The an
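A transaction-log study of this kind typically begins by parsing the archive's query log and aggregating queries. A minimal sketch under assumed conditions; the tab-separated "session&lt;TAB&gt;query" format is an invented stand-in, not the archive's real log schema:

```python
from collections import Counter

def top_queries(log_lines, n=3):
    """Count query strings in a 'session_id<TAB>query' log and
    return the n most frequent ones (case-folded)."""
    counts = Counter()
    for line in log_lines:
        try:
            _session, query = line.rstrip("\n").split("\t", 1)
        except ValueError:
            continue  # skip malformed lines
        counts[query.strip().lower()] += 1
    return counts.most_common(n)

log = [
    "s1\tqueen beatrix",
    "s2\tworld cup 1974",
    "s1\tQueen Beatrix",
]
print(top_queries(log, 2))  # [('queen beatrix', 2), ('world cup 1974', 1)]
```

Real transaction-log analyses go further (sessionization, click-through rates, query reformulation), but they rest on this same parse-and-aggregate step.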

  11. On the Experience of Making Audiovisual Teaching Material:A Case Study of Rural Drinking Water Health%谈医学视听教材制作体会——以农村饮水卫生为例

    Institute of Scientific and Technical Information of China (English)

    张亭亭; 邵鹏

    2012-01-01

    Drawing on the production of the Ministry of Health audiovisual teaching material "Rural Drinking Water Health", the paper discusses topic selection, script writing, pre-production, shooting and post-production, as a guide for producing similar materials.

  12. Plantilla 1: El documento audiovisual: elementos importantes

    OpenAIRE

    2011-01-01

    The concept of the audiovisual document and of audiovisual documentation, examining in depth the distinction between the documentation of moving images (with the possible incorporation of sound) and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works and audiovisual heritage according to Félix del Valle.

  13. Blacklist Established in Chinese Audiovisual Market

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Chinese audiovisual market is to impose a ban on audiovisual product dealers whose licenses have been revoked for violating the law. This ban will prohibit them from dealing in audiovisual products for ten years. Their names are to be included on a blacklist made known to the public.

  14. [Audio-visual aids and tropical medicine].

    Science.gov (United States)

    Morand, J J

    1989-01-01

    The author presents a list of audio-visual productions on tropical medicine, together with their main characteristics. He notes that audio-visual educational productions are often dissociated from their promotion, and therefore invites future creators to forward their work to the Audio-Visual Health Committee.

  15. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  16. Publicación de materiales audiovisuales a través de un servidor de video-streaming Publication of audio-visual materials through a streaming video server

    Directory of Open Access Journals (Sweden)

    Acevedo Clavijo Edwin Jovanny

    2010-07-01

    Full Text Available This proposal studies several streaming-server alternatives in order to determine the best tool for publishing educational audiovisual material. The most widely used platforms were evaluated according to their features and benefits, among them Helix Universal Server, Microsoft Windows Media Server, Peer Cast and Darwin Server. A server with greater capability and benefits was implemented for publishing videos for academic purposes through the intranet of the Universidad Cooperativa de Colombia, Barrancabermeja campus.
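Publishing video over a campus intranet, as evaluated above, ultimately means running a server that exposes the files. The platforms the record compares (Helix, Windows Media Server, Peer Cast, Darwin) speak dedicated streaming protocols such as RTSP; purely as an illustrative stand-in, a plain HTTP file server can be sketched with Python's standard library. The directory name and port here are hypothetical:

```python
# Minimal sketch: serve a directory of video files over HTTP on an intranet.
# This is NOT a streaming server like Helix or Darwin (no RTSP/RTP); it only
# illustrates the publish-over-intranet idea with stdlib pieces.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_video_server(directory: str, port: int = 8080) -> HTTPServer:
    """Build an HTTP server rooted at `directory`; port 0 picks a free port."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("0.0.0.0", port), handler)

if __name__ == "__main__":
    server = make_video_server("videos")  # hypothetical media directory
    print(f"Serving on port {server.server_address[1]}")
    server.serve_forever()
```

A production deployment would instead use a media server or a web server with HTTP range-request support, so players can seek within files.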

  17. Bilingualism affects audiovisual phoneme identification.

    Science.gov (United States)

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience, i.e., the exposure to a double phonological code during childhood, affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  18. Bilingualism affects audiovisual phoneme identification

    Directory of Open Access Journals (Sweden)

    Sabine eBurfin

    2014-10-01

    Full Text Available We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience, i.e., the exposure to a double phonological code during childhood, affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded quicker than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  19. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively wi

  20. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception and ...

  1. Audio-Visual Aids in Universities

    Science.gov (United States)

    Douglas, Jackie

    1970-01-01

    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  2. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Full Text Available Michel Foucault teaches that all systematic speech, including speech that claims to be "neutral" or "a disinterested, objective view of what happens", is in fact a mechanism for articulating knowledge and, in turn, for forming power. The appearance of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of declarations by filmmakers, essays by academics and predictions by media demiurges.

  3. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor, Office of the Secretary of Labor, GENERAL REGULATIONS, Audiovisual Coverage of Administrative Hearings, § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of...

  4. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality with special application to audiovisual English Arabic translation.  Ample and thoroughly analysed examples have been provided to help in audiovisual English-Arabic translation decision-making. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible.  The same is equally applicable to audiovisual translation (AVT. The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when that text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text will be achieved when all aspects of cohesive devices are well accounted for pragmatically.  This combined with a good amount of psycholinguistic element will provide a text with optimal communicative value. Non-text is certainly devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in AV environment, as in any dialogue, often carries accidental knowledge.  This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound, and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce a final appropriate product.  Keywords: Arabic audiovisual translation, coherence, cohesion, textuality

  5. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  6. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both are regarded as carriers of emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance; rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent the emotional information, and 52 audiovisual features are obtained when the synchronized speech and video streams are fused. The experimental results demonstrate that the system performs well in real-time practice and achieves a high recognition rate. Our results also suggest that multimodal fused recognition will become the trend in emotion recognition.
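The fusion-and-classification pipeline summarized above can be sketched in greatly simplified form. The feature values, class centroids, and the nearest-centroid rule below are illustrative assumptions, not the paper's actual selected features or classifier:

```python
# Hypothetical sketch of feature-level audiovisual fusion followed by a
# nearest-centroid classifier; all numbers and emotion labels are invented
# toy data, not taken from the paper.
import math

def fuse(speech_feats, facial_feats):
    """Concatenate per-modality feature vectors into one audiovisual vector."""
    return speech_feats + facial_feats

def nearest_centroid(sample, centroids):
    """Return the label of the closest class centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Toy centroids for two emotion classes in a fused 4-D space
# (2 speech features + 2 facial features).
centroids = {
    "happy": [0.8, 0.7, 0.9, 0.6],
    "sad":   [0.2, 0.3, 0.1, 0.4],
}

sample = fuse([0.75, 0.65], [0.85, 0.55])
print(nearest_centroid(sample, centroids))  # -> happy
```

In a real system the concatenation step would follow per-modality feature selection, so that only the retained speech and facial features enter the fused vector.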

  7. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Science.gov (United States)

    2010-07-01

    36 CFR 1256.98, Can I get access to and obtain copies of USIA audiovisual records transferred to the National Archives of the United States? Parks, Forests, and Public Property; National Archives and Records Administration; ... United States Information Agency Audiovisual Materials in the National Archives of the United...

  8. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.
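The temporal-pooling idea mentioned in this abstract can be sketched as follows. The Minkowski exponent, the 5-point MOS scale, and the degradation mapping are illustrative assumptions, not the model from the book:

```python
# Hypothetical sketch of temporal pooling: short-sample quality scores for a
# call are pooled so that strongly degraded periods weigh more than a plain
# average would allow. The exponent p and the MOS-based degradation mapping
# are illustrative assumptions.
def pool_call_quality(mos_scores, p=2.0):
    """Pool short-sample MOS scores (1..5) into a single call-level score."""
    degradations = [5.0 - m for m in mos_scores]  # higher value = worse period
    pooled = (sum(d ** p for d in degradations) / len(degradations)) ** (1.0 / p)
    return 5.0 - pooled

print(pool_call_quality([4, 4, 4, 4]))            # steady quality -> 4.0
print(round(pool_call_quality([4, 4, 1, 4]), 2))  # a severe dip dominates -> 2.82
```

With p = 1 this reduces to the arithmetic mean, while larger p pushes the pooled score toward the worst sample, which is one way to model the perceptual impact of time-varying degradations.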

  9. 42 CFR 4.5 - Use of materials from the collections.

    Science.gov (United States)

    2010-10-01

    ..., which need not be returned unless otherwise stated at the time of the loan. (2) Loans of audiovisual materials. Audiovisual materials are available for loan under the same general terms as printed...

  10. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    Full Text Available It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sounds, and (iii) unaltered speech sounds. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  11. Sistemas de Registro Audiovisual del Patrimonio Urbano (SRAPU)

    OpenAIRE

    Conles, Liliana Eva

    2006-01-01

    The SRAPU system is a film-based survey method designed to build an interactive database of the urban landscape. On this basis, it pursues the formulation of criteria ordered in terms of flexibility and economic efficiency, efficient data management, and the democratization of information. SRAPU is conceived as an audiovisual record of tangible and intangible heritage, both in its singularity and as a historical and natural whole. Its design involves the pro...

  12. La Documentación Audiovisual en las empresas televisivas

    OpenAIRE

    2003-01-01

    Information systems and audiovisual documentation services in television are part of a larger machinery that keeps audiovisual companies running properly. This paper presents the main characteristics of audiovisual documentation within television organizations, offering a concise overview of the most relevant aspects that the principal users of these services need to know. The article seeks to demonstrate the importance of these services and to show the possibilities that they offer...

  13. Audiovisual integration facilitates unconscious visual scene processing.

    Science.gov (United States)

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration.

  14. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  15. Utilization of audio-visual aids by family welfare workers.

    Science.gov (United States)

    Naik, V R; Jain, P K; Sharma, B B

    1977-01-01

    Communication efforts have been an important component of the Indian Family Planning Welfare Program since its inception. However, its chief interests in its early years were clinical, until the adoption of the extension approach in 1963. Educational materials were developed, especially in the period 1965-8, to fit mass, group meeting and home visit approaches. Audiovisual aids were developed for use by extension workers, who had previously relied entirely on verbal approaches. This paper examines their use. A questionnaire was designed for workers in motivational programs at 3 levels: Village Level (Family Planning Health Assistant, Auxilliary Nurse-Midwife, Dias), Block Level (Public Health Nurse, Lady Health Visitor, Block Extension Educator), and District (District Extension Educator, District Mass Education and Information Officer). 3 Districts were selected from each State on the basis of overall family planning performance during 1970-2 (good, average, or poor). Units of other agencies were also included on the same basis. Findings: 1) Workers in all 3 categories preferred individual contacts over group meetings or mass approach. 2) 56-64% said they used audiovisual aids "sometimes" (when available). 25% said they used them "many times" and only 15.9% said "rarely." 3) More than 1/2 of workers in each category said they were not properly oriented toward the use of audiovisual aids. Nonavailability of the aids in the market was also cited. About 1/3 of village level and 1/2 of other workers said that the materials were heavy and liable to be damaged. Complexity, inaccuracy and confusion in use were not widely cited (less than 30%).

  16. Challenges of Using Audio-Visual Aids as Warm-Up Activity in Teaching Aviation English

    Science.gov (United States)

    Sahin, Mehmet; Sule, St.; Seçer, Y. E.

    2016-01-01

    This study aims to identify the challenges encountered in the use of video as audio-visual material for warm-up activities in an aviation English course at the high school level. The study is based on a qualitative design in which a focus group interview is used as the data collection procedure. The participants of the focus group are four instructors teaching…

  17. Eyewitnesses of History: Italian Amateur Cinema as Cultural Heritage and Source for Audiovisual and Media Production

    NARCIS (Netherlands)

    Simoni, Paolo

    2015-01-01

    The role of amateur cinema as archival material in Italian media productions has only recently been discovered. Italy, as opposed to other European countries, lacked a local, regional and national policy for the collection and preservation of private audiovisual documents, which led, as a re...

  18. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    29 CFR 2.12, Audiovisual coverage permitted (2010-07-01). Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.12. The following are the types of hearings where the...

  19. Píndoles audiovisuals 3x3

    OpenAIRE

    Raja Nadales, Daniel

    2014-01-01

    Creation of three audiovisual "pills" of approximately three minutes each, consisting of a series of tips on health, patient care and the patient's environment, designed to be of practical use to the user. The pills use plain, easily understood language and are freely accessible through distribution over the Internet, adapted to any electronic audiovisual playback device.

  20. El tratamiento documental del mensaje audiovisual Documentary treatment of the audio-visual message

    Directory of Open Access Journals (Sweden)

    Blanca Rodríguez Bravo

    2005-06-01

    Full Text Available Peculiarities of the audiovisual document and the treatment it undergoes in TV broadcasting stations are analyzed. The particular features of images condition their analysis and retrieval; this paper establishes the stages and procedures for representing the audiovisual message with a view to its reuse. Finally, some considerations are made about the automatic processing of video and the changes introduced by digital TV.

  1. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  2. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  3. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.

  4. Audiovisual Prosody and Feeling of Knowing

    Science.gov (United States)

    Swerts, M.; Krahmer, E.

    2005-01-01

    This paper describes two experiments on the role of audiovisual prosody for signalling and detecting meta-cognitive information in question answering. The first study consists of an experiment, in which participants are asked factual questions in a conversational setting, while they are being filmed. Statistical analyses bring to light that the…

  5. Audiovisual vocal outburst classification in noisy conditions

    NARCIS (Netherlands)

    Eyben, Florian; Petridis, Stavros; Schuller, Björn; Pantic, Maja

    2012-01-01

    In this study, we investigate an audiovisual approach for classification of vocal outbursts (non-linguistic vocalisations) in noisy conditions using Long Short-Term Memory (LSTM) Recurrent Neural Networks and Support Vector Machines. Fusion of geometric shape features and acoustic low-level descriptors...

  6. Active Methodology in the Audiovisual Communication Degree

    Science.gov (United States)

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  7. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
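The "shift in the mean of the individually fitted psychometric functions" can be illustrated with a toy fit. The Gaussian synchrony-window model, its fixed width, the grid-search fitting procedure, and the response data below are all invented for illustration, not the study's method or data:

```python
# Simplified sketch of estimating an adaptation effect: fit a Gaussian
# "synchrony window" to proportion-synchronous responses as a function of
# audiovisual offset (SOA, ms), and take the shift in the fitted window
# centre after adapting to asynchrony. Toy data; grid-search fit only.
import math

def fit_gaussian_mean(soas, p_sync, sigma=150.0):
    """Grid-search the window centre that best fits the responses (least squares)."""
    def sse(mu):
        return sum((p - math.exp(-((s - mu) ** 2) / (2 * sigma ** 2))) ** 2
                   for s, p in zip(soas, p_sync))
    return min(range(-200, 201), key=sse)

soas     = [-300, -150, 0, 150, 300]        # negative SOA = sound leads (ms)
baseline = [0.15, 0.60, 0.95, 0.62, 0.14]   # after synchronous adaptation
adapted  = [0.08, 0.40, 0.85, 0.80, 0.30]   # after sound-lag adaptation

shift = fit_gaussian_mean(soas, adapted) - fit_gaussian_mean(soas, baseline)
print(shift)  # positive: the synchrony window moves toward sound-lag offsets
```

In practice the window width would also be fitted per observer, and the adaptation effect would be compared across age groups as in the study.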

  8. Reduced audiovisual recalibration in the elderly

    Directory of Open Access Journals (Sweden)

    Yu Man eChan

    2014-08-01

    Full Text Available Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy ageing results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of ageing on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for fifteen younger (22-32 years old) and fifteen older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (± standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous, nor with their synchrony window widths. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.

  9. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    36 CFR 1237.12, What record elements must be created and preserved for permanent audiovisual records? Parks, Forests, and Public Property; National Archives and Records Administration; Records Management; Audiovisual, Cartographic... For permanent audiovisual records, the following record elements must...

  10. Audio-visual affective expression recognition

    Science.gov (United States)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention of researchers from different disciplines, which will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance the research in the affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables human to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  11. Stuttering and speech naturalness: audio and audiovisual judgments.

    Science.gov (United States)

    Martin, R R; Haroldson, S K

    1992-06-01

    Unsophisticated raters, using 9-point interval scales, judged speech naturalness and stuttering severity of recorded stutterer and nonstutterer speech samples. Raters judged separately the audio-only and audiovisual presentations of each sample. For speech naturalness judgments of stutterer samples, raters invariably judged the audiovisual presentation more unnatural than the audio presentation of the same sample; but for the nonstutterer samples, there was no difference between audio and audiovisual naturalness ratings. Stuttering severity ratings did not differ significantly between audio and audiovisual presentations of the same samples. Rater reliability, interrater agreement, and intrarater agreement for speech naturalness judgments were assessed.
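Agreement on interval scales of this kind is often summarized as the proportion of paired ratings that agree exactly or within one scale point. A minimal sketch with invented 9-point ratings (the data and the tolerance choice are illustrative assumptions, not the study's figures):

```python
# Illustrative computation of two simple agreement measures for 9-point
# scale ratings: exact agreement, and agreement within +/-1 scale point.
# The ratings below are invented toy data.
def agreement(ratings_a, ratings_b, tolerance=0):
    """Proportion of paired ratings differing by at most `tolerance` points."""
    pairs = list(zip(ratings_a, ratings_b))
    hits = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return hits / len(pairs)

rater1 = [3, 5, 7, 2, 8, 6, 4, 5]
rater2 = [4, 5, 6, 2, 8, 7, 4, 6]

print(agreement(rater1, rater2))               # exact agreement -> 0.5
print(agreement(rater1, rater2, tolerance=1))  # within one scale point -> 1.0
```

The same function applied to one rater's repeated judgments of the same samples gives a simple intrarater agreement figure.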

  12. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder.

    Science.gov (United States)

    de Boer-Schellekens, Liselotte; Eussen, Mart; Vroomen, Jean

    2013-01-01

    We examined sensitivity of audiovisual temporal order in adolescents with autism spectrum disorder (ASD) using an audiovisual temporal order judgment (TOJ) task. In order to assess domain-specific impairments, the stimuli varied in social complexity from simple flash/beeps to videos of a handclap or a speaking face. Compared to typically-developing controls, individuals with ASD were generally less sensitive in judgments of audiovisual temporal order (larger just noticeable differences, JNDs), but there was no specific impairment with social stimuli. This suggests that people with ASD suffer from a more general impairment in audiovisual temporal processing.

  13. Audiovisual bimodal mutual compensation of Chinese

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The perception of human languages is inherently a multi-modal process, in which audio information can be compensated by visual information to improve recognition performance. This phenomenon has been researched in English, German, Spanish and other languages, but it has not yet been reported for Chinese. In our experiment, 14 syllables (/ba, bi, bian, biao, bin, de, di, dian, duo, dong, gai, gan, gen, gu/), extracted from the Chinese audiovisual bimodal speech database CAVSR-1.0, were pronounced by 10 subjects. The audio-only stimuli, audiovisual stimuli, and visual-only stimuli were recognized by 20 observers. The audio-only and audiovisual stimuli were both presented under 5 conditions: no noise, SNR 0 dB, -8 dB, -12 dB, and -16 dB. From the experimental results, the following conclusions are reached for Chinese speech. Human beings can recognize visual-only stimuli rather well. The place of articulation determines visual distinctiveness. In noisy environments, audio information can be remarkably well compensated by visual information, and as a result the recognition performance is greatly improved.

  14. Materials Used in Bilingual Programs.

    Science.gov (United States)

    New York City Board of Education, Brooklyn, NY. Bilingual Resource Center.

    This list, prepared by the Bilingual Resource Center in New York City, of instructional materials used in bilingual programs includes textbooks, educational materials, and audio-visual aids used in the various school districts of New York City. (SK)

  15. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The poster demonstrates the idea with text material spoken by one individual subject; a set of simple audio-visual lip features is used. The data material for the training process consists of 44 labeled sentences in German with a balanced phoneme repertoire. Phoneme-related receptive fields result on the SOM basis; they are speaker dependent and show individual locations and strain. Overlapping main slopes indicate a high similarity of respective units; distortion or extra peaks originate from the influence of other units. Dependent on the training data, these other units may also be contextually immediate neighboring units. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst...

  16. Panorama de les fonts audiovisuals internacionals en televisió : contingut, gestió i drets

    Directory of Open Access Journals (Sweden)

    López de Solís, Iris

    2014-12-01

    Full Text Available At both the national and regional levels, Spain's main public service television channels rely on a number of audiovisual sources to report on international affairs, including news agencies, news consortia and correspondent networks. Using data provided by different channels, this paper examines the coverage, use and management of these sources, as well as the rights governing their use and archiving, and analyzes the history and online tools of the most widely used agencies. Finally, it describes the daily work of TVE's Eurovision department, which in recent months has incorporated documentalists who, in addition to cataloguing the audiovisual material, also carry out editing and production tasks.

  17. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United...? 1256.100 Section 1256.100 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... United States once NARA has: (1) Ensured, as described in paragraph (c) of this section, that you...

  18. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction relates to: the processing of acoustical features or the classification of stimuli? To investigate this question, event-related potentials were recorded…

  19. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  20. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of getting to…

  1. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  2. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  3. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J

    2010-07-01

    The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that touched the rectangle. Local autoregressive average source estimation indicated that this audiovisual interaction may be related to integrative processing in auditory areas. When the moving disks did not precede the audiovisual stimulus--making the onset unpredictable--there was no N1 reduction. In Experiment 2, the predictability of the leading visual signal was manipulated by introducing a temporal asynchrony between the audiovisual event and the collision of moving disks. Audiovisual events occurred either at the moment, before (too "early"), or after (too "late") the disks collided on the rectangle. When asynchronies varied from trial to trial--rendering the moving disks unreliable temporal predictors of the audiovisual event--the N1 reduction was abolished. These results demonstrate that the N1 suppression is induced by visual information that both precedes and reliably predicts audiovisual onset, without a necessary link to human action-related neural mechanisms.

  4. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    Science.gov (United States)

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  5. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  6. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signal from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggest that audiovisual integration of specific aspects is special for speech perception. However, o...

  7. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  8. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is per

  9. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  10. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  11. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    Science.gov (United States)

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  12. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges.

    Science.gov (United States)

    Al-Sharhan, Jamal A.

    1993-01-01

    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  13. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Science.gov (United States)

    2010-01-01

    ... audiovisuals. 3015.200 Section 3015.200 Agriculture Regulations of the Department of Agriculture (Continued... Miscellaneous § 3015.200 Acknowledgement of support on publications and audiovisuals. (a) Definitions. Appendix A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b)...

  14. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  15. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  16. Audiovisual bimodal mutual compensation of Chinese

    Institute of Scientific and Technical Information of China (English)

    ZHOU; Zhi

    2001-01-01

    [1]Richard, P., Schumeyer, Kenneth E. B., The effect of visual information on word initial consonant perception of dysarthric speech, in Proc. ICSLP'96, October 3-6, 1996, Philadelphia, Pennsylvania, USA.[2]Goff, B. L., Marigny, T. G., Benoit, C., Read my lips...and my jaw! How intelligible are the components of a speaker's face? Eurospeech'95, 4th European Conference on Speech Communication and Technology, Madrid, September 1995.[3]McGurk, H., MacDonald, J. Hearing lips and seeing voices, Nature, 1976, 264: 746.[4]Duran, A. F., McGurk effect in Spanish and German listeners: Influences of visual cues in the perception of Spanish and German conflicting audio-visual stimuli, Eurospeech'95, 4th European Conference on Speech Communication and Technology, Madrid, September 1995.[5]Luettin, J., Visual speech and speaker recognition, Ph.D thesis, University of Sheffield, 1997.[6]Xu Yanjun, Du Limin, Chinese audiovisual bimodal speech database CAVSR1.0, Chinese Journal of Acoustics, to appear.[7]Zhang Jialu, Speech corpora and language input/output methods' evaluation, Chinese Applied Acoustics, 1994, 13(3): 5.

  17. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  18. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  19. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    Science.gov (United States)

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918

  20. Confession Function of Synchronized Audiovisual Recordings (论同步录音录像的口供功能)

    Institute of Scientific and Technical Information of China (English)

    谢小剑; 颜翔

    2014-01-01

    In practice, synchronized audiovisual recordings are mostly used as audiovisual materials to prove the authenticity and legality of the interrogation record, while it is overlooked that such recordings, as videotaped confession records, capture a large amount of non-textual information from the questioning. Synchronized audiovisual recordings have distinctive confession functions: uncovering case leads, identifying breakthroughs in an investigation, and directly proving the existence of criminal facts by serving as the criminal suspect's confession. The procuratorate treating synchronized audiovisual recordings as the suspect's confession would better resolve the shortage of evidence in self-investigated criminal cases. To realize this confession function, investigators' ability to analyze and assess synchronized audiovisual recordings should be improved, efforts should be made to bring such recordings into court as confession evidence, and judges' cognitive bias when viewing the recordings should be avoided.

  1. A Review on Audio-visual Translation Studies

    Institute of Scientific and Technical Information of China (English)

    李瑶

    2008-01-01

    This paper offers a thorough review of audio-visual translation studies from both home and abroad. Reviewing foreign achievements in this specific field of translation studies can shed some light on our national audio-visual practice and research. The review of Chinese scholars' audio-visual translation studies is meant to point out potential directions and guidelines for the studies and aspects neglected so far. Based on this summary of relevant studies, possible topics for further research are proposed.

  2. High visual resolution matters in audiovisual speech perception, but only for some.

    Science.gov (United States)

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  3. Child's dental fear: Cause-related factors and the influence of audiovisual modeling

    Directory of Open Access Journals (Sweden)

    Jayanthi Mungara

    2013-01-01

    Full Text Available Background: Delivery of effective dental treatment to a child patient requires thorough knowledge to recognize dental fear and to manage it by applying behavioral management techniques. The Children's Fear Survey Schedule - Dental Subscale (CFSS-DS) helps identify the specific stimuli that provoke fear in children in the dental situation. Audiovisual modeling can be used successfully in pediatric dental practice. Aim: To assess the degree of fear provoked by various stimuli in the dental office and to evaluate the effect of audiovisual modeling on children's dental fear using the CFSS-DS. Materials and Methods: Ninety children were divided equally into experimental (group I) and control (group II) groups and were assessed over two visits for their degree of fear and the effect of audiovisual modeling, with the help of the CFSS-DS. Results: The most fear-provoking stimulus for children was injection and the least was opening the mouth and having somebody look at them. There was no statistically significant difference in the overall mean CFSS-DS scores between the two groups during the initial session (P > 0.05). However, in the final session, a statistically significant difference was observed in the overall mean fear scores between the groups (P < 0.01). Significant improvement was seen in group I, while no significant change was noted in group II. Conclusion: Audiovisual modeling resulted in a significant reduction of overall fear as well as of specific fears for most items: fear of dentists, doctors in general, injections, being looked at, the sight, sounds, and act of the dentist drilling, and having the nurse clean their teeth.

  4. Gestión de la documentación audiovisual en Televisión Valenciana

    OpenAIRE

    2004-01-01

    Management of the audiovisual documentation at Valencian Television (RTVV). The Documentation Unit of RTVV belongs to the Directorate of Management and Planning of Human and Material Resources, under the Department of General Services. The Unit was created in 1990 to organize the materials broadcast and generated by the radio and television companies, although, months before TVV and Ràdio 9 began broadcasting, the service of document...

  5. Nuevos actores sociales en el escenario audiovisual

    Directory of Open Access Journals (Sweden)

    Gloria Rosique Cedillo

    2012-04-01

    Full Text Available Following the entry of private broadcasters into the Spanish audiovisual sector, the entertainment content of generalist television underwent far-reaching changes that were reflected in programming schedules. This situation has opened a debate over having a television service, whether public or private, that does not meet the social expectations placed on it. This has led civil groups, organized into viewers' associations, to undertake various actions aimed at influencing the direction that entertainment content is taking, with a strong commitment to educating viewers about audiovisual media and to citizen participation in television matters.

  6. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model, it also employs more free parameters in complex experiments when the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have...

  7. An audiovisual database of English speech sounds

    Science.gov (United States)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word initial, word medial, and word final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45 deg angle off of center, and ultrasound video of the tongue in the mid-saggital plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  8. A measure for assessing the effects of audiovisual speech integration.

    Science.gov (United States)

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.

  9. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  10. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion, where seeing the talking face influences the auditory phonetic percept, and by the audiovisual detection advantage, where seeing the talking face influences the detectability of the acoustic speech signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  11. Cinema, Vídeo, Digital: a virtualidade do audiovisual

    Directory of Open Access Journals (Sweden)

    Polidoro, Bruno

    2008-01-01

    Full Text Available The article reflects on the diverse contemporary manifestations of the audiovisual, drawing on the ideas of Vilém Flusser and focusing on cinema, video, and digital technologies. Using Henri Bergson's concepts, it seeks to understand the audiovisual as a virtuality and, in doing so, to grasp the meaning of language across these various supports of sound and image

  12. Development and utilization of low-cost audio-visual aids in population communication.

    Science.gov (United States)

    1980-07-01

    One of the reasons why population information has to a certain degree failed to create demand for family planning services is that the majority of information and communication materials being used have been developed in an urban setting, resulting in their inappropriateness to the target rural audiences. Furthermore, their having been evolved in urban centers has hampered their subsequent replication, distribution, and use in rural areas due to lack of funds, production and distribution resources. For this reason, many developing countries in Asia have begun to demand population materials which are low-cost and simple, more appropriate to rural audiences and within local production resources and capabilities. In the light of this identified need, the Population Communication Unit, with the assistance of the Population Education Mobile Team and Clearing House, Unesco, has collaborated with the Population Center Foundation of the Philippines to undertake a Regional Training Workshop on the Design, Development, and Utilization of Low-Cost Audiovisual Aids in the Philippines from 21-26 July 1980. The Workshop, which will be attended by communications personnel and materials developers from Bangladesh, Indonesia, Nepal, the Philippines, Sri Lanka and Thailand, will focus on developing the capabilities of midlevel population program personnel in conceptualizing, designing, developing, testing and utilizing simple and low-cost audiovisual materials. It is hoped that with the skills acquired from the Workshop, participants will be able to increase their capability in training their own personnel in the development of low-cost materials.

  13. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Science.gov (United States)

    Schwartz, Jean-Luc; Savariaux, Christophe

    2014-07-01

    An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  14. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Directory of Open Access Journals (Sweden)

    Jean-Luc Schwartz

    2014-07-01

    Full Text Available An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.
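    The audio lead/lag values reported above can be estimated, in a simplified setting, by cross-correlating an acoustic-envelope signal with a lip-aperture trajectory. This is a toy sketch on a synthetic pulse, with invented signal names; it is not the authors' measurement procedure.

```python
import numpy as np

def estimate_lag_ms(audio_env, lip_aperture, fs):
    """Lag of the audio envelope relative to the lip signal, in ms:
    positive = audio lag, negative = audio lead. The estimate is the
    peak of the full cross-correlation of the mean-centered signals."""
    a = audio_env - np.mean(audio_env)
    v = lip_aperture - np.mean(lip_aperture)
    xcorr = np.correlate(a, v, mode="full")
    lag_samples = int(np.argmax(xcorr)) - (len(v) - 1)
    return 1000.0 * lag_samples / fs

# Synthetic check: a smooth "lip opening" pulse, with the "audio" event
# occurring 5 samples later at fs = 100 Hz, i.e. a 50 ms audio lag.
fs = 100
t = np.arange(200)
lip = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
audio = np.roll(lip, 5)
print(estimate_lag_ms(audio, lip, fs))  # → 50.0
```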

  15. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  16. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements... limited exclusion order against certain infringing audiovisual components and products containing the...

  17. Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks

    NARCIS (Netherlands)

    Eyben, Florian; Petridis, Stavros; Schuller, Björn; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We investigate the classification of non-linguistic vocalisations with a novel audiovisual approach, using Long Short-Term Memory (LSTM) Recurrent Neural Networks as highly successful dynamic sequence classifiers. The evaluation database is this year's Paralinguistic Challenge's Audiovisual Interest

  18. Realización audiovisual y creación de sentido en la música.El caso del videoclip musical de Nuevo Flamenco

    OpenAIRE

    Sedeño Valdellós, Ana María

    2003-01-01

    The music video is a type of audiovisual production developed by the cultural recording industry for the purpose of music promotion. As an advertising technique, through its audiovisual configuration it can add value or meaning to the musical products it accompanies, goods that are, in themselves, highly symbolic. Moreover, its production processes and the final material result are conditioned by the conditions and logics of...

  19. Audio-Visual Integration of Emotional Information

    Directory of Open Access Journals (Sweden)

    Penny Bergman

    2011-10-01

    Full Text Available Emotions are central to our perception of the environment surrounding us (Berlyne, 1971). An important aspect of the emotional response to a sound depends on the meaning of the sound, i.e., it is not the physical parameter per se that determines our emotional response to the sound but rather the source of the sound (Genell, 2008) and the relevance it has to the self (Tajadura-Jiménez et al., 2010). When exposed to sound together with visual information, the information from both modalities is integrated, altering the perception of each modality, in order to generate a coherent experience. For emotional information this integration is rapid and does not require attentional processes (De Gelder, 1999). The present experiment investigates perception of pink noise in two visual settings in a within-subjects design. Nineteen participants rated the same sound twice in terms of pleasantness and arousal in either a pleasant or an unpleasant visual setting. The results showed that pleasantness of the sound decreased in the negative visual setting, thus suggesting an audio-visual integration in which the affective information in the visual modality is translated to the auditory modality when information-markers are lacking in it. The results are discussed in relation to theories of emotion perception.

  20. Temporal structure in audiovisual sensory selection.

    Directory of Open Access Journals (Sweden)

    Anne Kösem

    Full Text Available In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to what extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (a horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.

  1. A representação audiovisual das mulheres migradas The audiovisual representation of migrant women

    Directory of Open Access Journals (Sweden)

    Luciana Pontes

    2012-12-01

    Full Text Available In this paper I analyze the representations of migrant women in the audiovisual collections of organizations that work with gender and immigration in Barcelona. In these audiovisual materials I found a recurring association of migrant women with poverty, criminality, ignorance, passivity, lack of documentation, gender violence, compulsory and numerous motherhood, prostitution, etc. I therefore sought to understand how these representations take shape, studying the narrative, stylistic, visual and verbal elements through which these images and discourses about migrant women are articulated.

  2. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter color known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  3. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, the case in which the network is already set up, the aud...
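    Parametric models of this kind typically combine the audio and video quality estimates with a multiplicative integration term. The sketch below shows only the generic functional form; the coefficients are illustrative placeholders, not those of the model described in this volume.

```python
def audiovisual_quality(q_audio, q_video, a=0.25, b=0.15, c=0.15, d=0.16):
    """Generic audiovisual integration on a 1..5 MOS scale:
    Q_av = a + b*Q_a + c*Q_v + d*Q_a*Q_v, clipped to the scale.
    Coefficients here are invented for illustration."""
    q = a + b * q_audio + c * q_video + d * q_audio * q_video
    return max(1.0, min(5.0, q))

# Good audio (MOS 4.0) paired with mediocre video (MOS 3.0):
print(audiovisual_quality(4.0, 3.0))
```

    The cross term reflects the common empirical finding that the two modalities interact: poor video drags down the perceived quality of good audio, and vice versa.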

  4. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole. 

  5. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  6. The Effects of Audio-Visual Recorded and Audio Recorded Listening Tasks on the Accuracy of Iranian EFL Learners' Oral Production

    Science.gov (United States)

    Drood, Pooya; Asl, Hanieh Davatgari

    2016-01-01

    The ways in which tasks in classrooms have developed and proceeded have received great attention in the field of language teaching and learning, in the sense that they draw learners' attention to competing features such as accuracy, fluency, and complexity. English audiovisual and audio-recorded materials have been widely used by teachers and…

  7. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Institution of Investigation... importation, and the sale within the United States after importation of certain audiovisual components and... certain audiovisual components and products containing the same that infringe one or more of claims 1,...

  8. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  9. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  10. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and...

  11. Cataloging, Processing, Administering AV Materials. A Model for Wisconsin Schools.

    Science.gov (United States)

    Little, Robert D., Ed.

    The objective of this cataloging manual is to recommend specific methods for cataloging audiovisual materials for use in individual school media centers. The following types of audiovisual aids are included: educational games, filmstrips, flat graphics, kits, models, motion pictures, realia, records, slides, sound filmstrips, tapes,…

  12. Dynamic Bayesian Networks for Audio-Visual Speech Recognition

    Directory of Open Access Journals (Sweden)

    Liang Luhong

    2002-01-01

    Full Text Available The use of visual features in audio-visual speech recognition (AVSR) is justified both by the speech generation mechanism, which is essentially bimodal in its audio and visual representation, and by the need for features that are invariant to acoustic noise perturbation. As a result, current AVSR systems demonstrate significant accuracy improvements in environments affected by acoustic noise. In this paper, we describe the use of two statistical models for audio-visual integration, the coupled HMM (CHMM) and the factorial HMM (FHMM), and compare the performance of these models with existing models used in speaker-dependent audio-visual isolated word recognition. The statistical properties of both the CHMM and the FHMM make it possible to model the state asynchrony of the audio and visual observation sequences while preserving their natural correlation over time. In our experiments, the CHMM performs best overall, outperforming all the existing models and the FHMM.
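    As a baseline for the models compared in this paper, audio and visual observation likelihoods can be fused state-synchronously in a single HMM; the CHMM and FHMM generalize this by letting the two streams' states evolve asynchronously. A minimal sketch, assuming per-frame, per-state likelihoods are already computed; this is not the authors' implementation.

```python
import numpy as np

def forward_av(pi, A, b_audio, b_visual):
    """Scaled forward algorithm with state-synchronous audiovisual fusion.
    pi: (S,) initial state probabilities; A: (S, S) transition matrix;
    b_audio, b_visual: (T, S) per-frame observation likelihoods per state.
    Returns the log-likelihood of the fused observation sequence."""
    b = b_audio * b_visual            # fuse the two modalities per state
    alpha = pi * b[0]
    loglik = 0.0
    for t in range(1, b.shape[0]):
        c = alpha.sum()               # rescale to avoid numerical underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ A * b[t]
    return loglik + np.log(alpha.sum())

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
flat = np.ones((3, 2))                # uninformative observations
print(forward_av(pi, A, flat, flat))  # ≈ 0.0 for flat (all-ones) likelihoods
```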

  13. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences… integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration…

  14. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  15. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
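    The derived quantities mentioned here, the point of subjective simultaneity (PSS) and the width of the temporal integration window, can be estimated from synchrony-judgment data. Below is a crude moment-based sketch on made-up data; it does not reproduce the authors' per-participant fitting.

```python
import numpy as np

# Made-up proportions of "synchronous" responses at each stimulus-onset
# asynchrony (SOA, ms; negative = audio leads the video).
soa_ms = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sync = np.array([0.05, 0.20, 0.70, 0.95, 0.90, 0.45, 0.10])

w = p_sync / p_sync.sum()                          # normalize to weights
pss = np.sum(w * soa_ms)                           # weighted-mean SOA
width = np.sqrt(np.sum(w * (soa_ms - pss) ** 2))   # weighted SD of SOA

print(f"PSS ~ {pss:.1f} ms, window SD ~ {width:.1f} ms")
```

    A positive PSS with this invented data set reflects the common finding that audio lag is tolerated more readily than audio lead.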

  16. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode.

  17. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
    These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  18. Herramienta observacional para el estudio de conductas violentas en un cómic audiovisual

    Directory of Open Access Journals (Sweden)

    Zaida Márquez

    2012-01-01

    Full Text Available Abstract This research paper presents a study that aimed to structure a system of categories for observing and describing violent behavior in an audiovisual children's program, specifically in cartoons. An audiovisual cartoon with three main female characters was selected, and one of its chapters was chosen at random for observation. Categories were established using the taxonomic criteria proposed by Anguera (2001), with the behaviors making up each category typed according to levels of response. To identify a stable behavioral pattern, event sampling was used, taking into account one or several behaviors registered in the observed sessions. The episode was analyzed by two observers who viewed the material simultaneously, making two observations, registering the relevant data and contrasting opinions. The researchers determined a set of categories expressing violent behavior: nonverbal behavior, special behavior, and vocal/verbal behavior. It was concluded that there was a predominant and stable pattern of violent behavior in the cartoon observed.

  19. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers...... often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general...

  20. El archivo de RTVV: Patrimonio Audiovisual de la Humanidad

    Directory of Open Access Journals (Sweden)

    Hidalgo Goyanes, Paloma

    2014-07-01

    Full Text Available Audiovisual documents are important for the study of the 20th and 21st centuries. Television archives contribute to the formation of the collective imaginary and form part of the Audiovisual Heritage of Humanity. Under current legislation, preserving the RTVV audiovisual archive is the responsibility of the public authorities, and it is a right of citizens and taxpayers as heirs to a heritage that reflects their history, culture and language.

  2. Evolution of audiovisual production in five Spanish Cybermedia

    Directory of Open Access Journals (Sweden)

    Javier Mayoral Sánchez

    2014-12-01

    Full Text Available This paper quantifies and analyzes the evolution of the audiovisual production of five Spanish digital newspapers: abc.es, elconfidencial.com, elmundo.es, elpais.com and lavanguardia.com. The videos published on the five home pages were studied over four weeks (fourteen days in November 2011 and another fourteen in March 2014). This diachronic perspective has revealed a remarkable contradiction in how online media treat audiovisual products. Although with very considerable differences between them, the five media analyzed publish more and more videos, and they do so in the most valued areas of their home pages. However, they do not yet show a willingness to engage firmly

  3. Audiovisual Quality Fusion based on Relative Multimodal Complexity

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Reiter, Ulrich

    2011-01-01

    In multimodal presentations the perceived audiovisual quality assessment is significantly influenced by the content of both the audio and visual tracks. Based on our earlier subjective quality test for finding the optimal trade-off between audio and video quality, this paper proposes a novel method...... designed auditory and visual features, the relative complexity analysis model across sensory modalities is proposed for deriving the fusion parameter. Experimental results have demonstrated that the content adaptive fusion parameter can improve the prediction accuracy of objective audiovisual quality...
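
The record above describes deriving a content-adaptive fusion parameter from the relative complexity of the audio and visual tracks. The paper's actual feature set and fusion model are not given in this snippet, so the following is only a minimal sketch under assumed definitions: a toy per-track "activity" signal stands in for the designed auditory/visual complexity features, and the fused quality score is a simple convex combination.

```python
import numpy as np

def relative_complexity_weight(audio_activity: np.ndarray,
                               video_activity: np.ndarray) -> float:
    """Toy 'relative complexity' measure: the share of total per-frame
    activity (e.g. spectral flux for audio, motion energy for video)
    attributed to the audio track. Both inputs are assumed feature
    sequences, not anything defined in the paper."""
    a = float(np.sum(np.abs(np.diff(audio_activity))))
    v = float(np.sum(np.abs(np.diff(video_activity))))
    return a / (a + v) if (a + v) > 0 else 0.5

def fuse_quality(q_audio: float, q_video: float, w: float) -> float:
    """Content-adaptive linear fusion: the modality carrying more
    complexity receives more weight in the overall quality score."""
    return w * q_audio + (1.0 - w) * q_video
```

For example, a clip whose audio track is much busier than its video would yield a weight near 1, so the fused score would track the audio quality.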

  4. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages.......Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion in which visual speech alters auditory speech perception and by the advantage observed in auditory speech detection when a visual signal is present. Here we...

  5. Intermodal timing relations and audio-visual speech recognition by normal-hearing adults.

    Science.gov (United States)

    McGrath, M; Summerfield, Q

    1985-02-01

    Audio-visual identification of sentences was measured as a function of audio delay in untrained observers with normal hearing; the soundtrack was replaced by rectangular pulses originally synchronized to the closing of the talker's vocal folds and then subjected to delay. When the soundtrack was delayed by 160 ms, identification scores were no better than when no acoustical information at all was provided. Delays of up to 80 ms had little effect on group-mean performance, but a separate analysis of a subgroup of better lipreaders showed a significant trend of reduced scores with increased delay in the range from 0 to 80 ms. A second experiment tested the interpretation that, although the main disruptive effect of the delay occurred on a syllabic time scale, better lipreaders might be attempting to use intermodal timing cues at a phonemic level. Normal-hearing observers determined whether a 120-Hz complex tone started before or after the opening of a pair of lip-like Lissajous figures. Group-mean difference limens (70.7% correct DLs) were -79 ms (sound leading) and +138 ms (sound lagging), with no significant correlation between DLs and sentence lipreading scores. It was concluded that most observers, whether good lipreaders or not, possess insufficient sensitivity to intermodal timing cues in audio-visual speech for them to be used analogously to voice onset time in auditory speech perception. The results of both experiments imply that delays of up to about 40 ms introduced by signal-processing algorithms in aids to lipreading should not materially affect audio-visual speech understanding.

  6. El audiovisual como medio sociocomunicativo: hacia una antropología audiovisual performativa

    Directory of Open Access Journals (Sweden)

    José Manuel Vidal-Gálvez

    2016-01-01

    Full Text Available Audiovisual resources, as a vehicle for communication and the representation of art applied to social research, make it possible to foster a kind of science that looks beyond mere scientific diagnosis. They allow the final product to be returned packaged in simple, accessible language, and their main objective is the return of the conclusions to the social sphere in which they were generated, as a path toward the dialectic and performative catalysis of the social and communicative fact. In this text, drawing on empirical work carried out in Spain and Ecuador, we present the viability of audiovisual anthropology as a means of conducting a science engaged with the collective it represents and conducive to social change.

  7. El Archivo de la Palabra : contexto y proyecto del repositorio audiovisual del Ateneu Barcelonès

    Directory of Open Access Journals (Sweden)

    Alcaraz Martínez, Rubén

    2014-12-01

    Full Text Available This paper reports on the project to digitize the audiovisual archives of the Ateneu Barcelonès, which was launched by that institution’s Library and Archive department in 2011. The paper explains the methodology used to create the repository, focusing on the management of analogue files and born-digital materials and the question of author’s rights. Finally, it presents the new repository L’Arxiu de la Paraula (the Word Archive) and the new website, @teneu hub, which are designed to disseminate the Ateneu’s audiovisual heritage and provide centralized access to its different contents.

  9. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  10. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    Science.gov (United States)

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  11. Developing a typology of humor in audiovisual media

    NARCIS (Netherlands)

    Buijzen, M.A.; Valkenburg, P.M.

    2004-01-01

    The main aim of this study was to develop and investigate a typology of humor in audiovisual media. We identified 41 humor techniques, drawing on Berger's (1976, 1993) typology of humor in narratives, audience research on humor preferences, and an inductive analysis of humorous commercials. We analy

  12. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  13. Modelling and Retrieving Audiovisual Information - A Soccer Video Retrieval System

    NARCIS (Netherlands)

    Woudstra, A.; Velthausz, D.D.; Poot, de H.J.G.; Moelaart El-Hadidy, F.; Jonker, W.; Houtsma, M.A.W.; Heller, R.G.; Heemskerk, J.N.H.

    1998-01-01

    This paper describes the results of an ongoing collaborative project between KPN Research and the Telematics Institute on multimedia information handling. The focus of the paper is the modelling and retrieval of audiovisual information. The paper presents a general framework for modeling multimedia

  14. Producing Slide and Tape Presentations: Readings from "Audiovisual Instruction"--4.

    Science.gov (United States)

    Hitchens, Howard, Ed.

    Designed to serve as a reference and source of ideas on the use of slides in combination with audiocassettes for presentation design, this book of readings from Audiovisual Instruction magazine includes three papers providing basic tips on putting together a presentation, five articles describing techniques for improving the visual images, five…

  15. Kijkwijzer: The Dutch rating system for audiovisual productions

    NARCIS (Netherlands)

    Valkenburg, P.M.; Beentjes, J.W.J.; Nikken, P.; Tan, E.S.H.

    2002-01-01

    Kijkwijzer is the name of the new Dutch rating system in use since early 2001 to provide information about the possible harmful effects of movies, home videos and television programs on young people. The rating system is meant to provide audiovisual productions with both age-based and content-based

  16. Today's and tomorrow's retrieval practice in the audiovisual archive

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2010-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video retriev

  17. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  18. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to look at the neural responses of human participants during the McGurk illusion, in which auditory /aba/ and visual /aga/ inputs are fused into a perceived /ada/, in a large homogeneous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with incongruent audiovisual stimulation leading to the McGurk illusion, while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that the superior temporal cortex prefers auditory and visual input that support the same representation.

  19. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
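
As a rough illustration of the correlation measure named above, the sketch below estimates Euclidean quadratic mutual information, I_ED = ∫∫ (p(x,y) − p(x)p(y))² dx dy, between two 1-D feature sequences. It uses `scipy.stats.gaussian_kde` with its default fixed (Scott's-rule) bandwidth and a regular integration grid; the paper's adaptive-bandwidth KDE and graph-cut integration are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def quadratic_mutual_information(x, y, grid_size=64):
    """Euclidean quadratic mutual information between two 1-D
    feature sequences, I_ED = integral of (p(x,y) - p(x)p(y))^2,
    estimated with Gaussian KDE on a regular grid."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    joint = gaussian_kde(np.vstack([x, y]))   # joint density p(x, y)
    px, py = gaussian_kde(x), gaussian_kde(y) # marginals p(x), p(y)
    gx = np.linspace(x.min() - x.std(), x.max() + x.std(), grid_size)
    gy = np.linspace(y.min() - y.std(), y.max() + y.std(), grid_size)
    X, Y = np.meshgrid(gx, gy, indexing="ij")
    p_joint = joint(np.vstack([X.ravel(), Y.ravel()])).reshape(grid_size,
                                                               grid_size)
    p_prod = np.outer(px(gx), py(gy))         # product of marginals
    dx, dy = gx[1] - gx[0], gy[1] - gy[0]
    # Squared-difference integrand makes the measure non-negative and
    # zero only when the joint factorizes (independence).
    return float(np.sum((p_joint - p_prod) ** 2) * dx * dy)
```

Strongly correlated audio/visual features should yield a larger value than independent ones, which is what the graph-cut stage would exploit to separate the speaker from the background.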

  20. An Audio-Visual Lecture Course in Russian Culture

    Science.gov (United States)

    Leighton, Lauren G.

    1977-01-01

    An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4-5,000 color slides is the basis for the course, with lectures focussed on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)

  1. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post-deci...
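
The record above describes unimodal HMM-based detectors whose outcomes are fused after the fact. Neither the HMMs nor the fusion rule is specified in the snippet, so the sketch below shows only a generic post-decision fusion step under assumed inputs: per-frame log-likelihood ratios from hypothetical audio and video detectors, combined by a weighted sum and thresholded.

```python
import numpy as np

def fuse_decisions(audio_llr, video_llr, w_audio=0.7, threshold=0.0):
    """Post-decision fusion of unimodal voice-activity scores.

    audio_llr / video_llr: per-frame log-likelihood ratios
    (speech vs. non-speech) assumed to come from the unimodal
    detectors. Returns a boolean speech decision per frame."""
    audio_llr = np.asarray(audio_llr, float)
    video_llr = np.asarray(video_llr, float)
    # Weighted sum of the modality scores; the weight and threshold
    # are illustrative values, not taken from the paper.
    fused = w_audio * audio_llr + (1.0 - w_audio) * video_llr
    return fused > threshold
```

A weighted score-level rule like this lets a confident audio detector override an ambiguous visual cue (and vice versa), which is the usual motivation for fusing distant-sensor modalities.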

  2. Audiovisual Ethnography of Philippine Music: A Process-oriented Approach

    Directory of Open Access Journals (Sweden)

    Terada Yoshitaka

    2013-06-01

    Full Text Available Audiovisual documentation has been an important part of ethnomusicological endeavors, but until recently it was treated primarily as a tool of preservation and/or documentation that supplements written ethnography, albeit with a few notable exceptions. The proliferation of inexpensive video equipment has encouraged an unprecedented number of scholars and students in ethnomusicology to become involved in filmmaking, but its potential as a methodology has not been fully explored. As a small step toward redefining the application of audiovisual media, Dr. Usopay Cadar, my teacher in Philippine music, and I produced two films: one on Maranao kolintang music and the other on Maranao culture in general, based on the audiovisual footage we collected in 2008. This short essay describes how the screenings of these films were organized in March 2013 for diverse audiences in the Philippines, and what types of reactions and interactions transpired during the screenings. These screenings were organized both to obtain feedback about the content of the films from the caretakers and stakeholders of the documented tradition and to create a venue for interactions and collaborations to discuss the potential of audiovisual ethnography. Drawing from the analysis of the current project, I propose to regard film not as a fixed product but as a living and organic site that is open to commentaries and critiques, where changes can be made throughout the process. In this perspective, ‘filmmaking’ refers to the entire process of research, filming, editing and post-production activities.

  3. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows tha

  4. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    Science.gov (United States)

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  5. Neural Development of Networks for Audiovisual Speech Comprehension

    Science.gov (United States)

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  6. Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2013-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their typically developing peers. To shed light on possible differences in the maturation of audiovisual speech integration, we tested younger (ages 6-12) and older (ages 13-18) children with and without ASD on a task indexing such multisensory integration. To do this, we used the McGurk effect, in which the pairing of incongruent auditory and visual speech tokens typically results in the perception of a fused percept distinct from the auditory and visual signals, indicative of active integration of the two channels conveying speech information. Whereas little difference was seen in audiovisual speech processing (i.e., reports of McGurk fusion) between the younger ASD and TD groups, there was a significant difference at the older ages. While TD controls exhibited an increased rate of fusion (i.e., integration) with age, children with ASD failed to show this increase. These data suggest arrested development of audiovisual speech integration in ASD. The results are discussed in light of the extant literature and necessary next steps in research. PMID:24218241

  7. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, Boris; Poel, Mannes; Truong, Khiet; Poppe, Ronald; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  8. Sur Quatre Methodes Audio-Visuelles (On Four Audiovisual Methods)

    Science.gov (United States)

    Porquier, Remy; Vives, Robert

    1974-01-01

    This is a critical examination of four audiovisual methods for the teaching of French as a Foreign Language. The methods have as a common basis the interrelationship of image, dialogue, situation, and give grammar priority over vocabulary. (Text is in French.) (AM)

  9. Neural correlates of audiovisual integration in music reading.

    Science.gov (United States)

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

    Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as at later stages (the P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration in music appears similar to that in reading, and how musical experience altered integration. We compared brain responses of musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 responses to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that, early in the processing stream, visual information may guide the interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration.
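
For readers unfamiliar with how an MMN in oddball studies like this one is typically quantified, here is a minimal sketch (not the authors' pipeline): average the deviant and standard epochs, subtract to obtain the difference wave, and take the mean amplitude in an assumed latency window.

```python
import numpy as np

def difference_wave(epochs, labels, fs=250.0, window=(0.1, 0.25)):
    """Deviant-minus-standard difference wave, the classic way an
    MMN is quantified.

    epochs: (n_trials, n_samples) array time-locked to stimulus onset.
    labels: 1 for deviant trials, 0 for standards.
    fs, window: assumed sampling rate (Hz) and latency window (s).
    Returns the difference wave and its mean amplitude in the window."""
    epochs = np.asarray(epochs, float)
    labels = np.asarray(labels)
    dev = epochs[labels == 1].mean(axis=0)   # average deviant ERP
    std = epochs[labels == 0].mean(axis=0)   # average standard ERP
    diff = dev - std
    i0, i1 = (int(t * fs) for t in window)
    return diff, float(diff[i0:i1].mean())
```

A negative mean amplitude in the 100-250 ms window would then be read as an MMN-like deflection; comparing this value across congruent and incongruent conditions is the kind of contrast the study relies on.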

  10. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  12. The effect of visual apparent motion on audiovisual simultaneity.

    Science.gov (United States)

    Kwon, Jinhwan; Ogawa, Ken-ichiro; Miyake, Yoshihiro

    2014-01-01

    Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between two flashes, the results may be accounted for by specific prediction. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction. 
Our findings suggest that visual apparent motion changes temporal
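
    The point of subjective simultaneity (PSS) and just noticeable difference (JND) reported in this record are conventionally estimated by fitting a psychometric function to temporal order judgments. A minimal sketch, assuming a cumulative logistic fit; the SOA grid and response proportions below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical TOJ data: sound-lead SOAs are negative, light-lead positive (ms).
soas = np.array([-240, -120, -60, 0, 60, 120, 240], dtype=float)
# Proportion of "visual first" responses at each SOA (illustrative values).
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])

def logistic(soa, pss, slope):
    """Cumulative logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

(pss, slope), _ = curve_fit(logistic, soas, p_visual_first, p0=(0.0, 50.0))

# JND: half the SOA distance between the 25% and 75% points of the fit;
# for a logistic, the p-quantile is pss + slope * ln(p / (1 - p)).
jnd = slope * np.log(3)

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

    A sound-lead shift of the PSS shows up as a negative fitted value; a smaller JND corresponds to a steeper psychometric function.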

  13. Effects of audio-visual presentation of target words in word translation training

    Science.gov (United States)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentation. Identification accuracy for these words produced by two talkers was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  15. Definición del objeto de trabajo y conceptualización de los Sistemas de Información Audiovisual de la Televisión Defining the object of work and conceptualizing TV Audiovisual Information Systems

    Directory of Open Access Journals (Sweden)

    Inés-Carmen Póveda-López

    2010-04-01

    Full Text Available The object of documentary work in television audiovisual information systems is defined on the basis of the various definitions provided by leading authors and institutions for the concepts of audiovisual, moving image, sound, audiovisual documentation, audiovisual information and audiovisual document. Through quantification and analysis of the ideas and concepts most often repeated in the definitions analyzed, we arrive at a definition of a "televised moving-image document".

  16. Sistema audiovisual para reconocimiento de comandos Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos

    2011-08-01

    Full Text Available We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. The audio signal was represented using Mel cepstral coefficients and their first and second order time derivatives. To characterize the video signal, a set of high-level visual features was tracked automatically throughout each sequence. Automatic initialization of the algorithm used color transformations and active contours driven by Gradient Vector Flow ("GVF snakes") on the lip region, while tracking relied on similarity measures between neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of an automatic speech recognition system using audio information only (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then present the design of systems using video features only (VSR) and combined audio and video features (AVSR). Finally, the results of the three systems are compared on an in-house database in Spanish and French, and the influence of acoustic noise is examined, showing that the AVSR system is more robust than ASR and VSR.
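
    The audio representation described in this record, Mel cepstral coefficients plus their first two temporal derivatives, is commonly computed with a regression-based "delta" filter. A minimal sketch of the derivative step only, assuming the MFCCs are already available (random values stand in for real coefficients):

```python
import numpy as np

def delta(features, width=2):
    """Regression-based temporal derivative of a (frames x coeffs) matrix,
    as commonly appended to Mel cepstral coefficients."""
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    taps = np.arange(-width, width + 1)
    denom = np.sum(taps ** 2)
    return sum(t * padded[width + t : width + t + len(features)] for t in taps) / denom

# Illustrative stand-in for real MFCCs: 100 frames x 13 coefficients.
rng = np.random.default_rng(0)
mfcc = rng.standard_normal((100, 13))

d1 = delta(mfcc)   # first temporal derivative ("delta")
d2 = delta(d1)     # second temporal derivative ("delta-delta")
audio_features = np.hstack([mfcc, d1, d2])  # 100 x 39 feature vectors
print(audio_features.shape)
```

    The stacked 39-dimensional vectors are the usual per-frame observations fed to HMM-based recognizers of the kind described above.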

  17. A Model for Producing and Sharing Instructional Materials in Veterinary Medicine. Final Report.

    Science.gov (United States)

    Ward, Billy C.; Niec, Alphonsus P.

    This report describes a study of factors which appear to influence the "shareability" of audiovisual materials in the field of veterinary medicine. Specific factors addressed are content quality, instructional effectiveness, technical quality, institutional support, organization, logistics, and personal attitudes toward audiovisuals. (Author/CO)

  18. Audio-visual interactions in product sound design

    Science.gov (United States)

    Özcan, Elif; van Egmond, René

    2010-02-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, when designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral part of the main product concept. Because visual aspects of a product are considered to dominate the communication of the desired product concept, sound is usually expected to fit the visual character of a product. We argue that this can be accomplished successfully only on the basis of a thorough understanding of the impact of audio-visual interactions on product sounds. Two experimental studies are reviewed to show audio-visual interactions, on both perceptual and cognitive levels, influencing the way people encode, recall, and attribute meaning to product sounds. Implications for sound design are discussed, defying the natural tendency of product designers to analyze the "sound problem" in isolation from the other product properties.

  19. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech…, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers… of the speaker. Observers were required to report this after primary target categorization. We found a significant McGurk effect only in the natural speech and speech mode conditions supporting the finding of Tuomainen et al. Performance in the secondary task was similar in all conditions indicating…

  20. Audiovisual correspondence between musical timbre and visual shapes

    OpenAIRE

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli e.g., simple tones have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against co...

  1. Audiovisual Ethnography of Philippine Music: A Process-oriented Approach

    OpenAIRE

    2013-01-01

    Audiovisual documentation has been an important part of ethnomusicological endeavors, but until recently it was treated primarily as a tool of preservation and/or documentation that supplements written ethnography, albeit with a few notable exceptions. The proliferation of inexpensive video equipment has encouraged an unprecedented number of scholars and students in ethnomusicology to become involved in filmmaking, but its potential as a methodology has not been fully explored. As a small step ...

  2. Neural correlates of quality during perception of audiovisual stimuli

    CERN Document Server

    Arndt, Sebastian

    2016-01-01

    This book presents a new approach to examining perceived quality of audiovisual sequences. It uses electroencephalography to understand how exactly user quality judgments are formed within a test participant, and what might be the physiologically-based implications of being exposed to lower quality media. The book redefines experimental paradigms for using EEG in the area of quality assessment so that they better suit the requirements of standard subjective quality testing. Experimental protocols and stimuli are adjusted accordingly.

  3. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  4. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration.

  5. Visual Target Localization, the Effect of Allocentric Audiovisual Reference Frame

    Directory of Open Access Journals (Sweden)

    David Hartnagel

    2011-10-01

    Full Text Available Visual allocentric reference frames (contextual cues) affect visual space perception (Diedrichsen et al., 2004; Walter et al., 2006). On the other hand, experiments have shown a change in visual perception induced by binaural stimuli (Chandler, 1961; Carlile et al., 2001). In the present study we investigated the effect of visual and audiovisual allocentric reference frames on visual localization and straight-ahead pointing. Participants faced a black part-spherical screen (92 cm radius). The head was kept aligned with the body. Participants wore headphones and a glove with motion-capture markers. A red laser point displayed straight ahead served as the fixation point. The visual target was a 100 ms green laser point. After a short delay, the green laser reappeared and participants had to localize the target with a trackball. Straight-ahead blind pointing was required before and after each series of 48 trials. The visual part of the bimodal allocentric reference frame was provided by a vertical red laser line (15° left or 15° right); the auditory part was provided by 3D sound. Five conditions were tested: no reference, visual reference (left/right), and audiovisual reference (left/right). Results show that the significant effect of the bimodal audiovisual reference does not differ from that of the visual reference alone.

  6. The development of the perception of audiovisual simultaneity.

    Science.gov (United States)

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne

    2016-06-01

    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date.

  7. Audiovisual temporal fusion in 6-month-old infants.

    Science.gov (United States)

    Kopp, Franziska

    2014-07-01

    The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.

  8. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.

  9. Audiovisual integration of speech in a patient with Broca's Aphasia.

    Science.gov (United States)

    Andersen, Tobias S; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia.

  10. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    Science.gov (United States)

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
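
    The statistical learning approach described in this record, fitting Gaussian mixture models to distributional statistics of cues, can be illustrated with a small unsupervised fit to joint audiovisual cues. A minimal sketch, not the authors' simulations; the two-category cue distributions, sample sizes, and use of scikit-learn's GaussianMixture are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical 2D cues for two phonological categories: each sample pairs an
# auditory cue (e.g., voice onset time) with a visual cue (e.g., lip aperture).
cat_a = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(500, 2))
cat_b = rng.normal(loc=[3.0, 2.0], scale=[1.0, 0.5], size=(500, 2))
cues = np.vstack([cat_a, cat_b])

# Unsupervised statistical learning: fit a two-component GMM to the joint
# audiovisual cue distribution, with no category labels provided.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
gmm.fit(cues)

# The recovered component means should approximate the two category centers.
means = gmm.means_[np.argsort(gmm.means_[:, 0])]
print(np.round(means, 1))
```

    Because the model learns from the joint distribution, the recovered components implicitly weight each cue by how well it separates the categories, which is the intuition behind cue weighting in the paper.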

  11. Audiovisual integration of emotional signals from others’ social interactions.

    Directory of Open Access Journals (Sweden)

    Lukasz ePiwek

    2015-05-01

    Full Text Available Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.

  12. Temporal structure and complexity affect audio-visual correspondence detection

    Directory of Open Access Journals (Sweden)

    Rachel N Denison

    2013-01-01

    Full Text Available Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration.
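
    Detecting audio-visual correspondence from shared temporal structure, as in this paradigm, can be illustrated by recovering the lag that best aligns two event streams. A minimal cross-correlation sketch, not the authors' method; the streams, sampling rate, and imposed lag are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stochastic event streams: binary auditory and visual sequences
# sampled at 1 kHz, with the visual stream a lagged copy of the auditory one.
n = 2000
audio = (rng.random(n) < 0.01).astype(float)
lag_ms = 120
visual = np.roll(audio, lag_ms)

def best_lag(a, v, max_lag=300):
    """Return the lag (in samples) maximizing the correlation between streams."""
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.dot(a, np.roll(v, -l)) for l in lags]
    return lags[int(np.argmax(scores))]

print(best_lag(audio, visual))  # recovers the imposed lag
```

    Irregular streams like these produce a sharp correlation peak at the true lag; a perfectly rhythmic stream would instead yield many equally good alignments, consistent with the advantage for stochastic patterns reported above.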

  13. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  14. Montagem e remontagem na produção audiovisual de Guel Arraes

    Directory of Open Access Journals (Sweden)

    Yvana Fechine

    2008-11-01

    Full Text Available The TV miniseries O auto da Compadecida (1999) and A invenção do Brasil (2000), produced by Guel Arraes for television and later re-edited and distributed as films one year after their exhibition on Rede Globo, introduced a new logic of production into the Brazilian audiovisual market. The transformation of these miniseries into films cannot, however, be seen as adaptation, since what we have, starting from the same previously recorded material, is a process of "re-editing". Would there, then, be a type of editing (montage) inherent to such audiovisual products, conceived from the outset for transit between media? The present article discusses this question and proposes that the path Guel Arraes found to obtain these "two-in-one" results, which "work" both as a TV program and as a film, was an appeal to what we describe as "editing in modules". Key words: television, film, editing

  15. A desconstrução audiovisual do trailer

    Directory of Open Access Journals (Sweden)

    Patricia de Oliveira Iuva

    2010-06-01

    Full Text Available Beyond reflections on a given audiovisual production, this article aims to essay possible deconstructions of the hegemonic notion of advertising in the trailer. It is important to consider that the trailer is not restricted solely to the promotion of films, since audiovisual pieces with constructions similar to trailers can be observed on television, in journalism, in music videos, etc. What would we call these audiovisual pieces, given that the term trailer would, in principle, be restricted to pieces related to a film? One could therefore think that there are movements within the trailer that go beyond advertising and cinema. In this sense, it is possible to think that what justifies the occurrence of the trailer is not the existence of a film, but rather the promise of the existence of a film, which may constitute an emergent form of language in audiovisual production. That is, it is possible to glimpse in the trailer an audiovisual composition suited to a given global production standard and, at the same time, to identify fluid elements that escape preconceived models. The articulation of a given audiovisual language with references ranging from music-video production to the influence of analog-digital technologies allows us to glimpse a movement of aesthetic and political-economic autonomy in trailer production. It is in this theoretical-methodological context, between Christian Metz's semiology and Derrida's concept of deconstruction, that the work addresses the discussion of cinema and the audiovisual within the trailer as an object.

  16. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false What are the environmental standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a)...

  17. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short-duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response that also involves inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  18. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  19. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of...

  20. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  1. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces the reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about the processing of multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions.

  2. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  3. Twice upon a time: multiple concurrent temporal recalibrations of audiovisual speech.

    Science.gov (United States)

    Roseboom, Warrick; Arnold, Derek H

    2011-07-01

    Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.

  4. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... facilities comply with 36 CFR part 1234. (b) For the storage of permanent, long-term temporary, or... audiovisual records? 1237.16 Section 1237.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.16 How...

  5. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  6. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  7. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  8. A Management Review and Analysis of Purdue University Libraries and Audio-Visual Center.

    Science.gov (United States)

    Baaske, Jan; And Others

    A management review and analysis was conducted by the staff of the libraries and audio-visual center of Purdue University. Not only were the study team and the eight task forces drawn from all levels of the libraries and audio-visual center staff, but a systematic effort was sustained through inquiries, draft reports and open meetings to involve…

  9. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, Guido; Huizer, E.; Wijngaert, van de Lidwien

    2012-01-01

    Purpose The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide i

  10. La regulación audiovisual: argumentos a favor y en contra The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

The article analyzes the effectiveness of audiovisual regulation and assesses the different arguments for and against the existence of broadcasting authorities at the state level. The debate over the necessity of such a body in Spain is still active. Most European countries have created competent authorities in this field, such as OFCOM in the United Kingdom and the CSA in France. In Spain, audiovisual regulation is limited to regional bodies, such as the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía, and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also studied in this article.

  11. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.

  12. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict which outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcomes, and adapt their temporal perception of outcome events based on such predictions.

  13. THE COVERAGE OF THE TRAGEDIES IN THE AUDIOVISUAL MEDIA

    Directory of Open Access Journals (Sweden)

    Carlos Portas

    2013-11-01

News about tragedies or disasters poses one of the biggest challenges for journalists. These are extreme situations in which they must combine the inalienable right to truthful information with other inalienable rights, including respect for the privacy of people who are suffering. Here the role of the professionals is crucial, but so is the role of the audiovisual media companies. Journalists should understand that people involved in a tragic event react in public, but that does not mean they are making their reaction public. A good reporter knows how to discern what is news, what to ask, how and when to ask it and, if appropriate, how to spread

  14. Proyecto educativo : herramientas de educación audiovisual

    OpenAIRE

    Boza Osuna, Luis

    2005-01-01

The aim of this work is to examine the need to inform and educate families, students, and teachers in audiovisual education. Since 1999, Telespectadores Asociados de Cataluña (TAC) has decided to engage decisively with the educational world, in response to the evident need of educational institutions to confront the negative effects of television on students. School leaders and teaching professionals are perfectly aware of the competition...

  15. Los metadatos asociados a la información audiovisual televisiva por “agentes externos” al servicio de documentación: validez, uso y posibilidades

    Directory of Open Access Journals (Sweden)

    Jorge Caldera-Serrano

    2016-03-01

We identify the metadata associated with the footage that enters television documentation departments. This metadata, external to document management itself, can be used and can carry positive value within the documentary analysis of audiovisual information in television. We also indicate at which moments in the information-generation process descriptive metadata can be attached to audiovisual information, both for information coming from agents external to the network itself and for material produced within the television company.

  16. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

Second language acquisition cannot take place without exposure to language input. With regard to this, the present research aimed at providing empirical evidence about low and upper-intermediate language learners' preferred types of audiovisual programs and language proficiency development outside the classroom. To this end, 60 language learners (30 low level and 30 upper-intermediate level) were asked to have exposure to their preferred types of audiovisual program(s) outside the classroom and to keep a diary of the amount and type of exposure. The obtained data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news more. To find out which language proficiency level could improve its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  17. Valores occidentales en el discurso publicitario audiovisual argentino

    Directory of Open Access Journals (Sweden)

    Isidoro Arroyo Almaraz

    2012-04-01

This article develops an analysis of Argentine audiovisual advertising discourse. It aims to identify the social values that this discourse communicates most prominently and their possible connection with the characteristic values of postmodern Western society. To this end, the frequency of appearance of social values was analyzed in 28 advertisements from different advertisers. The "Seven/Seven" model (seven deadly sins and seven cardinal virtues) was used for the analysis, since traditional values are considered heirs of the virtues and sins that advertising uses to address consumption-related needs. Argentine audiovisual advertising promotes and encourages ideas related to the virtues and sins through the behavior of the characters in audiovisual narratives. The results show a higher frequency of social values characterized as sins than of social values characterized as virtues, since advertising transforms sins into virtues that stimulate desire and favor consumption, thereby strengthening brand learning. Finally, on the basis of the results obtained, the social uses and reach of advertising discourse are discussed.

  18. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

Television can be a meeting place for sound and image in a device that makes the rhythm-image possible, extending Gilles Deleuze's theory of the image, proposed for cinema. The rhythm-image would simultaneously combine characteristics of the movement-image and the time-image, embodied in the construction of postmodern images, in audiovisual products that are not necessarily narrative, yet popular. Films, video games, music videos, and vignettes in which the music drives the images allow a more sensory reading. The audiovisual as music-image thus opens onto a new form of perception beyond the traditional textual one, the result of the interaction between rhythm, text, and device. The time of moving images in the audiovisual is inevitably and primarily tied to sound. They add non-narrative possibilities that are realized, most of the time, through the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Easy Rider (1969), Natural Born Killers (1994), and Run Lola Run (1998).

  19. Information-Driven Active Audio-Visual Source Localization.

    Directory of Open Access Journals (Sweden)

    Niclas Schult

We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information-gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action-selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
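The core idea of this record, estimating a source position from bearing-only measurements by moving to new poses, can be sketched with a minimal particle filter. This is an illustrative sketch, not the authors' implementation: the arena size, noise width, robot poses, and source position below are invented assumptions, and the paper's information-gain action selection is replaced by a fixed tour of poses.

```python
import math
import random

random.seed(0)

def bearing(robot, point):
    """Direction (rad) from a robot position to a candidate source position."""
    return math.atan2(point[1] - robot[1], point[0] - robot[0])

def ang_diff(a, b):
    """Signed angular difference wrapped to [-pi, pi]."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def localize(measurements, n=2000, sigma=0.1):
    """Particle filter over source positions fed bearing-only measurements
    taken from several robot poses; each new pose cuts down the uncertainty."""
    # Particles start uniform over a hypothetical 10 x 10 arena.
    particles = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n)]
    for robot_pose, observed in measurements:
        # Weight each particle by a Gaussian likelihood on the angular error.
        weights = [
            math.exp(-0.5 * (ang_diff(observed, bearing(robot_pose, p)) / sigma) ** 2)
            + 1e-12
            for p in particles
        ]
        particles = random.choices(particles, weights=weights, k=n)  # resample
        # Small jitter so resampled duplicates do not collapse the cloud.
        particles = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
                     for x, y in particles]
    return (sum(x for x, _ in particles) / n,
            sum(y for _, y in particles) / n)

# Hypothetical scenario: a source at (2, 1) observed from four robot poses.
source = (2.0, 1.0)
poses = [(0.0, 0.0), (3.0, -2.0), (-2.0, 2.0), (0.0, 3.0)]
estimate = localize([(p, bearing(p, source)) for p in poses])
```

A single bearing only constrains the source to a ray, so the estimate tightens as poses accumulate; the paper's information-gain mechanism would pick the next pose to shrink this posterior fastest, instead of following a fixed tour.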

  20. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head-tracking behavior toward a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  1. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies, etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study focuses on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach evidences the polysemiotic nature of the translation process. In audiovisual translation both text and image are at play; therefore, the translation of the speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles which did not successfully convey the compliment expressed in the source text; we also analyze the reasons for this, namely differences in register, Culture Specific Items, and repetitions. These differences lead to a different portrayal/identity/perception of the main character between the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  2. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn eLeung

    2016-01-01

The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head-tracking behavior toward a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  3. Video genre categorization and representation using audio-visual information

    Science.gov (United States)

    Ionescu, Bogdan; Seyerlehner, Klaus; Rasche, Christoph; Vertan, Constantin; Lambert, Patrick

    2012-04-01

    We propose an audio-visual approach to video genre classification using content descriptors that exploit audio, color, temporal, and contour information. Audio information is extracted at block-level, which has the advantage of capturing local temporal information. At the temporal structure level, we consider action content in relation to human perception. Color perception is quantified using statistics of color distribution, elementary hues, color properties, and relationships between colors. Further, we compute statistics of contour geometry and relationships. The main contribution of our work lies in harnessing the descriptive power of the combination of these descriptors in genre classification. Validation was carried out on over 91 h of video footage encompassing 7 common video genres, yielding average precision and recall ratios of 87% to 100% and 77% to 100%, respectively, and an overall average correct classification of up to 97%. Also, experimental comparison as part of the MediaEval 2011 benchmarking campaign demonstrated the efficiency of the proposed audio-visual descriptors over other existing approaches. Finally, we discuss a 3-D video browsing platform that displays movies using feature-based coordinates and thus regroups them according to genre.
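The fusion idea in this record, combining descriptors of several modalities into one feature vector before classifying by genre, can be illustrated with a toy nearest-centroid sketch. The four-dimensional descriptors, genre labels, and feature values below are invented for illustration; the record's actual block-level audio, color, temporal, and contour descriptors are far richer, and its classifier is not specified here.

```python
import math

# Toy fused descriptor: [audio energy, rhythmic activity, color warmth, motion].
# Genres and values are hypothetical training examples, two clips per genre.
TRAIN = {
    "news":    [[0.20, 0.10, 0.30, 0.10], [0.25, 0.15, 0.35, 0.12]],
    "music":   [[0.90, 0.80, 0.60, 0.50], [0.85, 0.90, 0.55, 0.60]],
    "cartoon": [[0.50, 0.40, 0.90, 0.70], [0.55, 0.45, 0.95, 0.65]],
}

def centroid(vectors):
    """Component-wise mean of a genre's training descriptors."""
    return [sum(v[i] for v in vectors) / len(vectors)
            for i in range(len(vectors[0]))]

CENTROIDS = {genre: centroid(vs) for genre, vs in TRAIN.items()}

def classify(descriptor):
    """Assign the genre whose centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda g: math.dist(descriptor, CENTROIDS[g]))
```

For example, `classify([0.22, 0.12, 0.32, 0.11])` lands nearest the "news" centroid. The point of the sketch is only the fusion step: because audio and visual statistics live in one vector, a single distance measure sees both modalities at once, which is what lets the combined descriptors outperform either modality alone.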

  4. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alteration pattern was similar to that of younger adults as SOA expanded; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  5. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical, synchronous, or variously asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis of the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula, and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results are a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  6. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
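The contrast between gradual (sinusoidal) and transient (square-wave) modulation that drives the effect in this record can be made concrete numerically: a square wave concentrates all of its change into abrupt steps, while a sine wave of the same frequency changes only slightly per sample. A small sketch (sampling rate and modulation frequency are illustrative assumptions, not the study's values):

```python
import numpy as np

fs = 1000                       # samples per second
t = np.arange(0, 1.0, 1 / fs)  # one second of signal
f_mod = 2.0                     # 2 Hz luminance/amplitude modulation

# Gradual modulation: a raised sine in [0, 1].
sine = 0.5 + 0.5 * np.sin(2 * np.pi * f_mod * t)
# Transient modulation: a square wave with abrupt on/off steps.
square = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)

# The square wave's largest frame-to-frame change spans the full signal
# range, whereas the sine wave's is tiny at this sampling rate.
max_step_sine = np.abs(np.diff(sine)).max()
max_step_square = np.abs(np.diff(square)).max()
```

The "transient" quality the authors identify corresponds to these large instantaneous steps, which a sinusoid lacks.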

  7. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  8. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
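The capacity measure cited in this record (Townsend and Nozawa, 1995) compares the integrated hazard of the audiovisual RT distribution with the sum of the unimodal integrated hazards, C(t) = H_AV(t) / [H_A(t) + H_V(t)], where H(t) = -log S(t) and S(t) is the survivor function; C(t) > 1 indicates efficient (super-capacity) integration. A rough numpy sketch on simulated data (the RT distributions are invented to mimic a noisy listening condition, not taken from the paper):

```python
import numpy as np

def capacity_or(rt_av, rt_a, rt_v, t):
    """OR-task capacity coefficient C(t) = H_AV / (H_A + H_V),
    with H(t) = -log S(t) estimated from empirical survivor functions.
    C(t) > 1 means faster-than-race (efficient) integration."""
    def cum_hazard(rts, ts):
        surv = np.mean(rts[:, None] > ts[None, :], axis=0)
        return -np.log(np.clip(surv, 1e-9, 1.0))  # clip to avoid log(0)
    denom = cum_hazard(rt_a, t) + cum_hazard(rt_v, t)
    return cum_hazard(rt_av, t) / np.clip(denom, 1e-9, None)

rng = np.random.default_rng(1)
t = np.arange(300, 600, 10.0)
# Simulated RTs: bimodal trials disproportionately fast, as in the
# low-S/N conditions where integration was found to be efficient.
rt_a = rng.normal(520, 50, 2000)
rt_v = rng.normal(540, 50, 2000)
rt_av = rng.normal(400, 40, 2000)
c = capacity_or(rt_av, rt_a, rt_v, t)
```

In the paper's terms, a clear auditory signal would instead yield C(t) < 1 over much of the RT range (limited capacity).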

  9. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.

  10. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (direct after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  11. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between the unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder.

  12. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  13. Neural Dynamics of Audiovisual Speech Integration under Variable Listening Conditions: An Individual Participant Analysis

    Directory of Open Access Journals (Sweden)

    Nicholas eAltieri

    2013-09-01

    Full Text Available Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend & Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude in lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.

  14. The audiovisual montage narrative as a basis for the interactive documentary film: new studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual montage narrative language for the interactive documentary film", which defends the thesis that interactive features exist in the audio and video editing of film, even acting as an agent of interactivity. The search for interactive audiovisual formats is present in international investigations, but under a technological gaze. This paper proposes possible formats for interactive audiovisual production in film, video, television, computers and mobile phones in postmodern society. Key words: Audiovisual, language, interactivity, interactive cinema, documentary, communication.

  15. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical......, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing...

  16. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    Full Text Available The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  17. CURRICULUM MATERIALS, DESCRIPTION AND PRICE LIST OF MATERIALS DEVELOPED BY THE OHIO VOCATIONAL AGRICULTURE CURRICULUM MATERIALS SERVICE.

    Science.gov (United States)

    Ohio Vocational Agriculture Instructional Materials Service, Columbus.

    PAMPHLETS, SLIDES, TAPES, MANUALS, AND AN EXAMINATION ARE INCLUDED IN THIS CATALOG OF INSTRUCTIONAL MATERIALS FOR USE BY VOCATIONAL AGRICULTURE TEACHERS IN HIGH SCHOOL AND ADULT FARMER PROGRAMS. THE MATERIALS WERE DEVELOPED BY VOCATIONAL AGRICULTURE TEACHERS, CURRICULUM SPECIALISTS, TECHNICAL SPECIALISTS, AND AUDIOVISUAL PERSONNEL AND ARE…

  18. Handicrafts production: documentation and audiovisual dissemination as sociocultural appreciation technology

    Directory of Open Access Journals (Sweden)

    Luciana Alvarenga

    2016-01-01

    Full Text Available The paper presents the results of a scientific research, technology and innovation project in the creative economy sector, conducted from January 2014 to January 2015, that aimed to document and publicize the artisans and handicraft production of Vila de Itaúnas, ES, Brazil. The process began with initial conversations, followed by the planning and running of participatory workshops for documentation and audiovisual dissemination around the production of handicrafts and its relation to biodiversity and local culture. The initial objective was to create spaces of expression and diffusion of knowledge among and for the local population, also reaching a regional, state and national public. Throughout the process, it was found that the participatory workshops and the collective production of a website for publicizing practices and products contributed to the development and socio-cultural recognition of artisans and crafts in the region.

  19. The Digital Turn in the French Audiovisual Model

    Directory of Open Access Journals (Sweden)

    Olivier Alexandre

    2016-07-01

    Full Text Available This article deals with the digital turn in the French audiovisual model. An organizational and legal system has evolved with changing technology and economic forces over the past thirty years. The high-income television industry served as the key element during the 1980s to compensate for a value economy shifting from movie theaters to domestic screens and personal devices. However, the growing competition in the TV sector and the rise of tech companies have initiated a disruption process. A challenged French conception of copyright, the weakened position of TV channels and the scaling of the content market all now call into question the sustainability of the French model in the digital era.

  20. Innovación y competencia en la industria audiovisual

    OpenAIRE

    Motta, Jorge José

    2015-01-01

    This article analyses the relationship between innovation and the forms and intensity of business competition in the audiovisual market, with special reference to the film industry. To this end, it examines the economic characteristics of the sector's main technologies and its typical forms of production organization, and analyses how they affect the innovation-competition relationship. It also examines the importance of culture and of the...

  1. Artimate: an articulatory animation framework for audiovisual speech synthesis

    CERN Document Server

    Steiner, Ingmar

    2012-01-01

    We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data is applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated in an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application to illustrate its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.

  2. 78 FR 63492 - Certain Audiovisual Components and Products Containing the Same; Notice of Commission...

    Science.gov (United States)

    2013-10-24

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Commission Determination To Review a Final Initial Determination Finding a Violation of Section 337 in Its...

  3. The development of sensorimotor influences in the audiovisual speech domain: Some critical questions

    Directory of Open Access Journals (Sweden)

    Bahia eGuellaï

    2014-08-01

    Full Text Available Speech researchers have long been interested in how auditory and visual speech signals are integrated, and recent work has revived interest in the role of speech production with respect to this process. Here we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.

  4. The development of sensorimotor influences in the audiovisual speech domain: some critical questions.

    Science.gov (United States)

    Guellaï, Bahia; Streri, Arlette; Yeung, H Henny

    2014-01-01

    Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.

  5. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah eJessen

    2013-07-01

    Full Text Available Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect in multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. This lead of visual information can thereby facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional but not non-emotional information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.

  6. A LINGUAGEM AUDIOVISUAL COMO PRÁTICA ESCOLAR

    Directory of Open Access Journals (Sweden)

    Simone Berle

    2012-01-01

    Full Text Available The essay discusses the relationship between cinema and school in order to address audiovisual language and its implications for school practices. Even with access to audiovisual materials and resources, cinema appears in everyday school life as mere pedagogical support, given the hierarchization of languages and their reduction to reading and writing in children's education. To discuss the necessary pluralization of experiences with languages as a school practice, the essay dialogues with Jorge Larrosa's proposal to replace the theory/practice pair with the experience/meaning pair in thinking about education, and with Paul Ricoeur's conception of the human as a historical being and a producer of history. Our perspective as educators and childhood researchers questions the naturalized presence of audiovisual language in children's education, highlighting the disregard for the plurality of media with which children can currently interact. It does not call for the inclusion of cinema in curricula as an area of knowledge to be treated as "content", but points to the importance of broadening learning in everyday school life by calling for the pluralization of learning processes and the enrichment of linguistic repertoires.

  7. The audio-visual revolution: do we really need it?

    Science.gov (United States)

    Townsend, I

    1979-03-01

    In the United Kingdom, the audio-visual revolution has steadily gained converts in the nursing profession. Nurse tutor courses now contain information on the techniques of educational technology, and schools of nursing increasingly own (or wish to own) many of the sophisticated electronic aids to teaching that abound. This is taking place at a time of unprecedented crisis and change. Funds have been or are being made available to buy audio-visual equipment. But its purchase and use rely on satisfying personal whim, prejudice or educational fashion, not on considerations of educational efficiency. In the rush of enthusiasm, the overwhelmed teacher (everywhere; the phenomenon is not confined to nursing) forgets to ask the searching, critical questions: 'Why should we use this aid?', 'How effective is it?', 'And at what?'. Influential writers in this profession have repeatedly called for a more responsible attitude towards the published research work of other fields. In an attempt to discover what is known about the answers to this group of questions, an eclectic look at media research is taken and the widespread dissatisfaction existing amongst international educational technologists is noted. The paper isolates from the literature several causative factors responsible for the present state of affairs. Findings from the field of educational television are cited as representative of an aid that has had a considerable amount of time and research directed at it. The concluding part of the paper shows that the decisions to be taken in using or not using educational media are more complicated than might at first appear.

  8. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.
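In simultaneity-judgement studies like the one above, the temporal binding window is often summarized by fitting a Gaussian-shaped psychometric curve to the proportion of "simultaneous" responses across SOAs and reading off the curve's width. A rough numpy sketch with made-up response proportions (a brute-force grid fit with fixed amplitude, standing in for a proper maximum-likelihood fit):

```python
import numpy as np

# SOA in ms (negative = auditory leading). Hypothetical proportions of
# "simultaneous" responses, showing the usual asymmetry (wider tolerance
# when vision leads the audio).
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], float)
p_sim = np.array([0.05, 0.15, 0.55, 0.85, 0.95, 0.90, 0.75, 0.40, 0.10])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Brute-force least-squares fit over a (mu, sigma) grid.
best, best_err = (0.0, 100.0), np.inf
for mu_try in np.arange(-50, 51, 5):
    for sigma_try in np.arange(40, 301, 5):
        pred = gauss(soa, 1.0, mu_try, sigma_try)
        err = np.sum((pred - p_sim) ** 2)
        if err < best_err:
            best, best_err = (mu_try, sigma_try), err

mu, sigma = best                 # mu: point of subjective simultaneity shift
window_fwhm = 2.355 * sigma      # binding-window width (FWHM), in ms
```

Fitting auditory-leading and visual-leading SOAs separately (two half-Gaussians) would expose the asymmetry in window size that the study trains and tests.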

  9. Trayectoria, educación universitaria y aprendizaje laboral en la producción audiovisual

    OpenAIRE

    Fernández Berdaguer, María Leticia

    2006-01-01

    This paper analyses the influence of university education on the work of professionals in the audiovisual field. To that end, it describes aspects of the career trajectories of actors in the audiovisual field and their perception of the importance of university education and on-the-job learning for professional performance. Facultad de Bellas Artes

  10. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has recently been demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes multiple temporal recalibrations, by exposing observers to two utterances with opposing temporal relationships spoken by a single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimulus pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structure (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs differ physically, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  11. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
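
    The contrast between amplitude-based and phase-based metrics can be made concrete on toy signals. The following is an illustrative sketch only (not the authors' MEG pipeline), comparing amplitude-envelope correlation with the phase-locking value (PLV), both computed from an FFT-based analytic signal:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: returns the complex analytic signal."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def envelope_correlation(x, y):
    """Amplitude-based metric: Pearson correlation of the two envelopes."""
    return np.corrcoef(np.abs(analytic_signal(x)), np.abs(analytic_signal(y)))[0, 1]

def phase_locking_value(x, y):
    """Phase-based metric: consistency of the instantaneous phase difference."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.arange(2000) / 1000.0                 # 2 s at 1 kHz
carrier = 2 * np.pi * 20 * t                 # 20 Hz "beta-band" carrier
x = (1 + 0.5 * np.sin(2 * np.pi * 1.0 * t)) * np.sin(carrier)
y = (1 + 0.5 * np.sin(2 * np.pi * 1.0 * t)) * np.sin(carrier + 0.7)      # same envelope
z = (1 + 0.5 * np.sin(2 * np.pi * 1.5 * t + 1)) * np.sin(carrier + 0.3)  # different envelope
```

    For `x` vs. `y` both metrics report strong coupling, whereas for `x` vs. `z` the PLV stays near 1 while the envelope correlation collapses toward 0: the two families of metrics can disagree on the same data, which is the practical point of comparing them.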

  12. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in audiovisual integration.

  13. Developmental Trajectory of Audiovisual Speech Integration in Early Infancy. A Review of Studies Using the McGurk Paradigm

    Directory of Open Access Journals (Sweden)

    Tomalski Przemysław

    2015-10-01

    Full Text Available Apart from their remarkable phonological skills, young infants prior to their first birthday show the ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect audiovisual conflict in speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge, but also to language production during subsequent years. This article reviews recent experimental work delineating the complex developmental trajectory of audiovisual mismatch detection. The central issue is the role of age-related changes in visual scanning of audiovisual speech and the corresponding changes in neural signatures of audiovisual speech processing in the second half of the first year of life. This phenomenon is discussed in the context of recent theories of perceptual development and existing data on the neural organisation of the infant ‘social brain’.

  14. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    Science.gov (United States)

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200ms over a wide central area, the second at 280-320ms over the fronto-central area, and a third at 380-440ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.
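
    Component amplitudes in an analysis like this are typically quantified as the mean epoch amplitude within each latency window. A minimal sketch on synthetic epochs (the 140-200 ms window follows the abstract; the waveforms and effect sizes are fabricated for illustration):

```python
import numpy as np

def mean_window_amplitude(epochs, times, t_start, t_end):
    """Mean amplitude per trial within [t_start, t_end) seconds.

    epochs: (n_trials, n_samples) array of baseline-corrected ERP data.
    times:  (n_samples,) array of sample times in seconds.
    """
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, mask].mean(axis=1)

times = np.arange(-0.1, 0.5, 0.001)                            # -100 to 500 ms epoch
component = np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))   # peak near 170 ms

# Fabricated single-trial data: spatially reliable sounds evoke a larger
# early component than spatially inconsistent ones, as reported above.
reliable = 5.0 * np.tile(component, (20, 1))
inconsistent = 3.0 * np.tile(component, (20, 1))

amp_reliable = mean_window_amplitude(reliable, times, 0.140, 0.200).mean()
amp_inconsistent = mean_window_amplitude(inconsistent, times, 0.140, 0.200).mean()
```

    The same function applied with the 280-320 ms and 380-440 ms windows would quantify the two later components.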

  15. Media audio-visual English course design%媒体英语视听说课程设计

    Institute of Scientific and Technical Information of China (English)

    陈赏

    2014-01-01

    The media audio-visual English course is a new course whose teaching content consists of video materials carefully selected from mainstream Western media programmes and film clips, with linguistic contexts close to the students' own. Drawing on the author's own teaching practice, this paper proposes a reasonable curriculum design and suggestions for this emerging course.

  16. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

    Science.gov (United States)

    Lee, HweeLing; Noppeney, Uta

    2011-08-01

    Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.

  17. Desarrollo de una prueba de comprensión audiovisual

    Directory of Open Access Journals (Sweden)

    Casañ Núñez, Juan Carlos

    2016-06-01

    Full Text Available This article is part of doctoral research studying the use of audiovisual comprehension questions embedded in the video image as subtitles and synchronized with the relevant video fragments. A theoretical framework describing this technique (Casañ Núñez, 2015b) and an example within a teaching sequence (Casañ Núñez, 2015a) have been published previously. The present work details the process of planning, designing, and trialling an audiovisual comprehension test with two variants, to be administered together with other instruments in quasi-experimental studies with control and treatment groups. The main aims are to determine whether subtitling the questions facilitates comprehension, whether it increases the time students spend looking toward the screen, and to learn the treatment group's opinion of this technique. Six studies were carried out in the trialling phase. Forty-one students of Spanish as a foreign language (ELE) took part in the final pilot study (twenty-two in the control group and nineteen in the treatment group). Observation of the informants during administration of the test, and its subsequent correction, suggested that the instructions on the structure of the test, the presentations of the input texts, the explanation of how the subtitled questions work for the experimental group, and the wording of the items were all comprehensible. Data from the two variants of the instrument were subjected to facility, discrimination, reliability, and descriptive analyses. Correlations between the tests and two tasks from a listening comprehension exam were also calculated. The results showed that both versions of the test were ready to be administered.

  18. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.

  19. Audio-visual perception system for a humanoid robotic head.

    Science.gov (United States)

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.

  20. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of visual image. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important to perceive the emotional valence in music and that treating musical art as a purely auditory event might lose the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to the CD at home.

  1. Audiovisual education and breastfeeding practices: A preliminary report

    Directory of Open Access Journals (Sweden)

    V. C. Nikodem

    1993-05-01

    Full Text Available A randomized controlled trial was conducted at the Coronation Hospital to evaluate the effect of audiovisual breastfeeding education. Within 72 hours after delivery, 340 women who agreed to participate were allocated randomly to view one of two video programmes, one of which dealt with breastfeeding. To determine the effect of the programme on infant feeding, a structured questionnaire was administered to 108 women who attended the six-week postnatal check-up. Alternative methods, such as telephonic interviews (24) and home visits (30), were used to obtain information from subjects who did not attend the postnatal clinic. Comparisons of mother-infant relationships and postpartum depression showed no significant differences. Similar proportions of each group reported that their baby was easy to manage, and that they felt close to and could communicate well with it. While the overall number of mothers who breast-fed was not significantly different between the two groups, there was a trend towards fewer mothers in the study group supplementing with bottle feeding. It was concluded that the effectiveness of audiovisual education alone is limited, and attention should be directed towards personal follow-up and support for breastfeeding mothers.

  2. Impact of language on functional connectivity for audiovisual speech integration

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  3. Impact of language on functional connectivity for audiovisual speech integration.

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-Aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-08-11

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl's gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration.

  4. Evolving with modern technology: Impact of incorporating audiovisual aids in preanesthetic checkup clinics on patient education and anxiety

    Science.gov (United States)

    Kaur, Haramritpal; Singh, Gurpreet; Singh, Amandeep; Sharda, Gagandeep; Aggarwal, Shobha

    2016-01-01

    Background and Aims: Perioperative stress is an often ignored but commonly occurring phenomenon. Little or no prior knowledge of anesthesia techniques can increase it significantly. Patients awaiting surgery may experience a high level of anxiety. The preoperative visit is an ideal time to educate patients about anesthesia and address these fears. The present study evaluates two different approaches, i.e., a standard interview versus an informative audiovisual presentation combined with the standard interview, on information gain (IG) and its impact on patient anxiety during the preoperative visit. Settings and Design: This prospective, double-blind, randomized study was conducted in a Tertiary Care Teaching Hospital in rural India over 2 months. Materials and Methods: The study was carried out among 200 American Society of Anesthesiologists Grade I and II patients in the age group 18–65 years scheduled to undergo elective surgery under general anesthesia. Patients were allocated to one of two equal-sized groups, Group A and Group B. Baseline anxiety and the information desire component were assessed for both groups using the Amsterdam Preoperative Anxiety and Information Scale. Group A patients received a preanesthetic interview with the anesthesiologist and were reassessed. Group B patients were shown a short audiovisual presentation about the operation theater and anesthesia procedure, followed by the preanesthetic interview, and were also reassessed. In addition, the patient satisfaction score (PSS) and IG were assessed at the end of the preanesthetic visit using a standard questionnaire. Statistical Analysis Used: Data were expressed as mean and standard deviation. Nonparametric tests such as the Kruskal–Wallis, Mann–Whitney, and Wilcoxon signed rank tests, and Student's t-test and Chi-square test were used for statistical analysis. Results: Patients' IG was significantly greater in Group B (5.43 ± 0.55) than in Group A (4.41 ± 0.922) (P < 0.001). There was

  5. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    Science.gov (United States)

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  6. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-06-30

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC.

  7. Neurofunctional underpinnings of audiovisual emotion processing in teens with autism spectrum disorders.

    Science.gov (United States)

    Doyle-Thomas, Krissy A R; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B C

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system.

  8. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of audiovisual FoA detection are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.
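
    The core of a correlation-based audiovisual FoA detector can be sketched as follows: compute a per-frame audio energy envelope, compute per-region visual motion energy across frames, and select the region whose motion dynamics best track the audio. This is an illustrative reconstruction under simplifying assumptions, not the paper's actual algorithm; all names and the toy signals are hypothetical:

```python
import numpy as np

def audiovisual_focus(audio_envelope, region_motion):
    """Pick the spatial region whose motion dynamics best track the audio.

    audio_envelope: (n_frames,) per-frame audio energy.
    region_motion:  (n_regions, n_frames) per-region motion energy.
    Returns the index of the region with maximal Pearson correlation.
    """
    corrs = [np.corrcoef(audio_envelope, motion)[0, 1] for motion in region_motion]
    return int(np.argmax(corrs))

frames = np.arange(100)
audio = np.abs(np.sin(0.3 * frames))          # e.g. syllabic energy bursts
region_motion = np.vstack([
    np.cos(0.3 * frames) + 1.0,               # background motion, weakly related
    np.abs(np.sin(0.3 * frames)),             # a talking face: tracks the audio
    np.abs(np.sin(0.3 * frames + 2.0)),       # moving object, out of sync
])
focus_region = audiovisual_focus(audio, region_motion)  # → 1
```

    In a foveated-coding pipeline, the selected region would then receive a lower quantization parameter than the periphery.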

  9. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    Science.gov (United States)

    Curtis, J A; Davison, F M

    1985-04-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal.

  10. Neurofunctional underpinnings of audiovisual emotion processing in teens with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Krissy A.R. Doyle-Thomas

    2013-05-01

    Full Text Available Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n=18) and typically developing controls (n=16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviours, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that during audiovisual emotion matching individuals with ASD may rely on a parietofrontal network to compensate for atypical brain activity elsewhere.

  11. Speech and non-speech audio-visual illusions: a developmental study.

    Directory of Open Access Journals (Sweden)

    Corinne Tremblay

    Full Text Available It is well known that simultaneous presentation of incongruent audio and visual stimuli can lead to illusory percepts. Recent data suggest that distinct processes underlie non-specific intersensory speech as opposed to non-speech perception. However, the development of both speech and non-speech intersensory perception across childhood and adolescence remains poorly defined. Thirty-eight observers aged 5 to 19 were tested on the McGurk effect (an audio-visual illusion involving speech), the Illusory Flash effect and the Fusion effect (two audio-visual illusions not involving speech) to investigate the development of audio-visual interactions and contrast speech vs. non-speech developmental patterns. Whereas the strength of audio-visual speech illusions varied as a direct function of maturational level, performance on non-speech illusory tasks appeared to be homogeneous across all ages. These data support the existence of independent maturational processes underlying speech and non-speech audio-visual illusory effects.

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
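    The subadditivity criterion used in this kind of ERP study, comparing the audiovisual response minus the visual-only response against the auditory-only response, can be illustrated numerically. The waveforms below are synthetic and purely for illustration; the N1 is taken as the most negative peak, and the amplitudes are arbitrary.

```python
import numpy as np

def n1_suppression(erp_av, erp_a, erp_v):
    """Suppression of the auditory N1 by visual prediction: how much smaller
    (in magnitude) the (AV - V) response is than the A-only response.
    ERPs are arrays of amplitudes over time; N1 is a negative deflection,
    so its peak is the minimum of the waveform."""
    residual_av = erp_av - erp_v          # estimate of the auditory part of AV
    n1_a = erp_a.min()                    # A-only N1 peak
    n1_av = residual_av.min()             # N1 peak within the AV response
    return n1_av - n1_a                   # > 0 means a reduced (suppressed) N1

# hypothetical illustration: a congruent AV condition shows more suppression
t = np.linspace(0.0, 0.3, 301)
n1 = -np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))   # N1-like negative component
erp_a = 5.0 * n1                                     # auditory-only response
erp_v = 0.5 * np.sin(2 * np.pi * 5 * t)              # visual-only response
erp_av_congruent = erp_v + 3.0 * n1                  # strongly suppressed N1
erp_av_incongruent = erp_v + 4.0 * n1                # weakly suppressed N1
```

    With these toy waveforms, the congruent condition yields a larger suppression value, mirroring the pattern reported in the abstract.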

  13. APPLICATION OF PARTIAL LEAST SQUARES REGRESSION FOR AUDIO-VISUAL SPEECH PROCESSING AND MODELING

    Directory of Open Access Journals (Sweden)

    A. L. Oleinik

    2015-09-01

    Full Text Available Subject of Research. The paper deals with the problem of lip region image reconstruction from the speech signal by means of Partial Least Squares regression. Such problems arise in connection with the development of audio-visual speech processing methods. Audio-visual speech consists of acoustic and visual components (called modalities). Applications of audio-visual speech processing methods include joint modeling of voice and lip movement dynamics, synchronization of audio and video streams, emotion recognition, and liveness detection. Method. Partial Least Squares regression was applied to solve the posed problem. This method extracts components of the initial data with high covariance. These components are used to build the regression model. The advantage of this approach lies in the possibility of achieving two goals: identification of latent interrelations between initial data components (e.g. the speech signal and the lip region image) and approximation of one initial data component as a function of the other. Main Results. Experimental research on reconstruction of lip region images from the speech signal was carried out on the VidTIMIT audio-visual speech database. Results of the experiment showed that Partial Least Squares regression is capable of solving the reconstruction problem. Practical Significance. The obtained findings make it possible to assert that Partial Least Squares regression is applicable to a wide variety of audio-visual speech processing problems: from synchronization of audio and video streams to liveness detection.
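    The Partial Least Squares regression described above, mapping one block of observations (e.g. acoustic feature vectors X) to another (e.g. vectorized lip-region images Y), can be sketched in plain numpy. This is a generic NIPALS-style PLS2 sketch on synthetic data, not the authors' implementation; all names and the number of components are illustrative.

```python
import numpy as np

def pls_fit(X, Y, n_components):
    """PLS2 regression via NIPALS-style deflation.
    Returns the coefficient matrix B and the column means, so that
    Y ≈ (X - x_mean) @ B + y_mean."""
    x_mean, y_mean = X.mean(axis=0), Y.mean(axis=0)
    Xk, Yk = X - x_mean, Y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        # weight vector: dominant direction of the X-Y cross-covariance
        u, _, _ = np.linalg.svd(Xk.T @ Yk, full_matrices=False)
        w = u[:, 0]
        t = Xk @ w                     # latent score
        p = Xk.T @ t / (t @ t)         # X loading
        q = Yk.T @ t / (t @ t)         # Y loading
        Xk = Xk - np.outer(t, p)       # deflate both blocks
        Yk = Yk - np.outer(t, q)
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = (np.column_stack(m) for m in (W, P, Q))
    B = W @ np.linalg.pinv(P.T @ W) @ Q.T
    return B, x_mean, y_mean

def pls_predict(X, B, x_mean, y_mean):
    """Reconstruct the Y block (e.g. lip-region images) from new X."""
    return (X - x_mean) @ B + y_mean
```

    With as many components as X has columns, the fit coincides with ordinary least squares; using fewer components regularizes the model through the latent structure, which is the usual motivation for PLS in audio-visual mapping.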

  14. AXES-RESEARCH - A user-oriented tool for enhanced multimodal search and retrieval in audiovisual libraries

    NARCIS (Netherlands)

    P. van der Kreeft (Peggy); K. Macquarrie (Kay); M.J. Kemman (Max); M. Kleppe (Martijn); K. McGuinness (Kevin)

    2014-01-01

    AXES, Access for Audiovisual Archives, is a research project developing tools for new engaging ways to interact with audiovisual libraries, integrating advanced audio and video analysis technologies. The presented prototype is targeted at academic researchers and journalists. The tool al

  15. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    Science.gov (United States)

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  16. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Directory of Open Access Journals (Sweden)

    Shahram Moradi

    2016-06-01

    Full Text Available The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.
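    An isolation point of the kind measured here, the duration of the earliest gate from which identification is correct and stays correct across all later gates, can be computed with a simple rule. The 40 ms gate step and the stability criterion below are assumptions for illustration, not the study's exact procedure.

```python
def isolation_point(responses, target, gate_ms=40):
    """Isolation point in a gating task: duration of the earliest gate from
    which the listener's response matches the target and keeps matching it
    at every later gate. `responses` holds one answer per successive gate;
    gates are assumed to grow in steps of `gate_ms` milliseconds.
    Returns None if the item is never stably identified."""
    for i in range(len(responses)):
        if all(r == target for r in responses[i:]):
            return (i + 1) * gate_ms
    return None
```

    On this definition, a group needing "longer IPs" simply reaches the stable-identification gate later; comparing audiovisual against auditory-only IPs for the same items quantifies the visual benefit.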

  17. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  18. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood for cognitive and sensory decline, which may confound positive effects of age-related AV-experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle-adulthood recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  19. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did...... not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...... was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension...

  20. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available We propose a novel approach for video classification that is based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may well characterize its content and its structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents, and the results show the effectiveness of our proposals.
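    A Temporal Relation Matrix of the kind described, recording the temporal relation between every ordered pair of segmented events, can be sketched as follows. The simplified relation set (a coarse subset of Allen's interval relations) and all names are illustrative assumptions, not the paper's exact encoding.

```python
def relation(a, b):
    """Coarse temporal relation between two event intervals (start, end)."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e2 < s1:
        return "after"
    if (s1, e1) == (s2, e2):
        return "equals"
    if s2 <= s1 and e1 <= e2:
        return "during"
    if s1 <= s2 and e2 <= e1:
        return "contains"
    return "overlaps"

def temporal_relation_matrix(events):
    """TRM: the relation between every ordered pair of event intervals.
    A document is then described by one such matrix per event type, and
    documents can be compared, e.g., via histograms of their relations."""
    n = len(events)
    return [[relation(events[i], events[j]) for j in range(n)] for i in range(n)]
```

    Comparing two documents then reduces to comparing their TRMs, e.g. through the frequencies of each relation, which is one plausible way to derive the similarity measure the abstract mentions.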

  1. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely...... focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual......-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...
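    Models in the maximum likelihood estimation family build on the classic inverse-variance-weighted cue-combination rule, which a short sketch makes concrete. This illustrates the general MLE rule only, under Gaussian assumptions, not the early MLE model's specific formulation in the abstract above.

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Classic maximum likelihood (inverse-variance weighted) cue combination:
    each cue is weighted by its reliability, and the fused estimate is at
    least as reliable as the better single cue:
        s_hat = (s_a/var_a + s_v/var_v) / (1/var_a + 1/var_v)
        var_hat = var_a * var_v / (var_a + var_v)"""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    fused = w_a * est_a + w_v * est_v
    fused_var = (var_a * var_v) / (var_a + var_v)
    return fused, fused_var
```

    The fused variance is always below both unisensory variances, which is the signature prediction that MLE-style integration models are tested against.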

  2. Indexing method of digital audiovisual medical resources with semantic Web integration.

    Science.gov (United States)

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Motion Pictures Expert Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant to MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video-platform which enables encoding and gives access to audiovisual resources in streaming mode.

  3. Sincronía entre formas sonoras y formas visuales en la narrativa audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. José Alfredo Sánchez Ríos

    1999-01-01

    Full Text Available Where must the researcher stand in order to produce work that yields a deeper understanding of a phenomenon as immediate and as complex as audiovisual communication, which uses sound and image at the same time? What is the role of the audiovisual communication researcher in contributing new approaches to this object of study? From this perspective, we believe that the new task of the audiovisual communication researcher will be to build a theory that is less interpretive and subjective, and to direct observations toward segmented knowledge that is demonstrable, repeatable, and open to self-questioning; that is, to study, develop, and construct a theory with a new and greater methodological rigor.

  4. Effect of Anti-Tobacco Audiovisual Messages on Knowledge and Attitude towards Tobacco Use in North India

    Directory of Open Access Journals (Sweden)

    Jagdish Kaur

    2012-01-01

    Full Text Available Context: Tobacco use is one of the leading preventable causes of death globally. Mass media plays a significant role in initiation as well as in control of tobacco use. Aims: To assess the effect of viewing anti-tobacco audiovisual messages on knowledge and attitudinal change towards tobacco use. Settings and Design: Interventional community-based study. Materials and Methods: A total of 1999 cinema attendees (age 10 years and above), irrespective of their smoking or tobacco using status, were selected from four cinema halls (two urban, one semi-urban, and one rural site). In the pre-exposure phase 1000 subjects and in the post-exposure phase 999 subjects were interviewed using a pre-tested questionnaire. After collecting baseline information, the other days were chosen for screening the audiovisual spots, which were shown twice per show. After the show, subjects were interviewed to assess its effect. Statistical Analysis Used: Proportions of the two independent groups were compared, and statistical significance by chi-square test was accepted if the error was less than 0.05%. Results: Overall, 784 (39.2%) subjects were tobacco users, 52.6% were non-tobacco users and 8.2% were former tobacco users. Important factors for initiation of tobacco use were peer pressure (62%), imitating elders (53.4%) and imitating celebrities (63.5%). Tobacco users were significantly less likely than non-tobacco users to recall watching the spots during the movie (72.1% vs. 79.1%). The anti-tobacco advertisement inspired 37% of subjects not to use tobacco. The celebrity in the advertisement influenced people's attention. There was significant improvement in knowledge and attitudes towards anti-tobacco legal and public health measures in the post-exposure group. Conclusions: The anti-tobacco advertisements were found to be effective in enhancing knowledge as well as in shifting people's attitudes about tobacco use in a positive direction.

  5. Effect of Anti-Tobacco Audiovisual Messages on Knowledge and Attitude towards Tobacco Use in North India

    Science.gov (United States)

    Kaur, Jagdish; Kishore, Jugal; Kumar, Monika

    2012-01-01

    Context: Tobacco use is one of the leading preventable causes of death globally. Mass media plays a significant role in initiation as well as in control of tobacco use. Aims: To assess the effect of viewing anti-tobacco audiovisual messages on knowledge and attitudinal change towards tobacco use. Settings and Design: Interventional community-based study. Materials and Methods: A total of 1999 cinema attendees (age 10 years and above), irrespective of their smoking or tobacco using status, were selected from four cinema halls (two urban, one semi-urban, and one rural site). In the pre-exposure phase 1000 subjects and in the post-exposure phase 999 subjects were interviewed using a pre-tested questionnaire. After collecting baseline information, the other days were chosen for screening the audiovisual spots, which were shown twice per show. After the show, subjects were interviewed to assess its effect. Statistical Analysis Used: Proportions of the two independent groups were compared, and statistical significance by chi-square test was accepted if the error was less than 0.05%. Results: Overall, 784 (39.2%) subjects were tobacco users, 52.6% were non-tobacco users and 8.2% were former tobacco users. Important factors for initiation of tobacco use were peer pressure (62%), imitating elders (53.4%) and imitating celebrities (63.5%). Tobacco users were significantly less likely than non-tobacco users to recall watching the spots during the movie (72.1% vs. 79.1%). The anti-tobacco advertisement inspired 37% of subjects not to use tobacco. The celebrity in the advertisement influenced people's attention. There was significant improvement in knowledge and attitudes towards anti-tobacco legal and public health measures in the post-exposure group. Conclusions: The anti-tobacco advertisements were found to be effective in enhancing knowledge as well as in shifting people's attitudes about tobacco use in a positive direction. PMID:23293436

  6. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
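    The two measures used in this study, a within-class reproducibility index and between-class decoding accuracy, can be sketched on toy patterns. The correlation-based definitions below are plausible reconstructions for illustration, not the authors' exact procedure.

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation of the brain patterns (rows) of
    one semantic category: higher means more reproducible patterns."""
    r = np.corrcoef(patterns)
    n = len(patterns)
    return r[np.triu_indices(n, k=1)].mean()

def decode(train_a, train_b, test_patterns):
    """Assign each test pattern to the class whose mean training pattern it
    correlates with best (0 = class A, 1 = class B)."""
    means = np.vstack([train_a.mean(axis=0), train_b.mean(axis=0)])
    labels = []
    for x in test_patterns:
        corr = [np.corrcoef(x, m)[0, 1] for m in means]
        labels.append(int(np.argmax(corr)))
    return labels
```

    On this reading, the paper's finding is that congruent audiovisual stimulation both raises the within-class correlations and makes the nearest-class decoding more accurate.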

  7. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the participant's task was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli are task-irrelevant. Furthermore, audiovisual integration at late latency (300-340 ms), with fronto-central topography in the ERPs, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  8. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept.

  9. Audiovisual correspondence between musical timbre and visual shapes.

    Science.gov (United States)

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli e.g., simple tones have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound i.e., its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 claimed professional musicians, 47 claimed amateur musicians, and 36 claimed non-musicians. Thirty-one subjects have also claimed to have synesthesia-like experiences. A strong association between timbre of envelope normalized sounds and visual shapes was observed. Subjects have strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre.

  10. Audiovisual correspondence between musical timbre and visual shapes.

    Directory of Open Access Journals (Sweden)

    Mohammad eAdeli

    2014-05-01

    Full Text Available This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli, e.g. simple tones, have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e. its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. 119 subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 claimed professional musicians, 47 claimed amateur musicians and 36 claimed non-musicians. 31 subjects have also claimed to have synesthesia-like experiences. A strong association between timbre of envelope normalized sounds and visual shapes was observed. Subjects have strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre.

  11. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    Science.gov (United States)

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a
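
    The race model test mentioned in this abstract is a concrete, computable criterion: under a race (no-integration) account, Miller's inequality requires P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t) at every time t, so multisensory response-time distributions that exceed this bound indicate integration. A minimal sketch of such a test (the function names and example data are illustrative, not taken from the study):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times, evaluated at t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Difference between the audiovisual CDF and Miller's race-model bound.

    Positive values mean P(RT_AV <= t) exceeds P(RT_A <= t) + P(RT_V <= t),
    i.e., responses are faster than any race of independent channels allows.
    """
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound
```

    Summing the positive part of this difference over the time grid gives a geometric violation measure similar in spirit to the one described above.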

  12. Comunicación audiovisual, una experiencia basada en el blended learning en la universidad

    Directory of Open Access Journals (Sweden)

    Mariona Grané Oró

    2004-01-01

    Full Text Available In the Audiovisual Communication degree programme at the Universidad de Barcelona, and from a blended learning perspective, various media and resources are made available for the work of students and teachers. But the mere ability to access different media does not guarantee quality in teaching and learning processes. Knowing the resources available, planning the process, and organizing their use is the key to training Audiovisual Communication students.

  13. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    OpenAIRE

    Sevtap GÜNAY KÖPRÜLÜ

    2016-01-01

    Audiovisual translation dating back to the silent film era is a special translation method which has been developed for the translation of the movies and programs shown on TV and cinema. Therefore, in the beginning, the term “film translation” was used for this type of translation. Due to the growing number of audiovisual texts it has attracted the interest of scientists and has been assessed under the translation studies. Also in our country the concept of film translation was used for this ...

  14. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel;

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients, between 33-70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between the data sets (2) and (3) suggested superior postural stability...

  15. O audiovisual na era Youtube: pro-amadores e o mercado

    Directory of Open Access Journals (Sweden)

    Meili, Angela Maria

    2011-01-01

    Full Text Available This article discusses the emergence of video formats for the internet, together with a new audiovisual economy in which the boundaries between amateurism and professionalism are less clearly defined. It reflects on the YouTube platform and the formation of this audiovisual market, which maintains close relations with traditional media formats and methods while preserving a collaborative structure that encourages new talent and free expression.

  16. A narrativa audiovisual publicitária : a forma comercial e a forma social

    OpenAIRE

    Vieira, Claúdia Virgínia Fernandes

    2009-01-01

    Master's dissertation in Communication Sciences - Specialization in Audiovisual and Multimedia. Audiovisual advertising is generally regarded as a way of selling products or services, and for the most part that is what advertisements set out to do. There is, however, another type of advertisement, the institutional one, which we here call social advertising, made to warn the public about risk situations or to make appeals for the improvement of matters relevant to society ...

  17. Youth Solid Waste Educational Materials List, November 1991.

    Science.gov (United States)

    Cornell Univ., Ithaca, NY. Cooperative Extension Service.

    This guide provides a brief description and ordering information for approximately 300 educational materials for grades K-12 on the subject of solid waste. The materials cover a variety of environmental issues and actions related to solid waste management. Entries are divided into five sections including audiovisual programs, books, magazines,…

  18. Exploration on the Teaching Reform of the Animation Audio-Visual Language Course (关于动画视听语言课程教学改革的探索)

    Institute of Scientific and Technical Information of China (English)

    殷俊; 张慧

    2015-01-01

    Audio-visual language is a basic course of the animation major, but traditional purely theoretical teaching can no longer meet the needs of today's society. This paper discusses teaching practice from several angles: improving the professionalism of audio-visual language course materials, changing students' simplistic understanding of the course, and enriching traditional theory teaching with practical operations, with the aim of improving teaching quality and enabling students to fully master audio-visual language knowledge.

  19. Audiovisual heritage preservation in Earth and Space Science Informatics: Videos from Free and Open Source Software for Geospatial (FOSS4G) conferences in the TIB|AV-Portal.

    Science.gov (United States)

    Löwe, Peter; Marín Arraiza, Paloma; Plank, Margret

    2016-04-01

    continues to grow - and so does the number of topics to be addressed at conferences. Up to now, commercial Web 2.0 platforms like Youtube and Vimeo were used. However, these platforms lack capabilities for long-term archiving and scientific citation, such as persistent identifiers that permit the citation of specific intervals of the overall content. To address these issues, the scientific library community has started to implement improved multimedia archiving and retrieval services for scientific audiovisual content which fulfil these requirements. Using the reference case of the OSGeo conference video recordings, this paper gives an overview of the new and growing collection activities of the German National Library of Science and Technology for audiovisual content in Geoinformatics/ESSI in the TIB|AV-Portal. Following a successful start in 2014 and positive response from the OSGeo community, the TIB acquisition strategy for OSGeo video material was extended to include German, European, North American and global conference content. The collection grows steadily through new conference content and through the harvesting of past conference videos from commercial Web 2.0 platforms like Youtube and Vimeo. This positions the TIB|AV-Portal as a reliable and concise long-term resource for innovation mining, education and scholarly research within the ESSI context, both in academia and industry.

  20. An audio-visual corpus for multimodal speech recognition in Dutch language

    NARCIS (Netherlands)

    Wojdel, J.; Wiggers, P.; Rothkrantz, L.J.M.

    2002-01-01

    This paper describes the gathering and availability of an audio-visual speech corpus for Dutch language. The corpus was prepared with the multi-modal speech recognition in mind and it is currently used in our research on lip-reading and bimodal speech recognition. It contains the prompts used also i

  1. Strategies for Media Literacy: Audiovisual Skills and the Citizenship in Andalusia

    Science.gov (United States)

    Aguaded-Gómez, Ignacio; Pérez-Rodríguez, M. Amor

    2012-01-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…

  2. Comparisons of Audio and Audiovisual Measures of Stuttering Frequency and Severity in Preschool-Age Children

    Science.gov (United States)

    Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark

    2008-01-01

    Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…

  3. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    Science.gov (United States)

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  4. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    NARCIS (Netherlands)

    Talsma, Durk; Senkowski, Daniel; Woldorff, Marty G.

    2009-01-01

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory

  5. Acceptance of online audio-visual cultural heritage archive services: a study of the general public

    NARCIS (Netherlands)

    Ongena, G.; Wijngaert, van de L.A.L.; Huizer, E.

    2013-01-01

    Introduction. This study examines the antecedents of user acceptance of an audio-visual heritage archive for a wider audience (i.e., the general public) by extending the technology acceptance model with the concepts of perceived enjoyment, nostalgia proneness and personal innovativeness. Method. A W

  6. The audiovisual communication policy of the socialist Government (2004-2009: A neoliberal turn

    Directory of Open Access Journals (Sweden)

    Ramón Zallo, Ph. D.

    2010-01-01

    Full Text Available The first legislature of Jose Luis Rodriguez Zapatero's government (2004-08) generated important initiatives for some progressive changes in the public communicative system. However, all of these initiatives were dissolved in the second legislature to give way to a non-regulated and privatizing model that is detrimental to the public service. Three phases can be distinguished, even chronologically: the first one is characterized by interesting reforms; followed by contradictory reforms and, in the second legislature, an accumulation of counter-reforms that led the system towards a communicative model completely different from the one devised in the first legislature. This indicates that there have been not one but two different audiovisual policies, running the cyclical route of audiovisual policy from one end to the other. The emphasis has changed from the public service to private concentration; from decentralization to centralization; from the diffusion of knowledge to the accumulation and appropriation of cognitive capital; from the Keynesian model - combined with the Schumpeterian model and a preference for social access - to a delayed return to the neoliberal model, after having distorted the market through public decisions in the benefit of the most important audiovisual service providers. All this seems to crystallize in the impressive process of concentration occurring between audiovisual service providers in two large groups: one integrated by Mediaset and Sogecable and another - in negotiations - between Antena 3 and Imagina. A combination of neo-statist restructuring of the market and neo-liberalism.

  7. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  8. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... advertising. In the case of advertisements for smokeless tobacco on videotapes, cassettes, or...

  9. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.
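
    A simple way to turn synchrony judgments at a set of SOAs like these into a "window width" is to measure the span of asynchronies over which observers still report synchrony above some criterion. A rough sketch under that assumption (the thresholding rule and the data are illustrative, not the study's psychometric fitting procedure):

```python
import numpy as np

# SOAs (ms) mirroring the design described above: audio leading (negative)
# through audio lagging (positive).
soas = np.array([-360, -300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300, 360])

def integration_window(soas, p_sync, threshold=0.5):
    """Width (ms) of the SOA range where the proportion of 'synchronous'
    judgments stays at or above `threshold` -- a crude window estimate."""
    above = np.asarray(soas)[np.asarray(p_sync) >= threshold]
    if above.size == 0:
        return 0.0
    return float(above.max() - above.min())
```

    A narrower window (e.g., synchrony reported only within ±60 ms rather than ±240 ms) corresponds to the greater sensitivity to temporal misalignment reported for musicians.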

  10. Comparing Infants' Preference for Correlated Audiovisual Speech with Signal-Level Computational Models

    Science.gov (United States)

    Hollich, George; Prince, Christopher G.

    2009-01-01

    How much of infant behaviour can be accounted for by signal-level analyses of stimuli? The current paper directly compares the moment-by-moment behaviour of 8-month-old infants in an audiovisual preferential looking task with that of several computational models that use the same video stimuli as presented to the infants. One type of model…

  11. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    Science.gov (United States)

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  12. Hearing impairment and audiovisual speech integration ability: a case study report.

    Science.gov (United States)

    Altieri, Nicholas; Hudock, Daniel

    2014-01-01

    Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener's ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction-time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition and a closed set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing time benefit.

  13. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  14. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    Science.gov (United States)

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., the range of asynchronies over which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows were narrower, consistent with tighter audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.

  15. Convergent Cultures: the Disappearance of Commissioned Audiovisual Productions in the Netherlands

    NARCIS (Netherlands)

    B. Agterberg (Bas)

    2014-01-01

    textabstractThe article analyses the changes in production and consumption in the audiovisual industry and the way the so-called ‘ephemeral’ commissioned productions are scarcely preserved. New technologies and the liberal economic policies and internationalisation changed the media landscape in the

  16. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    Science.gov (United States)

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  17. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ...''). 77 FR 22803 (Apr. 11, 2012). The complaint alleged violations of section 337 of the Tariff Act of... disapprove the Commission's action. See Presidential Memorandum of July 21, 2005, 70 FR 43251 (July 26, 2005... COMMISSION Certain Audiovisual Components and Products Containing the Same; Commission Determination...

  18. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
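
    Quantifying such frequency-tagged steady-state responses "in the spectral domain" usually amounts to reading the amplitude spectrum at each stimulation rate (here 3.14 and 3.63 Hz). A minimal sketch under that assumption (function name and parameters are illustrative, not the study's analysis pipeline):

```python
import numpy as np

def ssr_amplitude(signal, fs, freq):
    """Amplitude (same units as the signal) at stimulation frequency `freq`,
    read from the FFT amplitude spectrum of a signal sampled at `fs` Hz."""
    n = len(signal)
    spectrum = 2.0 * np.abs(np.fft.rfft(signal)) / n   # one-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]   # nearest frequency bin
```

    Comparing this amplitude across attended/unattended and in-sync/out-of-sync conditions yields gain effects of the kind described above.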

  19. Multimodal indexing of digital audio-visual documents: A case study for cultural heritage data

    NARCIS (Netherlands)

    Carmichael, J.; Larson, M.; Marlow, J.; Newman, E.; Clough, P.; Oomen, J.; Sav, S.

    2008-01-01

    This paper describes a multimedia multimodal information access sub-system (MIAS) for digital audio-visual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their

  20. Effects of audio-visual information and mode of speech on listener perceptions of alaryngeal speakers.

    Science.gov (United States)

    Evitts, Paul M; Van Dine, Ami; Holler, Aline

    2009-01-01

    There is minimal research on listener perceptions of an individual with a laryngectomy (IWL) based on audio-visual information. The aim of this research was to provide preliminary insight into whether listeners have different perceptions of an individual with a laryngectomy based on mode of presentation (audio-only vs. audio-visual) and mode of speech (tracheoesophageal, oesophageal, electrolaryngeal, normal). Thirty-four naïve listeners were randomly presented with a standard reading passage produced by one typical speaker from each mode of speech in both audio-only and audio-visual presentation mode. Listeners used a visual analogue scale (10 cm line) to indicate their perceptions of each speaker's personality. A significant effect for mode of speech was present. There was no significant difference in listener perceptions between mode of presentation using individual ratings. However, principal component analysis showed ratings were more favourable in the audio-visual mode. Results of this study suggest that visual information may only have a minor impact on listener perceptions of a speakers' personality and that mode of speech and degree of speech proficiency may only play a small role in listener perceptions. However, results should be interpreted with caution as results are based on only one speaker per mode of speech.

  1. Media literacy: no longer the shrinking violet of European audiovisual media regulation?

    NARCIS (Netherlands)

    McGonagle, T.; Nikoltchev, S.

    2011-01-01

    The lead article in this IRIS plus provides a critical analysis of how the European audiovisual regulatory and policy framework seeks to promote media literacy. It examines pertinent definitional issues and explores the main rationales for the promotion of media literacy as a regulatory and policy g

  2. Code CoAN 2010: The first Code of Audiovisual Media Co-regulation in Spain

    Directory of Open Access Journals (Sweden)

    Mercedes Muñoz-Saldaña, Ph.D.

    2011-01-01

    Full Text Available On 17 November 2009 the first co-regulation code for the audiovisual media sector was established in Spain: “2010 Co-regulation Code for the Quality of Audiovisual Contents in Navarra”. This Code is pioneering in the field and, taking into account the content of the recently approved General Law on Audiovisual Communication, is an example of the kind of work that shall be carried out in the future by Spain’s National Media Council (Consejo Estatal de Medios Audiovisuales, aka, CEMA or the corresponding regulatory body. This initiative shows the need to apply co-regulatory codes to the national systems of regulation in the audiovisual sector, as the European institutions urged in their latest Directive in 2010. This article addresses three issues that demonstrate the need for and advantages of applying co-regulation practices to guarantee the protection of minors, pluralism, and the promotion of media literacy: the failure of traditional regulatory instruments and the inefficiency of self-regulation; the conceptual definition of co-regulation as an instrument separated from self-regulation and regulation; and the added value of co-regulation in its application to concrete areas.

  3. The effect of spatial-temporal audiovisual disparities on saccades in a complex scene

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Bell, A.H.; Munoz, D.P.; Opstal, A.J. van

    2009-01-01

    In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV-background (Corneil et al. in J Neurophysiol 88:438-454, 2002). In those experiments both stimulus modalit

  4. Audiovisual Translation and Assistive Technology: Towards a Universal Design Approach for Online Education

    Science.gov (United States)

    Patiniotaki, Emmanouela

    2016-01-01

    Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…

  5. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    Science.gov (United States)

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  6. Psychophysics of the McGurk and Other Audiovisual Speech Integration Effects

    Science.gov (United States)

    Jiang, Jintao; Bernstein, Lynne E.

    2011-01-01

    When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…

  7. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    Science.gov (United States)

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  8. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  9. Materials on Creative Arts (Arts, Crafts, Dance, Drama, Music, Bibliotherapy) for Persons with Handicapping Conditions. Revised.

    Science.gov (United States)

    American Alliance for Health, Physical Education, and Recreation, Washington, DC. Information and Research Utilization Center.

    Intended as a resource guide for persons who include subjects such as arts, crafts, dance, and music in programs for the handicapped, this document lists resources for printed materials, audiovisual materials, resource persons and organizations, and material and equipment suppliers. Brief literature reviews sum up the state of the art in the specific art…

  10. More Materiales Tocante Los Latinos. A Bibliography of Materials on the Spanish-American.

    Science.gov (United States)

    Harrigan, Joan, Comp.

    A bibliography of materials published between 1964 and 1969 on the Spanish American is presented to assist librarians and educators in locating Hispano instructional aids. Over 120 annotated entries list audio-visual aids and reading materials for students of all ages, professional materials for educators including librarians, ERIC materials…

  11. "I am, I am nothing, I am a story ever told": performing personas - erotic expression in the audiovisual performances of Ney Matogrosso within the authoritarian context of a dictatorship

    Directory of Open Access Journals (Sweden)

    Robson Pereira da Silva

    2016-01-01

    Full Text Available This article investigates the performative transgressions of Ney Matogrosso in the context of the Brazilian civil-military dictatorship. We examine the marginal personas (types/archetypes) displayed procedurally in the performative procedures of the artist Ney Matogrosso (phonograms, covers, brochures, stage shows), giving meaning to his work over the 1970s and 1980s. In his works, the artist inverts the concepts governed by mainstream culture, a practice consisting of audiovisual materials widespread in the cultural industry, which historically ties the performer to the production of the materiality of Brazilian Popular Music (MPB) - the recorded performance. This study highlights the historicity of the aesthetic subversion in Ney Matogrosso, whose erotic potential constituted a political stance against the prohibitions produced by the authoritarian regime.

  12. On Copyright of Audiovisual Works

    Institute of Scientific and Technical Information of China (English)

    董思远

    2013-01-01

    This paper introduces the connotation and denotation of audiovisual works, analyzes the relationship between audiovisual works and videos, and then, drawing on audiovisual works copyright legislation in other countries and balancing the interests of producers and authors of audiovisual works, puts forward opinions and suggestions for the amendment of the Copyright Law.

  13. Expressing the Needs of Digital Audio-Visual Applications in Different Communities of Practice for Long Term Preservation

    OpenAIRE

    Kumar, Naresh

    2014-01-01

    Digital audio-visual preservation is a central research concern in today's digital world, where the use of audiovisuals in the creation and storage of research data has increased rapidly. This has created many new problems regarding their maintenance, preservation and future accessibility. Lack of awareness about preservation tools and applications is a big issue today. To address such issues, a European Commission research project, Presto4U, aimed to enable semi-automa...

  14. THE IMPROVEMENT OF AUDIO-VISUAL BASED DANCE APPRECIATION LEARNING AMONG PRIMARY TEACHER EDUCATION STUDENTS OF MAKASSAR STATE UNIVERSITY

    OpenAIRE

    Wahira

    2014-01-01

    This research aimed to improve the skill of the students of Primary Teacher Education at Makassar State University in appreciating dances, to improve their perception of audio-visual based art appreciation, to increase the students' interest in the audio-visual based art education subject, and to increase the students' responses to the subject. This was classroom action research using the research design created by Kemmis & McTaggart, conducted with 42 students of Prim...

  15. Materials

    Science.gov (United States)

    Glaessgen, Edward H.; Schoeppner, Gregory A.

    2006-01-01

    NASA Langley Research Center has successfully developed an electron beam freeform fabrication (EBF3) process, a rapid metal deposition process that works efficiently with a variety of weldable alloys. The EBF3 process can be used to build a complex, unitized part in a layer-additive fashion, although the more immediate payoff is for use as a manufacturing process for adding details to components fabricated from simplified castings and forgings or plate products. The EBF3 process produces structural metallic parts with strengths comparable to those of wrought product forms and has been demonstrated on aluminum, titanium, and nickel-based alloys to date. The EBF3 process introduces metal wire feedstock into a molten pool that is created and sustained using a focused electron beam in a vacuum environment. Operation in a vacuum ensures a clean process environment and eliminates the need for a consumable shield gas. Advanced metal manufacturing methods such as EBF3 are being explored for fabrication and repair of aerospace structures, offering potential for improvements in cost, weight, and performance to enhance mission success for aircraft, launch vehicles, and spacecraft. Near-term applications of the EBF3 process are most likely to be implemented for cost reduction and lead time reduction through addition of details onto simplified preforms (casting or forging). This is particularly attractive for components with protruding details that would require a significantly large volume of material to be machined away from an oversized forging, offering significant reductions to the buy-to-fly ratio. Future far-term applications promise improved structural efficiency through reduced weight and improved performance by exploiting the layer-additive nature of the EBF3 process to fabricate tailored unitized structures with functionally graded microstructures and compositions.

  16. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  17. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    Full Text Available The use of the audiovisual, graphic and digital resources currently being introduced into the education system is spreading across several countries of the region, such as Chile, Colombia, Mexico, Cuba, El Salvador, Uruguay and Venezuela. Subtopics related to media teaching are analyzed and justified, starting from the initiatives of Spain and Portugal, countries that became international protagonists of some educational models in the university context. Owing to the expansion of and focus on computing and on information and communication networks on the Internet, the audiovisual as a technological instrument is gaining ground as a dynamic and integrating resource, with special characteristics that distinguish it from the rest of the media that make up the audiovisual ecosystem. As a result of this research, two lines of application are proposed: A. the iconic and audiovisual language as a learning objective and/or curricular subject in university study plans, with workshops on the audiovisual document, digital photography and audiovisual production; and B. the use of audiovisual resources as an educational medium, which would require prior training of the teaching community through activities recommended for teachers and students respectively. Accordingly, suggestions are presented for implementing both lines of academic action. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  18. Key elements of the audiovisual policy of the International Organization of la Francophonie / Líneas generales de la política audiovisual de la Organización Internacional de la Francofonía

    Directory of Open Access Journals (Sweden)

    Lic. Félix Redondo Casado; fredondo@inst.uc3m.es

    2009-01-01

    Full Text Available This paper investigates the key elements of the audiovisual policy of the International Organization of la Francophonie (OIF). The hypothesis to be tested is that the audiovisual policy of la Francophonie reflects a fundamental conception of the audiovisual. This study is exploratory in nature and considers only the last ten years of la Francophonie. The research presents a mixed methodological approach that combines quantitative and qualitative data collection and analysis. Several elements have been analyzed: frameworks for action and declarations, the structure of the organization in the audiovisual area, and its programs and major projects. One of the most important conclusions of this study is that the audiovisual policy of the OIF is characterized by diversity, as well as by its link with culture. However, the OIF tries to ensure the presence of the French universe, ignoring the voices of the rest of the organization.

  19. Prioritized MPEG-4 Audio-Visual Objects Streaming over the DiffServ

    Institute of Scientific and Technical Information of China (English)

    HUANG Tian-yun; ZHENG Chan

    2005-01-01

    The object-based scalable coding in MPEG-4 is investigated, and a prioritized transmission scheme for MPEG-4 audio-visual objects (AVOs) over a DiffServ network with QoS guarantees is proposed. MPEG-4 AVOs are extracted and classified into different groups according to their priority values and scalable layers (visual importance). These priority values are mapped to the IP DiffServ per-hop behaviors (PHBs). The scheme can selectively discard packets of low importance in order to avoid network congestion. Simulation results show that the quality of the received video gracefully adapts to the network state, compared with the 'best-effort' manner. Also, by allowing the content provider to define the prioritization of each audio-visual object, the adaptive transmission of object-based scalable video can be customized based on the content.
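
    The priority mapping the abstract describes can be sketched as follows. The layer-to-class policy, packet structure, and drop rule below are illustrative assumptions; only the DSCP codepoints (EF = 46, AF41/AF42/AF43 = 34/36/38, best effort = 0) are the standard DiffServ values.

```python
# Sketch of mapping MPEG-4 audio-visual object (AVO) layers to DiffServ
# PHBs and selectively dropping low-importance packets under congestion.
# The layer names and drop policy are illustrative assumptions; the DSCP
# values are the standard EF / AF4x / best-effort codepoints.

# DSCP codepoints per PHB (RFC 2474 / RFC 2597)
PHB_DSCP = {
    "EF": 46,      # expedited forwarding: audio base layer
    "AF41": 34,    # low drop precedence: video base layer
    "AF42": 36,    # medium drop precedence: first enhancement layer
    "AF43": 38,    # high drop precedence: further enhancement layers
    "BE": 0,       # best effort: least important data
}

def phb_for_layer(object_type, layer):
    """Map an AVO's type and scalable layer to a PHB name (assumed policy)."""
    if object_type == "audio":
        return "EF"
    if layer == 0:
        return "AF41"
    if layer == 1:
        return "AF42"
    if layer == 2:
        return "AF43"
    return "BE"

def mark_packet(object_type, layer):
    """Return the DSCP value to write into the IP header's DS field."""
    return PHB_DSCP[phb_for_layer(object_type, layer)]

def drop_under_congestion(packets, congestion_level):
    """Keep only packets whose importance meets the congestion level
    (0 = no congestion, 3 = severe); an illustrative policy."""
    importance = {"EF": 4, "AF41": 3, "AF42": 2, "AF43": 1, "BE": 0}
    return [p for p in packets
            if importance[phb_for_layer(p["type"], p["layer"])] >= congestion_level]

stream = [
    {"type": "audio", "layer": 0},
    {"type": "video", "layer": 0},
    {"type": "video", "layer": 1},
    {"type": "video", "layer": 2},
]
print([mark_packet(p["type"], p["layer"]) for p in stream])  # [46, 34, 36, 38]
print(len(drop_under_congestion(stream, 2)))                 # 3 (top enhancement layer dropped)
```

    Under increasing congestion the scheme degrades video quality layer by layer while the audio base layer survives, which is the "graceful adaptation" behavior the abstract reports.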

  20. Joint evaluation of communication quality and user experience in an audio-visual virtual reality meeting

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Carrozzino, Marcello

    2013-01-01

    The state-of-the-art speech intelligibility tests are created with the purpose of evaluating acoustic communication devices, not audio-visual virtual reality systems. This paper presents a novel method to evaluate a communication situation based on both the speech intelligibility and the indexical characteristics of the speaker. The results will be available in the final paper. Index Terms: speech intelligibility, virtual reality, body language, telecommunication.

  1. Development of an audiovisual speech perception app for children with autism spectrum disorders.

    Science.gov (United States)

    Irwin, Julia; Preston, Jonathan; Brancazio, Lawrence; D'angelo, Michael; Turcios, Jacqueline

    2015-01-01

    Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program delivered via an iPad app presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD ages 8-10 are presented showing that the children improved their performance on an untrained auditory speech-in-noise task.

  2. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies, and lipreading can then give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic and bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated the amplitude of an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  3. Stream Weight Training Based on MCE for Audio-Visual LVCSR

    Institute of Scientific and Technical Information of China (English)

    LIU Peng; WANG Zuoying

    2005-01-01

    In this paper we address the problem of audio-visual speech recognition in the framework of the multi-stream hidden Markov model. Stream weight training based on minimum classification error criterion is discussed for use in large vocabulary continuous speech recognition (LVCSR). We present the lattice re-scoring and Viterbi approaches for calculating the loss function of continuous speech. The experimental results show that in the case of clean audio, the system performance can be improved by 36.1% in relative word error rate reduction when using state-based stream weights trained by a Viterbi approach, compared to an audio only speech recognition system. Further experimental results demonstrate that our audio-visual LVCSR system provides significant enhancement of robustness in noisy environments.
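
    The stream-weighted fusion underlying such audio-visual recognizers can be sketched as a weighted sum of per-stream log-likelihoods, with the weight tuned to minimize classification error on held-out tokens. The toy scores and the grid search below are illustrative assumptions standing in for real HMM state likelihoods and gradient-based MCE training.

```python
# Sketch of stream-weighted audio-visual fusion: the combined score of a
# candidate word w is  lam * logP_audio(w) + (1 - lam) * logP_visual(w).
# The per-word log-likelihoods below are invented for illustration; real
# systems obtain them from HMM emissions and train lam discriminatively
# (minimum classification error) rather than by this crude grid search.

def fused_score(audio_ll, visual_ll, lam):
    return lam * audio_ll + (1.0 - lam) * visual_ll

def classify(token, lam):
    """Pick the candidate word with the highest fused log-likelihood."""
    return max(token["scores"], key=lambda w: fused_score(*token["scores"][w], lam))

# Each token: true word plus (audio_ll, visual_ll) per candidate word.
tokens = [
    {"truth": "ba", "scores": {"ba": (-1.0, -3.0), "ga": (-2.0, -1.5)}},
    {"truth": "ga", "scores": {"ba": (-2.5, -2.0), "ga": (-1.2, -2.2)}},
    {"truth": "ba", "scores": {"ba": (-1.8, -1.0), "ga": (-1.5, -2.5)}},
]

def error_rate(lam):
    wrong = sum(classify(t, lam) != t["truth"] for t in tokens)
    return wrong / len(tokens)

# Grid search for the stream weight minimizing classification error.
best_lam = min((i / 20 for i in range(21)), key=error_rate)
print(best_lam, error_rate(best_lam))
```

    In this toy set, neither the audio-only weight (lam = 1) nor the visual-only weight (lam = 0) classifies every token correctly, but an intermediate weight does, which mirrors the robustness gain the abstract reports for trained stream weights.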

  4. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitantly increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.

  5. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    Full Text Available In this article we address the relationship between audiovisual translation and new technologies and describe the characteristics of the audiovisual translator's workstation, especially in the case of dubbing and voice-over. After presenting the tools the translator needs to carry out the task satisfactorily and pointing out future directions, we present a list of the resources commonly consulted to solve translation problems, with emphasis on those available on the Internet.

  6. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical...

  7. Centralized Library Services for Audiovisual Media. AV in Action 4.

    Science.gov (United States)

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide assistance to countries in developing centralized services to their libraries for nonbook materials, this pamphlet contains examples from five countries that have succeeded in establishing such services. Those examples include: (1) "The Central Library Service for AV-Materials in Denmark" (Suzanne Hemmeth…

  8. Globalization and pluralism: the function of public TV in the European audiovisual market

    OpenAIRE

    2007-01-01

    European audiovisual legislation focuses exclusively on a concept of external pluralism. It therefore seems necessary to adopt other policies and develop new measures to guarantee diversity. In order to implement this reform, a new, richer concept of pluralism must be sought that reflects the reality of the market. This would enable us to devise instruments to measure the real presence of pluralism in the media, and perform effective regulation to defend this right at every level. The ai...

  9. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Science.gov (United States)

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both

  10. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    OpenAIRE

    Mgs. Denis Porto Renó

    2008-01-01

    This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual narrative assembly language for the interactive documentary film," which defends the thesis that there are interactive features in the audio and video editing of film, even as an agent causing interactivity. The search for interactive audiovisual formats is present in international investigations, but under a technological gaze. This paper proposes possible formats for interact...

  11. Sight and sound out of synch: fragmentation and renormalisation of audiovisual integration and subjective timing.

    Science.gov (United States)

    Freeman, Elliot D; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon

    2013-01-01

    The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream-Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing

  12. Towards a Future-Proof Framework for the Protection of Minors in European Audiovisual Media

    Directory of Open Access Journals (Sweden)

    Madeleine de Cock Buning

    2014-12-01

    Full Text Available Legal domains characterized by a high rate of change, driven by societal needs or by economic and technological innovations, pose a constant challenge to their regulatory and supervisory authorities. This contribution aims to turn that challenge into an opportunity by identifying regulatory approaches that adapt flexibly to changing realities, examining a model for a private-public regulatory and enforcement regime for the protection of minors in audiovisual media and defining its conditions.

  13. The New Audiovisual Media Services Directive : Television without Frontiers, Television without Cultural Diversity

    OpenAIRE

    Burri, Mira

    2007-01-01

    After long deliberations, the European Community (EC) has completed the reform of its audiovisual media regulation. The paper examines the main tenets of this reform with particular focus on its implications for the diversity of cultural expressions in the European media landscape. It also takes into account the changed patterns of consumer and business behaviour due to the advances in digital media and their wider spread in society. The paper criticises the somewhat unimaginative approach of...

  14. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing perception. In communications applications, transmission errors, including packet losses and bit errors, can be a significant source of quality degradation. Environmental factors, such as background noise, ambient light and display characteristics, also have an impact on perception. A third aspect that has not been widely...

  15. Students' preference for various audiovisual aids used in teaching pre- and para-clinical areas of medicine

    Directory of Open Access Journals (Sweden)

    Navatha Vangala

    2015-01-01

    Full Text Available Introduction: The formal lecture is among the oldest teaching methods widely used in medical education. Delivering a lecture is made easier and better by the use of audiovisual aids (AV aids) such as a blackboard or whiteboard, an overhead projector, and PowerPoint presentations (PPT). Objective: To determine students' preferences among various AV aids and their use in medical education, with an aim to improve their use in didactic lectures. Materials and Methods: The study was carried out among 230 undergraduate medical students of first and second M.B.B.S. studying at Malla Reddy Medical College for Women, Hyderabad, Telangana, India during the month of November 2014. Students were asked to answer a questionnaire on the use of AV aids for various aspects of learning. Results: This study indicates that students preferred PPT the most for a didactic lecture, for better perception of diagrams and flowcharts. Ninety-five percent of the students (first and second M.B.B.S.) were stimulated to further reading if they attended a lecture augmented by the use of visual aids. A teacher with good teaching skills and AV aids (58%) was preferred over a teacher with only good teaching skills (42%). Conclusion: Our study demonstrates that lectures delivered using PPT were more appreciated and preferred by the students. Furthermore, teachers with a proper lesson plan and good interactive and communication skills are needed for an effective presentation of a lecture.

  16. Ciudadanía y competencia audiovisual en La Rioja: Panorama actual en la tercera edad

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2012-09-01

    Full Text Available Current media consumption by society is generating new ways of interpreting and analyzing the information transmitted through different audiovisual formats. In this study we present, first, the theoretical justification of the current situation of media education and, second, the analysis and results of the degree of audiovisual competence in the sample of people over 65 in the Autonomous Community of La Rioja (Spain) selected for the study. The main objectives are to assess the degree of audiovisual competence of this group, to identify differences between the regional and national samples, and to describe the dimensions of audiovisual literacy. To this end, the analysis of the evaluation criteria for this competence has considered the dimensions of ideology and values, production and programming, reception and audience, and technology. Finally, conclusions are presented that open the door to new approaches to media education practices and future lines of work.

  17. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi eKafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected by both the immediate pattern of sensory inputs and prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations, and their subsequent influences on visual motion perception, depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role.

  18. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli.

    Science.gov (United States)

    Kim, Jongwan; Wang, Jing; Wedell, Douglas H; Shinkareva, Svetlana V

    2016-01-01

    Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli.

  19. On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

    Directory of Open Access Journals (Sweden)

    Wesley Mattheyses

    2009-01-01

    Full Text Available Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, the perception of mismatches between these two information streams requires experimental exploration since it could degrade the quality of the output. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived visual signal's quality.

  20. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    Directory of Open Access Journals (Sweden)

    Tobias Søren Andersen

    2015-04-01

    Full Text Available Lesions to Broca’s area cause aphasia characterised by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca’s area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca’s area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca’s aphasia did not experience the McGurk illusion suggesting that an intact Broca’s area is necessary for audiovisual integration of speech. Here we describe a patient with Broca’s aphasia who experienced the McGurk illusion. This indicates that an intact Broca’s area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca’s area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke’s aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca’s aphasia.

  1. Brain mechanisms that underlie the effects of motivational audiovisual stimuli on psychophysiological responses during exercise.

    Science.gov (United States)

    Bigliassi, Marcelo; Silva, Vinícius B; Karageorghis, Costas I; Bird, Jonathan M; Santos, Priscila C; Altimari, Leandro R

    2016-05-01

    Motivational audiovisual stimuli such as music and video have been widely used in the realm of exercise and sport as a means by which to increase situational motivation and enhance performance. The present study addressed the mechanisms that underlie the effects of motivational stimuli on psychophysiological responses and exercise performance. Twenty-two participants completed fatiguing isometric handgrip-squeezing tasks under two experimental conditions (motivational audiovisual condition and neutral audiovisual condition) and a control condition. Electrical activity in the brain and working muscles was analyzed by use of electroencephalography and electromyography, respectively. Participants were asked to squeeze the dynamometer maximally for 30 s. A single-item motivation scale was administered after each squeeze. Results indicated that task performance and situational motivation were superior under the influence of motivational stimuli when compared to the other two conditions (~20% and ~25%, respectively). The motivational stimulus downregulated the predominance of low-frequency (theta) waves in the right frontal regions of the cortex (F8) and upregulated high-frequency (beta) waves in the central areas (C3 and C4). It is suggested that motivational sensory cues serve to readjust electrical activity in the brain, a mechanism by which the detrimental effects of fatigue on the efferent control of working muscles are ameliorated.

  2. Audiovisual associations alter the perception of low-level visual motion.

    Science.gov (United States)

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected by both the immediate pattern of sensory inputs and prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations, and their subsequent influences on visual motion perception, depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays some role.

  3. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli

    Science.gov (United States)

    Kim, Jongwan; Wang, Jing; Wedell, Douglas H.

    2016-01-01

    Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli. PMID:27598534

  4. Audiovisual integration in near and far space: effects of changes in distance and stimulus effectiveness.

    Science.gov (United States)

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W; Van der Smagt, M J

    2016-05-01

    A factor that is often not considered in multisensory research is the distance from which information is presented. Interestingly, various studies have shown that the distance at which information is presented can modulate the strength of multisensory interactions. In addition, our everyday multisensory experience in near and far space is rather asymmetrical in terms of retinal image size and stimulus intensity. This asymmetry is the result of the relation between the stimulus-observer distance and its retinal image size and intensity: an object that is further away is generally smaller on the retina as compared to the same object when it is presented nearer. Similarly, auditory intensity decreases as the distance from the observer increases. We investigated how each of these factors alone, and their combination, affected audiovisual integration. Unimodal and bimodal stimuli were presented in near and far space, with and without controlling for distance-dependent changes in retinal image size and intensity. Audiovisual integration was enhanced for stimuli that were presented in far space as compared to near space, but only when the stimuli were not corrected for visual angle and intensity. The same decrease in intensity and retinal size in near space did not enhance audiovisual integration, indicating that these results cannot be explained by changes in stimulus efficacy or by an increase in distance alone, but rather by an interaction between these factors. The results are discussed in the context of multisensory experience and spatial uncertainty, and underline the importance of studying multisensory integration in depth space.
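    The distance-dependent changes the abstract controls for follow from simple viewing geometry and the inverse-square law. A minimal sketch, using hypothetical target sizes and viewing distances (none taken from the study):

```python
import math

def visual_angle_deg(object_size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object at a given viewing distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

def relative_intensity(distance_m: float, reference_distance_m: float) -> float:
    """Inverse-square falloff of intensity relative to a reference distance."""
    return (reference_distance_m / distance_m) ** 2

# A hypothetical 10 cm target moved from 0.5 m (near space) to 2.0 m (far space):
near_angle = visual_angle_deg(0.10, 0.5)
far_angle = visual_angle_deg(0.10, 2.0)
far_intensity = relative_intensity(2.0, 0.5)  # fraction of the near-space intensity

print(f"near: {near_angle:.2f} deg, far: {far_angle:.2f} deg")
print(f"far-space intensity relative to near: {far_intensity:.4f}")
```

    Correcting for these factors, as in the study, means scaling the far stimulus up in size and intensity until both quantities match their near-space values.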

  5. Electrophysiological correlates of individual differences in perception of audiovisual temporal asynchrony.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-06-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of the variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability: 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency of the neural encoding of sound onsets.
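    "Predicted approximately 36% of the variability" corresponds to an r² of about 0.36 from a linear fit of behavioral sensitivity on the ERP measure. A sketch of that computation; the per-participant voltages and scores below are invented stand-ins, not the study's data:

```python
import random
from statistics import mean

def r_squared(x, y):
    """Proportion of variance in y explained by a linear fit on x (Pearson r^2)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov ** 2 / (var_x * var_y)

# Synthetic stand-in data: mean ERP voltage in the 210-270 ms window (uV) and an
# asynchrony-detection score per participant (arbitrary units).
random.seed(1)
voltage = [random.gauss(2.0, 1.0) for _ in range(30)]
score = [0.5 + 0.1 * v + random.gauss(0, 0.12) for v in voltage]

print(f"variance in sensitivity explained by the ERP measure: {r_squared(voltage, score):.0%}")
```
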

  6. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8 of 16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1 of 16 observers, and identified how the sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that alter only the quality of the visual component of the speech signal.

  7. Visual and audiovisual effects of isochronous timing on visual perception and brain activity.

    Science.gov (United States)

    Marchant, Jennifer L; Driver, Jon

    2013-06-01

    Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging studies of beat perception have focused on the auditory domain and show that the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher intensity oddball targets was better for isochronous than randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a timing-related brain network similar to that previously found primarily in auditory beat perception work. Finally, activity in multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and that this can also involve multisensory audiovisual brain mechanisms.

  8. Representation-based user interfaces for the audiovisual library of the year 2000

    Science.gov (United States)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues that will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators to existing content, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document's contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the documents' contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for economically feasible large-scale processing of documents. The work presented is sponsored by the Bibliothèque Nationale de France: it is part of a program aimed at developing, for image and sound documents, an experimental counterpart to the library's digitized-text reading workstation.

  9. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    Science.gov (United States)

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect correlated positively with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged and therefore determine the perceptual outcome.
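    The "optimal combination according to their respective spatial reliability" referred to in this abstract is commonly formalized as reliability-weighted (maximum-likelihood) cue integration, where each cue is weighted by its inverse variance. A sketch with hypothetical locations and variances (illustrative values, not the study's):

```python
def fuse_estimates(x_v, var_v, x_a, var_a):
    """Reliability-weighted (maximum-likelihood) fusion of a visual and an
    auditory location estimate. Weights are normalized inverse variances."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    w_a = 1 - w_v
    fused = w_v * x_v + w_a * x_a
    fused_var = 1 / (1 / var_v + 1 / var_a)  # always below either unimodal variance
    return fused, fused_var, w_v

# Central vision: visual estimate far more reliable -> strong visual capture
central = fuse_estimates(x_v=0.0, var_v=1.0, x_a=10.0, var_a=16.0)
# Periphery: visual reliability degraded -> weaker ventriloquist effect
peripheral = fuse_estimates(x_v=0.0, var_v=9.0, x_a=10.0, var_a=16.0)

print(f"central fused location: {central[0]:.2f} deg (visual weight {central[2]:.2f})")
print(f"peripheral fused location: {peripheral[0]:.2f} deg (visual weight {peripheral[2]:.2f})")
```

    With the visual variance raised in the periphery, the fused percept drifts toward the true sound location, mirroring the eccentricity-dependent decrease of the ventriloquist effect described above.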

  10. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-01-01

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception. PMID:27734953

  11. A Cross-Linguistic ERP Examination of Audiovisual Speech Perception between English and Japanese

    Directory of Open Access Journals (Sweden)

    Satoko Hisanaga

    2011-10-01

    Full Text Available According to recent ERP (event-related potential) studies, visual speech facilitates the neural processing of auditory speech for speakers of European languages in audiovisual speech perception. We examined whether this visual facilitation also holds for Japanese speakers, for whom a weaker susceptibility to visual influence has been reported behaviorally. We conducted a cross-linguistic experiment comparing the ERPs of Japanese and English language groups (JL and EL) when they were presented with audiovisual congruent as well as audio-only speech stimuli. The temporal facilitation by the additional visual speech was observed only for native speech stimuli, suggesting a role of articulatory experience in early ERP components. For native stimuli, the EL showed sustained visual facilitation for about 300 ms from audio onset. In contrast, the visual facilitation was limited to the first 100 ms for the JL, who instead showed a visual inhibitory effect at 300 ms from audio onset. Thus, the type of native language affects the neural processing of visual speech in audiovisual speech perception. This inhibition is consistent with the behaviorally reported weaker visual influence for the JL.

  12. Neural dynamics of audiovisual synchrony and asynchrony perception in 6-month-old infants

    Directory of Open Access Journals (Sweden)

    Franziska eKopp

    2013-01-01

    Full Text Available Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related potentials (ERPs). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as in the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants' ERPs were observed. The results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing and indicate young infants' predictive capacities for audiovisual temporal synchrony relations.

  13. Collective audiovisual creation and online collaborative culture: projects and strategies

    Directory of Open Access Journals (Sweden)

    Jordi Alberich Pascual

    2012-04-01

    Full Text Available This article analyzes the growing development of collectively created audiovisual projects on and through the Internet. To this end, it first explores the implications of interactive multimedia systems for redefining the traditional author function, as well as their connection to networked collaborative work strategies. We then focus on the use and development of free audiovisual software resources as a paradigmatic example of the vitality of a growing collaborative culture in the contemporary audiovisual field. Finally, the article concludes by establishing the basic identifying features of three distinct approaches to the tasks and work strategies involved in the collective audiovisual creation projects analyzed in the course of our research.

  14. Inverse Effectiveness and Multisensory Interactions in Visual Event-Related Potentials with Audiovisual Speech

    Science.gov (United States)

    Bushmakin, Maxim; Kim, Sunah; Wallace, Mark T.; Puce, Aina; James, Thomas W.

    2013-01-01

    In recent years, it has become evident that neural responses previously considered to be unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited to viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited to visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal to noise, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, we measured a monotonic increase of the amplitude of the visual P1-N1-P2 ERP complex during a spoken-word recognition task with increases in stimulus salience. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing. PMID:22367585

  15. Dynamic Teaching in the English Audiovisual and Speaking Class

    Institute of Scientific and Technical Information of China (English)

    高莹

    2011-01-01

    The audiovisual and speaking course is a course with distinctive features in English teaching. From three aspects, namely choosing teaching materials, preparing subject matter, and using English films as a teaching tool, this article elaborates the dynamic teaching method used in the teaching process and the effects this approach has achieved in improving students' ability to use English.

  16. From Survival to Sophistication: Hispanic Needs Equal Library Needs [and] Sources of Spanish-Language Materials.

    Science.gov (United States)

    Cuesta, Yolanda J.; Pearson, John C.

    1990-01-01

    The need to consider length of residency, language facility, and cultural subgroup when selecting library materials for the Hispanic community is discussed in the first article. The second lists 50 vendors of Spanish language books and audiovisual materials, including a contact, an address, and specialty areas for each vendor. (CLB)

  17. Maternal and Infant Nutrition Education Materials. January 1981-October 1988. Quick Bibliography Series.

    Science.gov (United States)

    Irving, Holly Berry

    The materials cited in this annotated bibliography focus on maternal and infant health and the critical importance of good nutrition. Audiovisuals and books are listed in 152 citations derived from online searches of the AGRICOLA database. Materials are available from the National Agricultural Library or through interlibrary loan to a local…

  18. Environnement et elaboration de materiel pedagogique (The Environment and the Elaboration of Instructional Materials).

    Science.gov (United States)

    Capelle, Marie-Jose; Archard-Bayle, Guy

    1982-01-01

    Describes the method and the instructional materials entitled "Contacts," which were developed specifically for Nigeria. The discussion covers the use of audiovisual supplementary material, the essentially African sociocultural reference of the text, the methodology peculiar to the Nigeria plurilingual situation, and the goals for both teachers…

  19. A Bibliography of Selected Materials on the Navajo and Zuni Indians.

    Science.gov (United States)

    Russell, Noma, ED.; And Others

    Intended to acquaint educators with various materials that may be used in the classroom to enhance the American Indian student's self-concept by acquainting him with the richness and variety of his cultural heritage, this bibliography cites 896 books, audiovisual aids, and periodicals about the Navajo and Zuni tribes. The materials, published…

  20. The Picmonic® Learning System: enhancing memory retention of medical sciences, using an audiovisual mnemonic Web-based learning platform

    Directory of Open Access Journals (Sweden)

    Yang A

    2014-05-01

    Full Text Available Adeel Yang,1,* Hersh Goel,1,* Matthew Bryan,2 Ron Robertson,1 Jane Lim,1 Shehran Islam,1 Mark R Speicher2 1College of Medicine, The University of Arizona, Tucson, AZ, USA; 2Arizona College of Osteopathic Medicine, Midwestern University, Glendale, AZ, USA *These authors contributed equally to this work Background: Medical students are required to retain vast amounts of medical knowledge on the path to becoming physicians. To address this challenge, multimedia Web-based learning resources have been developed to supplement traditional text-based materials. The Picmonic® Learning System (PLS; Picmonic, Phoenix, AZ, USA) is a novel multimedia Web-based learning platform that delivers audiovisual mnemonics designed to improve memory retention of medical sciences. Methods: A single-center, randomized, subject-blinded, controlled study was conducted to compare the PLS with traditional text-based material for retention of medical science topics. Subjects were randomly assigned to use two different types of study materials covering several diseases. Subjects randomly assigned to the PLS group were given audiovisual mnemonics along with text-based materials, whereas subjects in the control group were given the same text-based materials with key terms highlighted. The primary endpoints were the differences in performance on immediate, 1 week, and 1 month delayed free-recall and paired-matching tests. The secondary endpoints were the difference in performance on a 1 week delayed multiple-choice test and self-reported satisfaction with the study materials. Differences were calculated using unpaired two-tailed t-tests. Results: PLS group subjects demonstrated improvements of 65%, 161%, and 208% compared with control group subjects on free-recall tests conducted immediately, 1 week, and 1 month after study of materials, respectively. The results of performance on paired-matching tests showed an improvement of up to 331% for PLS group subjects. PLS group
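    The unpaired two-tailed t-test named in the Methods can be sketched as follows; the recall scores below are invented for illustration and are not the study's data:

```python
from statistics import mean, variance

def unpaired_t(group_a, group_b):
    """Student's two-sample (unpaired) t statistic with pooled variance,
    as used for between-group score comparisons. Returns (t, degrees of freedom)."""
    na, nb = len(group_a), len(group_b)
    pooled = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t = (mean(group_a) - mean(group_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

# Hypothetical free-recall scores (percent correct) for the two study arms
pls_scores = [62, 70, 58, 75, 66, 71, 69, 64]
control_scores = [41, 38, 47, 44, 35, 43, 40, 46]

t, df = unpaired_t(pls_scores, control_scores)
print(f"t({df}) = {t:.2f}")  # compare |t| against the two-tailed critical value (~2.14 for df = 14)
```

    In practice the p-value would come from a t distribution (e.g., `scipy.stats.ttest_ind`); the pure-stdlib version above only computes the statistic and degrees of freedom.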

  1. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    Science.gov (United States)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer death among both men and women. The five-year survival rate for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate these effects of respiratory motion, several motion management techniques are available that can reduce the dose to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiration irregularity. The rationale of this thesis was to study the improvement in the regularity of respiratory motion achieved by breathing coaching of lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed an insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. When modeling the respiratory cycles, the cosine and cosine⁴ models had the best correlation with individual respiratory cycles.
The overall respiratory motion probability distribution
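The cosine-family respiratory models and the duty-cycle trade-off described above can be illustrated with a short sketch. This is a minimal illustration under assumed parameters (a Lujan-style cos²ⁿ motion model with invented amplitude and period), not the thesis's actual fitting code:

```python
import numpy as np

def breathing_trace(t, z0=0.0, amplitude=10.0, period=4.0, n=2, phase=0.0):
    """Lujan-style respiratory motion model: z(t) = z0 - b*cos^(2n)(pi*t/tau - phi).

    n=1 gives the plain cosine-squared model; n=2 gives a cos^4 model.
    z0 is the end-exhale position, where the trace spends most of its time.
    """
    return z0 - amplitude * np.cos(np.pi * t / period - phase) ** (2 * n)

def residual_motion(trace, duty_cycle):
    """Peak-to-peak residual motion inside an exhale-based gating window.

    The gate accepts the fraction of samples closest to end-exhale
    (the maximum of this trace, i.e. z0).
    """
    n_keep = max(1, int(len(trace) * duty_cycle))
    kept = np.sort(trace)[-n_keep:]  # samples nearest the exhale plateau
    return kept.max() - kept.min()

t = np.linspace(0.0, 60.0, 6000)  # one minute of motion, ~100 Hz sampling
trace = breathing_trace(t)
for dc in (0.3, 0.5, 0.7):
    print(f"duty cycle {dc:.0%}: residual motion {residual_motion(trace, dc):.2f} mm")
```

With n = 2 (the cos⁴ form), the trace dwells near the exhale plateau, so an exhale-based gate with a small duty cycle admits little residual motion, while widening the gate past roughly 50% lets in the steep inhale excursion.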

  2. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the Internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

    Full Text Available The digital information society poses new challenges to researchers. As audiovisual communication has consolidated as a discipline, cultural studies offer an advantageous analytical perspective for approaching the new creative and consumption practices of audiovisual media. This article argues for the study of the audiovisual cultural products that this digital society produces, since they bear witness to the social changes taking place within it. Specifically, it proposes approaching social advertising and objects of collective creation on the Internet as a means of understanding the circumstances of our society.

  3. THE IMPACT OF A SINGLE AND CONTINUOUS AUDIOVISUAL STIMULATION ON HEART RATE VARIABILITY AND MECHANISMS OF AUTONOMIC REGULATION IN ATHLETES-CYCLICS

    Directory of Open Access Journals (Sweden)

    R. I. Aizman

    2014-01-01

    Full Text Available The purpose of this study was to investigate the effects of single and prolonged exposure to audiovisual stimulation (AVS) on heart rate variability and the mechanisms of autonomic regulation in athletes involved in cyclic sports. Material and methods. Sixty athletes aged 17–23 years, specializing in middle-distance running, took part in the study. Their running volume in zones of varying intensity was 185 to 225 km/month. The experiment was conducted in January–March 2014 at the Scientific Educational Center “Physiology of Ontogenesis” of the Department of Anatomy, Physiology and Life Safety of NSPU. The AVS training course consisted of 20–22 sessions, conducted every other day using a portable audiovisual stimulator “NOVO PRO” (USA). The ECG signal was recorded with the VNS-Micro hardware-software complex (Neurosoft, Ivanovo, Russia) in standard electrocardiogram lead II. Athletes who received the AVS course in the morning, before sports training loads, were given an activating program; after exercise, a relaxing program was used. Results. After 20–22 AVS sessions, the athletes showed a decreased influence of sympathetic regulation, a smaller contribution of central levels of management to heart rate regulation, and reduced strain on the regulatory systems. The influence of parasympathetic regulation increased and the autonomous regulation contour was strengthened. AVS increased the influence of respiratory waves on heart rhythm and led to more economical functional activity. A single AVS exposure caused a significant increase in the functioning of the autonomous regulation contour, a growing influence of parasympathetic effects, and a higher contribution of respiratory waves to the formation of heart rate, with both the activating and the relaxing programs. However, a more pronounced effect was

  4. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos Women and the audiovisual (industry today: regression, experiment and new narrative models

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

    Full Text Available This article analyses audiovisual artistic practices in the current context. It first describes the regression in the audiovisual practices of women artists: women are present neither as producers, nor as directors, nor as executives in the audiovisual industry, so traditional gender stereotypes are inevitably reconstructed and reinforced. The article then turns to feminist audiovisual art practice in the 1970s and 1980s, when taking up the camera became absolutely necessary, not only to give voice to many women, but also to reinscribe absent discourses and articulate a critical discourse on cultural representation. It also analyses how, from the 1990s onwards, these practices have explored new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an “expanded field” of exhibition. Finally, the article points to the relationship between feminist audiovisual practices and the complex territory of globalization and the information society: the narration of local experience has found in the audiovisual medium a privileged means of addressing problems of difference, identity, race and ethnicity.

  5. Processing of Audiovisually Congruent and Incongruent Speech in School-Age Children with a History of Specific Language Impairment: A Behavioral and Event-Related Potentials Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-01-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…

  6. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  7. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group performed worse than the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.

  8. Plantilla 2: Particularities of the audiovisual document. Origins of television documentation services. The challenge of digitization

    OpenAIRE

    2011-01-01

    Particularities of the physical medium and of the audiovisual message. Origins of audiovisual documentation. Origins of television documentation services. The challenge of digitizing television archives.

  9. Report on Meeting of Directors of National Audio-Visual Services and Documentary Film Units in South and East Asia, Kuala Lumpur, 31 July - August 1961.

    Science.gov (United States)

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    The purpose of this meeting was to develop cooperative action in Asia in the field of audiovisual aids in education, based on the work of existing national audiovisual services and documentary film units, and to consider cooperation between these services and units and the International Council for Educational Films. The following agenda was…

  10. Ethics and aesthetics of postmemory in contemporary audiovisual production

    Directory of Open Access Journals (Sweden)

    Laia Quílez Esteve

    2015-10-01

    Full Text Available Although the concept of postmemory was forged from reflections on the representation and transmission of the Holocaust, in recent years the term has also been used to describe a set of productions engendered in other geographical contexts (Spain, Argentina, Chile…), which thus appeal to equally diverse traumatic pasts. This paper aims to trace the conceptual bases and knots of what we might consider the “aesthetics (and ethics) of postmemory”. To this end, we try to unravel the approaches that, at the narrative, ideological and formal levels, underlie much of the contemporary audiovisual production that recovers, from a marked generational distance, the complex and elusive material of memory. Keywords: postmemory, generational memory, Spanish Civil War, documentary film, photography, trauma, Holocaust.

  11. Postproduction agents : audiovisual design and contemporary constraints for creativity

    OpenAIRE

    2012-01-01

    Moving images and sounds are processed creatively after they have been recorded or computer generated. These processes consist of design activities carried out by workers who hold ‘agency’ through the crafts they exercise, because these crafts are defined by the Moving Image Industry and are employed in practically the same way regardless of company. This thesis explores what material constraints there are for such creativity in contemporary Swedish professional moving image postproduction....

  12. The Impact of Politics 2.0 in the Spanish Social Media: Tracking the Conversations around the Audiovisual Political Wars

    Science.gov (United States)

    Noguera, José M.; Correyero, Beatriz

    After the consolidation of weblogs as interactive narratives and producers, audiovisual formats are gaining ground on the Web. Videos are spreading all over the Internet and establishing themselves as a new medium for political propaganda within social media, with tools as powerful as YouTube. This investigation proceeds in two stages: on the one hand, we examine how these audiovisual formats enjoyed an enormous amount of attention in blogs during the Spanish pre-electoral campaign for the elections of March 2008. On the other hand, the article investigates the social impact of this phenomenon using data from a content analysis of the blog discussion related to these videos, centered on the most popular Spanish political blogs. We also study when audiovisual political messages (made by politicians or by users) are “born” and “die” on the Web, and by what kinds of rules.

  13. Sharing killed the AVMSD star: the impossibility of European audiovisual media regulation in the era of the sharing economy

    Directory of Open Access Journals (Sweden)

    Indrek Ibrus

    2016-06-01

    Full Text Available The paper focuses on the challenges that the ‘sharing economy’ presents to the updating of the European Union’s (EU) Audiovisual Media Service Directive (AVMSD), part of the EU’s broader Digital Single Market (DSM) strategy. It suggests that the convergence of media markets and the emergence of video-sharing platforms may make the existing regulatory tradition obsolete. It demonstrates an emergent need for regulatory convergence – for the AVMSD to create equal terms for all technical forms of content distribution. It then shows how the operational logic of video-sharing platforms undermines the AVMSD’s aim of creating demand for professionally produced European content, potentially leading to the liberalisation of the EU audiovisual services market. Lastly, it argues that the DSM strategy, combined with sharing-related network effects, may facilitate the evolution of an oligopolistic structure in the EU audiovisual market, potentially harmful to cultural diversity.

  14. 基础阶段西班牙语视听说课程的教学思考%Thoughts on Basic Spanish Audio-Visual Teaching

    Institute of Scientific and Technical Information of China (English)

    杨洁

    2012-01-01

    As a compulsory course in the Spanish major, the basic-stage Spanish audio-visual course supplements and extends the intensive reading course. By listening to recordings and news and by watching DVDs, video and other material, the course exposes students to the different pronunciations and intonations of the many Spanish-speaking countries and gives them a better understanding of those countries' social and cultural backgrounds and current development. It plays an important role not only in broadening students' horizons but also in completing their theoretical knowledge system. However, due to the limited foreign-language materials available, some problems remain in Spanish audio-visual teaching. The author discusses her thoughts on this course based on her teaching experience over recent years.

  15. 法语视听说课程中的跨文化交际研究%Study on cross-cultural communication and audiovisual course of French

    Institute of Scientific and Technical Information of China (English)

    李娟; 席小妮

    2013-01-01

    This paper takes the close relationship between intercultural communication and the audiovisual course in French teaching as its starting point, with the aim of cultivating students' intercultural communicative competence. It analyses the importance of the audiovisual course for cross-cultural communication and proposes corresponding measures for training students in intercultural communication. It argues that, in the process of teaching French, teachers must learn to make rational use of original audiovisual material to mobilize and develop students' verbal and non-verbal communication skills, so that students can achieve the teaching goal of internalizing intercultural communicative awareness.

  16. THE IMPROVEMENT OF AUDIO-VISUAL BASED DANCE APPRECIATION LEARNING AMONG PRIMARY TEACHER EDUCATION STUDENTS OF MAKASSAR STATE UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Wahira

    2014-06-01

    Full Text Available This research aimed to improve the dance-appreciation skills of Primary Teacher Education students at Makassar State University, to improve their perception of audio-visual based art appreciation, to increase their interest in the audio-visual based art education subject, and to increase their responses to the subject. This was classroom action research using the research design of Kemmis & McTaggart, conducted with 42 students of Primary Teacher Education at Makassar State University. Data were collected through observation, questionnaires, and interviews, and analysed using descriptive qualitative and quantitative techniques. The results were: (1) the students' achievement in audio-visual based dance appreciation improved: pre-cycle 33.33%, cycle I 42.85%, and cycle II 83.33%; (2) the students' perception of audio-visual based dance appreciation improved: cycle I 59.52%, and cycle II 71.42%; perception of the subject obtained through structured interviews in cycles I and II was 69.83%, a high category; (3) the students' interest in the art education subject, especially audio-visual based dance appreciation, increased: cycle I 52.38% and cycle II 64.28%; interest in the subject obtained through structured interviews was 69.50%, a high category; and (4) the students' response to audio-visual based dance appreciation increased: cycle I 54.76% and cycle II 69.04%, a good category.

  17. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli combining visual stimuli with auditory stimuli from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  18. De la competencia digital y audiovisual a la competencia mediática: dimensiones e indicadores

    Directory of Open Access Journals (Sweden)

    María Amor Pérez Rodríguez

    2012-10-01

    Full Text Available The need to conceptualize media competence leads to a broader perspective in which aspects of audiovisual competence and digital competence converge. Both form the frame of reference for “Information processing and digital competence”, a core competence of the current curriculum in our country. Despite the experiences under way in both audiovisual and digital communication, there have been few attempts to define precisely the knowledge, skills and attitudes required to be considered competent in these fields, which are essential when carrying out teaching-learning processes. This work starts from the analysis of six significant studies on digital and audiovisual literacy. Considering aspects such as the target audiences, the conceptualization used in each study, the dimensions proposed, the type of taxonomy and indicators, and the didactic proposals (objectives, contents, activities), these are systematized into a series of dimensions and indicators that define media competence and support the design of activities for a didactic proposal in accordance with the established indicators. The research allows us to affirm the need for terminological convergence, as well as for the development of resources, based on the defined indicators, that effectively address the different areas of media competence and can support didactic interventions in the different groups that make up today's society.

  19. [Audiovisual stimulation in children with severely limited motor function: does it improve their quality of life?].

    Science.gov (United States)

    Barja, Salesa; Muñoz, Carolina; Cancino, Natalia; Núñez, Alicia; Ubilla, Mario; Sylleros, Rodrigo; Riveros, Rodrigo; Rosas, Ricardo

    2013-08-01

    Introduction. Children with neurological diseases that severely limit mobility have a poor quality of life (QoL). Aim. To study whether the QoL of such patients improves with the application of an audiovisual stimulation program. Patients and methods. Prospective study of nine children, six of them boys (mean age: 42.6 ± 28.6 months), with severely limited mobility and prolonged hospitalization. Two audiovisual stimulation programs were developed and, together with videos, delivered through a specially designed structure, twice a day for 10 minutes over 20 days: passively for the first ten days and guided by the observer for the second ten. Biological, behavioral and cognitive variables were recorded, and an adapted QoL survey was applied. Results. Three cases of spinal muscular atrophy, two of congenital muscular dystrophy, two of myopathy and two with other diagnoses. Eight patients completed follow-up. At baseline they presented fair QoL (7.2 ± 1.7 points; median: 7.0; range: 6-10), which improved to good by the end (9.4 ± 1.2 points; median: 9.0; range: 8-11), with an intra-individual difference of 2.1 ± 1.6 (median: 2.5; range: -1 to 4; 95% CI = 0.83-3.42; p = 0.006). Improvement in cognition and a favorable perception by caregivers were detected. There was no change in biological or behavioral variables. Conclusion. Audiovisual stimulation can improve the quality of life of children with severely limited mobility.

  20. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
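The masking-based classification procedure lends itself to a toy simulation: reveal random frames of the visual stimulus on each trial, record a simulated observer's responses, and compute a classification image relating frame visibility to the response. All specifics below (trial count, the "critical" frame index, response probabilities) are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 2000, 30
critical = 12  # hypothetical frame carrying the decisive visual cue

# Each trial reveals a random subset of frames (True = mouth visible).
masks = rng.random((n_trials, n_frames)) < 0.5

# Simulated observer: reports /apa/ (the auditory percept) mostly when
# the critical visual frame is hidden, mirroring the ~35% vs ~5% split.
p_apa = np.where(masks[:, critical], 0.05, 0.65)
responses = rng.random(n_trials) < p_apa

# Classification image: difference in frame visibility between response classes.
# Frames whose visibility suppresses /apa/ responses come out strongly negative.
ci = masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)
print("most diagnostic frame:", int(np.argmin(ci)))
```

In the actual study the same logic runs over video frames and mouth regions, and at several audiovisual offsets, yielding a spatiotemporal map rather than a single index.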

  1. SU-E-J-29: Audiovisual Biofeedback Improves Tumor Motion Consistency for Lung Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D; Pollock, S; Makhija, K; Keall, P [The University of Sydney, Camperdown, NSW (Australia); Greer, P [The University of Newcastle, Newcastle, NSW (Australia); Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Arm, J; Hunter, P [Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Kim, T [The University of Sydney, Camperdown, NSW (Australia); University of Virginia Health System, Charlottesville, VA (United States)

    2014-06-01

    Purpose: To investigate whether the breathing-guidance system of audiovisual (AV) biofeedback improves tumor motion consistency for lung cancer patients, in order to minimize respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated in five lung cancer patients (age: 55 to 64), who underwent a training session to familiarize themselves with AV biofeedback, followed by two MRI sessions on different dates (pre- and mid-treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from image pixel values after normalizing the 2D images per dataset and applying a Gaussian filter per image. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion, towards achieving more accurate medical imaging and radiation therapy procedures.
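A minimal sketch of the kind of pixel-based motion extraction described here (per-dataset normalization, per-image Gaussian smoothing, then tracking a superior-inferior position over time) might look as follows; the centroid step and all parameters are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def si_motion_from_frames(frames, sigma=2.0):
    """Estimate superior-inferior (row-axis) motion from a stack of 2D frames.

    Steps loosely mirror the abstract: normalize pixel values across the
    whole dataset, smooth each frame with a Gaussian filter, then track
    the intensity-weighted row centroid over time.
    """
    frames = np.asarray(frames, dtype=float)
    frames = (frames - frames.min()) / (frames.max() - frames.min())  # per-dataset normalization
    rows = np.arange(frames.shape[1])
    centroids = []
    for frame in frames:
        smooth = gaussian_filter(frame, sigma)  # per-image smoothing
        profile = smooth.sum(axis=1)            # collapse columns -> SI profile
        centroids.append((profile * rows).sum() / profile.sum())
    return np.array(centroids)

# Synthetic test stack: a Gaussian blob oscillating 8 pixels peak-to-peak in SI.
phases = np.linspace(0.0, 2.0 * np.pi, 40)
yy, xx = np.mgrid[0:64, 0:64]
frames = [np.exp(-(((yy - (32 + 4 * np.sin(p))) ** 2 + (xx - 32) ** 2) / 50.0)) for p in phases]
motion = si_motion_from_frames(frames)
print(f"recovered SI motion range: {motion.max() - motion.min():.1f} px")
```

Run on the synthetic stack, the centroid track recovers the roughly 8-pixel peak-to-peak oscillation built into the frames, which is the kind of per-scan motion range the abstract compares between free breathing and AV biofeedback.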

  2. A produção audiovisual na virtualização do ensino superior: subsídios para a formação docente/Audiovisual production in the virtualization of higher education: a contribution for teacher education

    Directory of Open Access Journals (Sweden)

    Dulce Márcia da Cruz

    2007-01-01

    Full Text Available The past ten years have seen a significant expansion of distance and hybrid education in Higher Education in Brazil. Before 1995, the production of distance education (DE) was a task for radio and TV professionals; with the adoption of digital media this process also passed into the hands of teachers, who can now produce, transmit, and manage courses and disciplines on the Internet, becoming authors of the audiovisual and hypertextual production of their lessons. The objective of this article is to give teachers basic notions of how to produce for DE and for hybrid disciplines using audiovisual and hypertextual media, describing the main elements of cinematographic language and of digital narratives that incorporate interactivity. Finally, it presents some principles of production for the media most common in Brazilian DE: printed material, teleconference, videoconference, multimedia/hypermedia, and virtual learning environments.

  3. Globalización y diversidad cultural en la política audiovisual europea

    OpenAIRE

    2002-01-01

    The European Union's audiovisual policy seeks to confront the risks that globalization poses to cultural diversity. To this end, it relies on a series of legislative and policy measures that need to be contextualized and assessed against the objectives set by the Community institutions. Their study and analysis leads to reflection on the interest and advisability of these measures and on their grounding in the European context.

  4. Tiempo de crisis. El patrimonio audiovisual valenciano frente al cambio tecnológico

    Directory of Open Access Journals (Sweden)

    Lahoz Rodrigo, Juan Ignacio

    2014-07-01

    Full Text Available After three decades of self-government, the Generalitat Valenciana has created, promoted, compiled, and restored an audiovisual heritage of incalculable cultural interest, whose two main conservation centres are the Filmoteca of CulturArts-IVAC and the RTVV archive. This heritage is at a critical point, facing the need for technological transformation at a moment of great economic and political difficulty. The closure of RTVV and the uncertainty over the future of its archive lead us to set its heritage character against the temptation to privatize its management, and to recall the recommendations of the EU and UNESCO that the safeguarding of moving images be entrusted to public, non-profit archives. If the fragility of film, video, and digital image-file carriers is the key issue for their long-term conservation, even more decisive today is the dominance of digital technology in every sphere of the generation, access, and conservation of audiovisual production, since it entails a pattern of obsolescence that could paralyse Valencian audiovisual heritage if the Generalitat does not confront it immediately and decisively. Equipping the Filmoteca of CulturArts-IVAC with the technology needed to digitize its holdings, continuing the digitization plans of the RTVV archive and encouraging those of all audiovisual archives of the Comunitat Valenciana, reinforcing, in line with EU recommendations, the conservationist emphasis of instruments such as public production subsidies or legal deposit, and fostering the development of the Catalogue of Valencian Audiovisual Heritage are measures that should contribute to the long-term conservation of this heritage.

  5. Análisis del museo como narración audiovisual

    OpenAIRE

    2011-01-01

    The museum adapts to the times, making audiovisual resources its own through discursive proposals for interaction and learning. This article presents some lines of research centered on the museum as audiovisual narrative from the standpoint of Communication Theory and Film Analysis, among other perspectives, offering the example of the Museo CajaGRANADA Memoria de Andalucía. Keywords: analysis, visual narrative, media, museum, visual culture.

  6. Política audiovisual europea y diversidad cultural en la era digital

    Directory of Open Access Journals (Sweden)

    Ma. Trinidad García Leiva

    2016-01-01

    Full Text Available This article studies the implementation of the 2005 UNESCO Convention on cultural diversity in the formulation of European policies for the digital audiovisual sector. Based on a critical documentary analysis, the treaty's influence is confirmed, although more in the sphere of promotion than of protection, and with a role that legitimizes existing measures more than it generates new initiatives.

  7. Las relaciones entre cine, cultura e historia: una perspectiva de investigación audiovisual

    Directory of Open Access Journals (Sweden)

    Edward Goyeneche-Gómez

    2012-01-01

    Full Text Available This article analyzes an audiovisual research perspective grounded in the study of the relations between cinema, culture, and history, which makes it possible to understand how contemporary societies construct and use, within complex historical processes, specific modes of filmic representation and codification linked to cultural and aesthetic models that depend on broader ideological systems.

  8. THE GALICIAN AUDIOVISUAL IN MESTRE MATEO AWARDS. PROTOCOL AT THE CEREMONY

    Directory of Open Access Journals (Sweden)

    Anna Amoros Pons

    2013-11-01

    Full Text Available The text summarizes the main results of research that forms part of a broader project on the study, in the field of cinema, of specialized public relations and ceremonial protocol as a communication strategy (persuasive, indirect and/or covert) at film events and, specifically, at award ceremonies. In this article we focus on the geographical context of the Galician audiovisual sector and on the Mestre Mateo Awards ceremony in the first decade of the 21st century. The article provides information on the planning of this event and on the communication results obtained from its realization.

  9. An interactive audio-visual installation using ubiquitous hardware and web-based software deployment

    Directory of Open Access Journals (Sweden)

    Tiago Fernandes Tavares

    2015-05-01

    Full Text Available This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware that is built into most modern personal computers. This scenario imposes specific technical restrictions, which lead to solutions combining both the technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the experience provided is interesting and engaging, regardless of the use of minimal hardware.

  10. PHYSIOLOGICAL MONITORING OF ACS OPERATORS IN AUDIO-VISUAL SIMULATION OF AN EMERGENCY

    Directory of Open Access Journals (Sweden)

    S. S. Aleksanin

    2010-01-01

    Full Text Available Using a ship-simulator automated control system (ACS), we investigated the information content of physiological monitoring of cardiac rhythm for assessing the reliability and noise immunity of operators of various specializations under audio-visual simulation of an emergency. In parallel, we studied the effectiveness of protection against the adverse effects of electromagnetic fields. Monitoring cardiac rhythm during a virtual crash makes it possible to differentiate, by specialization, the degree of tension in the operators' regulatory systems of body functions, and to note the positive effect of using means of protection against exposure to electromagnetic fields.

  11. Using Play Activities and Audio-Visual Aids to Develop Speaking Skills

    Directory of Open Access Journals (Sweden)

    Casallas Mutis Nidia

    2000-08-01

    Full Text Available A project was conducted in order to improve oral proficiency in English through the use of play activities and audio-visual aids with first-grade students at a bilingual school in La Calera. They were between 6 and 7 years old. The five students with the lowest oral language proficiency were selected as the sample for this study. According to the results, it is clear that the sample improved their English oral proficiency a great deal. However, the process has to be continued, because this skill needs constant practice in order to develop.

  12. Temporalitats digitals. Aproximació a una teoria del temps cinemàtic en les obres audiovisuals interactives

    OpenAIRE

    Sora, Carles

    2015-01-01

    This thesis presents a theoretical approach to the study of cinematic time in interactive audiovisual works, from two perspectives: that of narrative structuring, its uses and temporal treatment; and that of its experience and perception. The constant advance of information and communication technologies has changed the way we have conceived of and made use of time through moving images throughout history. In digital media, multiple...

  13. Constructing a survey over time: Audio-visual feedback and theatre sketches in rural Mali

    Directory of Open Access Journals (Sweden)

    Véronique Hertrich

    2011-10-01

    Full Text Available Knowledge dissemination is an emerging issue in population studies, both in terms of ethics and of data quality. The challenge is especially important in long-term follow-up surveys, and it requires methodological imagination when the population is illiterate. The paper presents the dissemination project developed in a demographic surveillance system implemented in rural Mali over the last 20 years. After initial experience with document transfer, the feedback strategy was developed through audio-visual shows and theatre sketches. The advantages and drawbacks of these media are discussed in terms of scientific communication and of the construction of dialogue with the target population.

  14. Telepuebla y Ebarrios televisión: dos experiencias de comunicación audiovisual

    OpenAIRE

    2005-01-01

    Do we teach what we know, or what our students really need? In the Information Society, many of us teachers continue teaching reading to students who, to a large extent, will not read in adulthood; these students devote about a thousand hours a year to television, more time than they spend in class. Since audiovisual illiteracy can leave them defenceless against television messages, the school must adapt to the new reality and commit itself to...

  15. Documentary Realism, Sampling Theory and Peircean Semiotics: electronic audiovisual signs (analog or digital as indexes of reality

    Directory of Open Access Journals (Sweden)

    Hélio Godoy

    2007-07-01

    Full Text Available This paper addresses Documentary Realism, focusing on the physical phenomena of transduction that take place in analog and digital audiovisual systems, analyzed here in the light of Sampling Theory within the framework of Shannon and Weaver's Information Theory. Transduction is a process by which one type of energy is transformed into another, or by which information is transcoded. Within the scope of Documentary Realism, it cannot be claimed that electronic audiovisual signs, because of their digital technical features, lead to a rupture with reality. Rather, the digital documentary, based on electronic digital cinematography, is still an index of reality.
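The Sampling Theory the paper leans on can be illustrated with a minimal aliasing experiment (my own example, not taken from the article): a 6 Hz tone sampled below its 12 Hz Nyquist rate is transduced into an indistinguishable lower-frequency alias, while adequate sampling preserves the signal's identity:

```python
import numpy as np

f_sig = 6.0                   # signal frequency (Hz)
fs_bad, fs_good = 8.0, 48.0   # below and above the 12 Hz Nyquist rate

def dominant_freq(fs, duration=4.0):
    """Sample a 6 Hz cosine at rate fs and return the peak FFT frequency."""
    n = int(fs * duration)
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * f_sig * t)
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]

print(dominant_freq(fs_good))  # 6.0 Hz: faithful transduction
print(dominant_freq(fs_bad))   # 2.0 Hz: alias at |6 - 8| = 2 Hz
```

The point is the paper's: the fidelity of an electronic audiovisual sign to its referent is a matter of sampling conditions, not of analog-versus-digital per se.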

  16. Identity, culture and development through participatory audiovisual: The Youth Path Project case from Costa Rica’s UNESCO

    Directory of Open Access Journals (Sweden)

    Ángel V. Rabadán

    2015-06-01

    Full Text Available In this article we present the use of audiovisual media as a strategic element capable of integrating the concepts of culture and development, promoting intercultural dialogue and participation. The concept of cultural identity is present through the coexistence and creativity of the young people participating in the "Youth Path" program proposed by UNESCO and developed in Central America in order to promote development and inclusion strategies. Ethnographic audiovisual work serves here as a fundamental tool for generating knowledge processes, communication links, and interaction.

  17. Big Data between audiovisual displays, artifacts, and aesthetic experience

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2016-01-01

    This article discusses artistic practices and artifacts that are occupied with exploring data through visualization and sonification strategies as well as with translating data into materially solid formats and embodied processes. By means of these examples the overall aim of the article is to critically question how and whether such artistic practices can eventually lead to the experience and production of knowledge that could not otherwise be obtained via more traditional ways of data representation. The article, thus, addresses both the problems and possibilities entailed in extending the use of large data sets – or Big Data – into the sphere of art and the aesthetic. Central to the discussion here is the analysis of how different structuring principles of data and the discourses that surround these principles shape our perception of data. This discussion involves considerations on various...

  18. Presentation of political Alliances in the Romanian audiovisual media

    Directory of Open Access Journals (Sweden)

    Flaviu Calin RUS

    2011-01-01

    Full Text Available This article highlights the way in which the main political alliances have been formed in Romania in the last 20 years, as well as the way they have been reflected in the media. Moreover, we analyze the involvement of journalists and political analysts in explaining these political events. The study focuses on four political alliances, namely: CDR (the Romanian Democratic Convention), D.A. (Y.E.S. – Justice and Truth, between PNL – the National Liberal Party – and PD – the Democratic Party), ACD (the Centre-Right Alliance, between PNL and PC – the Conservative Party), and USL (the Social-Liberal Union, between PSD – the Social Democrat Party, PNL and PC).

  19. Audio-visual training-aid for speechreading

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich; Gebert, H.

    2011-01-01

    ... of classroom teaching, but the system may also be used as a new e-learning or, in general, distance learning tool for hearing impaired people. It presents a facial animation on the computer screen with synchronized speech output and is driven by input text sequences in orthographic transcription. The input may ... -recorded video material; it also allows the teacher to produce and combine a large number of individual lessons without the need of expensive recording equipment. Our system uses a scene manager to enhance teaching. It allows the creation of different scenarios that are composed of appropriate background images ... modular structure of the software package and the centralized event manager, it is possible to add or replace specific modules when needed. The present version of our teacher-student module uses a hierarchically structured composition of important single words and short phrases, supplemented by easy ...

  20. Reflection on the Teaching Reform of the Audiovisual Program Production Course Group under the New Media Environment

    Institute of Scientific and Technical Information of China (English)

    王梅

    2016-01-01

    The rapid development of the new media has had a huge impact on the forms of audiovisual program production. The traditional TV programs previously used as routine course material can therefore no longer meet the demands of teaching under the new media trend. As a result, the current audiovisual program production course group should be reformed in terms of teaching material, review and practical-training methods, and the system for evaluating students' performance, so as to foster practical, new media-oriented talent.

  1. Alfasecuencialización: la enseñanza del cine en la era del audiovisual Sequential literacy: the teaching of cinema in the age of audio-visual speech

    Directory of Open Access Journals (Sweden)

    José Antonio Palao Errando

    2007-10-01

    Full Text Available In the so-called «information society», film studies have been diluted in the pragmatic and technological approach to audiovisual discourse, just as the enjoyment of cinema itself has been caught in the net of the DVD and hypertext. Cinema itself reacts to this through complex narrative structures that distance it from standard audiovisual discourse. The function of film studies and of their university teaching should be the reintroduction of the subject rejected by informative knowledge, by means of the interpretation of the film text.

  2. The presentation of expert testimony via live audio-visual communication.

    Science.gov (United States)

    Miller, R D

    1991-01-01

    As part of a national effort to improve efficiency in court procedures, the American Bar Association has recommended, on the basis of a number of pilot studies, increased use of current audio-visual technology, such as telephone and live video communication, to eliminate delays caused by unavailability of participants in both civil and criminal procedures. Although these recommendations were made to facilitate court proceedings and for the convenience of attorneys and judges, they also have the potential to save significant time for clinical expert witnesses. The author reviews the studies of telephone testimony that were done by the American Bar Association and other legal research groups, as well as the experience in one state forensic evaluation and treatment center. He also reviews the case law on the issue of remote testimony. He then presents data from a national survey of state attorneys general concerning the admissibility of testimony via audio-visual means, including video depositions. Finally, he concludes that the option to testify by telephone provides significant savings in precious clinical time for forensic clinicians in public facilities, and urges such clinicians to work actively to convince courts and/or legislatures in states that do not permit such testimony (currently the majority) to consider accepting it, to improve the effective use of scarce clinical resources.

  3. earGram Actors: An Interactive Audiovisual System Based on Social Behavior

    Directory of Open Access Journals (Sweden)

    Peter Beyls

    2015-11-01

    Full Text Available In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems with artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns. On the other hand, earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A performer thus improvises within the behavioural scope of the system while evaluating the apparent connections between parameter values and actual complexity of the system output.

  4. Policing Fish at Boston's Museum of Science: Studying Audiovisual Interaction in the Wild.

    Science.gov (United States)

    Goldberg, Hannah; Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2015-08-01

    Boston's Museum of Science supports researchers whose projects advance science and provide educational opportunities to the Museum's visitors. For our project, 60 visitors to the Museum played "Fish Police!!," a video game that examines audiovisual integration, including the ability to ignore irrelevant sensory information. Players, who ranged in age from 6 to 82 years, made speeded responses to computer-generated fish that swam rapidly across a tablet display. Responses were to be based solely on the rate (6 or 8 Hz) at which a fish's size modulated, sinusoidally growing and shrinking. Accompanying each fish was a task-irrelevant broadband sound, amplitude modulated at either 6 or 8 Hz. The rates of visual and auditory modulation were either Congruent (both 6 Hz or 8 Hz) or Incongruent (6 and 8 or 8 and 6 Hz). Despite being instructed to ignore the sound, players of all ages responded more accurately and faster when a fish's auditory and visual signatures were Congruent. In a controlled laboratory setting, a related task produced comparable results, demonstrating the robustness of the audiovisual interaction reported here. Some suggestions are made for conducting research in public settings.
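The 2x2 rate design described above (visual size modulation crossed with auditory amplitude modulation at 6 or 8 Hz) can be sketched as follows; the sampling rate, modulation depth, and helper names are illustrative, not the game's actual parameters:

```python
import numpy as np
from itertools import product

RATES = (6.0, 8.0)  # Hz: visual size-modulation and auditory AM rates

def modulation(rate, fs=60.0, duration=1.0, depth=0.2):
    """Sinusoidal modulation envelope, e.g. a fish's size scale or a sound's AM."""
    t = np.arange(int(fs * duration)) / fs
    return 1.0 + depth * np.sin(2 * np.pi * rate * t)

# The 2x2 design: every visual rate crossed with every auditory rate,
# labelled Congruent when the two rates match.
trials = [{"visual_hz": v, "audio_hz": a, "congruent": v == a}
          for v, a in product(RATES, RATES)]
print(sum(t["congruent"] for t in trials))  # 2 congruent cells out of 4
```

The reported effect is that responses keyed to `visual_hz` are faster and more accurate on the `congruent` cells even though `audio_hz` is task-irrelevant.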

  5. Asynchrony adaptation reveals neural population code for audio-visual timing.

    Science.gov (United States)

    Roach, Neil W; Heron, James; Whitaker, David; McGraw, Paul V

    2011-05-01

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible--adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects.
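The three modeling ingredients listed above, delay-tuned units, a biased readout of a small population, and adaptation as gain modulation, can be sketched numerically. The tuning width, gain factor, and population size below are illustrative choices, not the authors' fitted parameters:

```python
import numpy as np

prefs = np.linspace(-300, 300, 7)  # preferred AV delays (ms): a sparse population
sigma = 120.0                       # tuning width (ms)

def responses(delay_ms, gains):
    """Gaussian tuning curves scaled by a per-neuron gain."""
    return gains * np.exp(-0.5 * ((delay_ms - prefs) / sigma) ** 2)

def readout(delay_ms, gains):
    """Population-vector readout: response-weighted mean preferred delay."""
    r = responses(delay_ms, gains)
    return float(np.sum(r * prefs) / np.sum(r))

baseline = np.ones_like(prefs)
# Adapting to a +100 ms (audio-leading) lag reduces the gain of units tuned near it.
adapt = 100.0
adapted = 1.0 - 0.5 * np.exp(-0.5 * ((adapt - prefs) / sigma) ** 2)

print(readout(0.0, baseline))  # ~0: unbiased before adaptation
print(readout(0.0, adapted))   # negative: repulsed away from the adapted delay
```

Even this toy version reproduces the qualitative signature: after adaptation, a physically simultaneous stimulus reads out as shifted away from the adapted delay, with no change in perceptual latency anywhere in the model.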

  6. A comparison between audio and audiovisual distraction techniques in managing anxious pediatric dental patients

    Directory of Open Access Journals (Sweden)

    Prabhakar A

    2007-01-01

    Full Text Available Pain is not the sole reason for fear of dentistry. Anxiety, or the fear of the unknown during dental treatment, is a major factor, and it has long been a major concern for dentists. The main aim of this study was therefore to evaluate and compare two distraction techniques, viz., audio distraction and audiovisual distraction, in the management of anxious pediatric dental patients. Sixty children aged between 4-8 years were divided into three groups. Each child had four dental visits: a screening visit, a prophylaxis visit, a cavity preparation and restoration visit, and an extraction visit. The child's anxiety level at each visit was assessed using a combination of four measures: Venham's picture test, Venham's rating of clinical anxiety, pulse rate, and oxygen saturation. The values obtained were tabulated and subjected to statistical analysis. It was concluded that the audiovisual distraction technique was more effective in managing anxious pediatric dental patients than the audio distraction technique.

  7. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing, and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. "simultaneity" and "similarity" among the motion command, sound onsets, and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in periodic or non-periodic motion; the object then emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we obtained a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) related to robot motion (efferent signals).
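The "simultaneity" grouping rule, pairing efferent motor-command times with sound onsets inside a tolerance window, can be sketched like this; the window size and event timestamps are invented for illustration:

```python
def match_onsets(motor_times, sound_onsets, window=0.15):
    """Pair each efferent motor event with a sound onset within `window` seconds.

    Returns the fraction of motor events with a matching onset: a crude
    'simultaneity' score for attributing the sound to the manipulated object.
    """
    matched = 0
    for m in motor_times:
        if any(abs(m - s) <= window for s in sound_onsets):
            matched += 1
    return matched / len(motor_times)

shakes = [0.5, 1.0, 1.5, 2.0, 2.5]     # periodic shaking commands (s)
bell = [0.55, 1.08, 1.52, 2.03, 2.61]  # bell onsets, slightly lagged
metronome = [0.2, 0.8, 1.3, 1.9, 2.8]  # distractor source in the environment

print(match_onsets(shakes, bell))       # high: the bell tracks the robot's motion
print(match_onsets(shakes, metronome))  # low: the distractor does not
```

A full implementation would add the "similarity" cue (matching periodicities, as in the paper's periodic versus non-periodic conditions) rather than relying on coincidence counts alone.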

  8. ANALYSIS OF MULTIMODAL FUSION TECHNIQUES FOR AUDIO-VISUAL SPEECH RECOGNITION

    Directory of Open Access Journals (Sweden)

    D.V. Ivanko

    2016-05-01

    Full Text Available The paper provides an analytical review covering the latest achievements in the field of audio-visual (AV) fusion (integration of multimodal information). We discuss the main challenges and report on approaches to address them. One of the most important tasks of AV integration is to understand how the modalities interact and influence each other. The paper addresses this problem in the context of AV speech processing and speech recognition. In the first part of the review we set out the basic principles of AV speech recognition and give the classification of audio and visual features of speech. Special attention is paid to the systematization of the existing techniques and the AV data fusion methods. In the second part, based on our analysis of the research area, we provide a consolidated list of tasks and applications that use AV fusion. We also indicate the methods, techniques, and audio and video features used. We propose a classification of AV integration and discuss the advantages and disadvantages of different approaches. We draw conclusions and offer our assessment of the future in the field of AV fusion. In further research we plan to implement a system of audio-visual Russian continuous speech recognition using advanced methods of multimodal fusion.
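As a toy illustration of the fusion levels such reviews typically survey (not code from the paper), the sketch below contrasts feature-level fusion, which concatenates per-modality features into one observation, with decision-level fusion, which combines per-modality class scores under a reliability weight:

```python
import numpy as np

rng = np.random.default_rng(1)
audio_feat = rng.normal(size=13)  # MFCC-like audio features (stand-in values)
video_feat = rng.normal(size=6)   # lip-shape visual features (stand-in values)

# Feature-level (early) fusion: one concatenated observation vector,
# fed to a single classifier.
early = np.concatenate([audio_feat, video_feat])

# Decision-level (late) fusion: combine per-modality class log-likelihoods
# with reliability weights, e.g. down-weighting audio in acoustic noise.
def late_fusion(audio_loglik, video_loglik, audio_weight=0.3):
    scores = audio_weight * audio_loglik + (1 - audio_weight) * video_loglik
    return int(np.argmax(scores))

audio_loglik = np.array([-2.0, -1.0, -3.0])  # audio favours class 1
video_loglik = np.array([-0.5, -4.0, -2.5])  # video favours class 0
print(late_fusion(audio_loglik, video_loglik))  # video dominates at weight 0.3
```

Raising `audio_weight` toward 1 hands the decision back to the audio stream, which is exactly the modality-interaction trade-off the review examines.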

  9. Spectacular Attractions: Museums, Audio-Visuals and the Ghosts of Memory

    Directory of Open Access Journals (Sweden)

    Mandelli Elisa

    2015-12-01

    Full Text Available In the last decades, moving images have become a common feature not only in art museums, but also in a wide range of institutions devoted to the conservation and transmission of memory. This paper focuses on the role of audio-visuals in the exhibition design of history and memory museums, arguing that they are privileged means to achieve the spectacular effects and the visitors’ emotional and “experiential” engagement that constitute the main objective of contemporary museums. I will discuss this topic through the concept of “cinematic attraction,” claiming that when embedded in displays, films and moving images often produce spectacular mises en scène with immersive effects, creating wonder and astonishment, and involving visitors on an emotional, visceral and physical level. Moreover, I will consider the diffusion of audio-visual witnesses of real or imaginary historical characters, presented in Phantasmagoria-like displays that simulate ghostly and uncanny apparitions, creating an ambiguous and often problematic coexistence of truth and illusion, subjectivity and objectivity, facts and imagination.

  10. Kernel-Based Sensor Fusion With Application to Audio-Visual Voice Activity Detection

    Science.gov (United States)

    Dov, David; Talmon, Ronen; Cohen, Israel

    2016-12-01

    In this paper, we address the problem of multiple view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, by relying particularly on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for the selection of the kernel bandwidth, a parameter, which, as we show, has important implications on the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed to the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and non-speech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance.
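The product-of-kernels construction the paper builds on, with the bandwidth as the critical parameter, can be sketched as follows; the feature dimensions and bandwidth value are illustrative stand-ins, not the authors' settings:

```python
import numpy as np

def gaussian_kernel(X, bandwidth):
    """Pairwise affinity K_ij = exp(-||x_i - x_j||^2 / bandwidth^2) for one view."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / bandwidth ** 2)

rng = np.random.default_rng(0)
n = 20
audio_view = rng.normal(size=(n, 5))  # per-frame audio features (stand-ins)
video_view = rng.normal(size=(n, 8))  # per-frame visual features (stand-ins)

# Product-of-kernels fusion: two frames count as similar only if they are
# similar in *both* views, which suppresses interferences confined to one view
# (e.g. keyboard taps in the audio alone).
K = gaussian_kernel(audio_view, 3.0) * gaussian_kernel(video_view, 3.0)
print(K.shape)
```

The paper's contribution sits on top of this: choosing the bandwidth from a statistical model of point connectivity, since too small a bandwidth fragments the graph and too large a one lets single-view interferences leak into the fused affinities.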

  11. Pre-stimulus beta and gamma oscillatory power predicts perceived audiovisual simultaneity.

    Science.gov (United States)

    Yuan, Xiangyong; Li, Haijiang; Liu, Peiduo; Yuan, Hong; Huang, Xiting

    2016-09-01

Pre-stimulus oscillatory activity in the brain fluctuates continuously, yet it is correlated with subsequent behavioral and perceptual performance. Here, using fast Fourier transformation of pre-stimulus electroencephalograms, we explored how oscillatory power modulates the subsequent discrimination of perceived simultaneity from non-simultaneity in the audiovisual domain. We found that the over-scalp high beta (20-28 Hz), parieto-occipital low beta (14-20 Hz), and high gamma (55-80 Hz) oscillations were significantly stronger before audition-then-vision sequences when they were judged as simultaneous rather than non-simultaneous. In contrast, a broad range of oscillations, mainly in the beta and gamma bands over a great part of the scalp, were significantly weaker before vision-then-audition sequences when they were judged as simultaneous versus non-simultaneous. Moreover, for auditory-leading sequences, pre-stimulus beta and gamma oscillatory power successfully predicted subjects' reports of simultaneity on a trial-by-trial basis, with stronger activity resulting in more simultaneous judgments. These results indicate that ongoing fluctuations of beta and gamma oscillations can modulate subsequently perceived audiovisual simultaneity, but with an opposing pattern for auditory- and visual-leading sequences.
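The core measurement here, band-limited power from the FFT of a pre-stimulus window, can be sketched as follows. This is a minimal illustration with a synthetic one-second epoch; the sampling rate and the exact band edges are assumptions, not the study's recording parameters.

```python
import numpy as np

def band_power(epoch, fs, f_lo, f_hi):
    """Mean FFT power of a 1-D pre-stimulus EEG epoch in [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

fs = 500  # Hz, an assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
epoch = np.sin(2 * np.pi * 24 * t)  # synthetic high-beta (24 Hz) oscillation
# power concentrates in the 20-28 Hz band, not in the 55-80 Hz gamma band
assert band_power(epoch, fs, 20, 28) > band_power(epoch, fs, 55, 80)
```

In a trial-by-trial analysis such as the one reported, this value would be computed per electrode and per pre-stimulus epoch, then related to the subsequent simultaneity judgment.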

  12. Audiovisual Stimulation Modulates Physical Performance and Biochemical and Hormonal Status of Athletes.

    Science.gov (United States)

    Golovin, M S; Golovin, M S; Aizman, R I

    2016-09-01

We studied the effect of an audiovisual stimulation training course on physical development, functional state of the cardiovascular system, blood biochemical parameters, and hormonal status of athletes. The training course led to improvement of physical performance and adaptive capacities of the circulatory system, an increase in plasma levels of total protein, albumin, and glucose and in total antioxidant activity, and a decrease in triglycerides, lipase, total bilirubin, calcium, and phosphorus. The concentrations of hormones (cortisol, thyrotropin, triiodothyronine, and thyroxine) also decreased under these conditions. In the control group, an increase in the concentrations of creatinine and uric acid and a tendency toward elevation of low-density lipoproteins and total antioxidant activity were observed in the absence of changes in cardiac function and physical performance, while calcium and phosphorus concentrations decreased. The improvement in the functional state of athletes was mainly associated with intensification of anabolic processes and suppression of catabolic reactions after audiovisual stimulation (in comparison with the control). Stimulation was followed by an increase in the number of correlations between biochemical and hormonal changes and physical performance of athletes, which attested to better integration of processes at the intersystem level.

  13. Sensorimotor cortical response during motion reflecting audiovisual stimulation: evidence from fractal EEG analysis.

    Science.gov (United States)

    Hadjidimitriou, S; Zacharakis, A; Doulgeris, P; Panoulas, K; Hadjileontiadis, L; Panas, S

    2010-06-01

Sensorimotor activity in response to motion reflecting audiovisual stimulation is studied in this article. EEG recordings, and especially the Mu-rhythm over the sensorimotor cortex (C3, CZ, and C4 electrodes), were acquired and explored. An experiment was designed to provide auditory (Modest Mussorgsky's "Promenade" theme) and visual (synchronized human figure walking) stimuli to advanced music students (AMS) and non-musicians (NM) as a control subject group. EEG signals were analyzed using fractal dimension (FD) estimation (Higuchi's, Katz's, and Petrosian's algorithms) and statistical methods. Experimental results from the midline electrode (CZ) based on the Higuchi method showed significant differences between the AMS and the NM groups, with the former displaying a substantial sensorimotor response during auditory stimulation and a stronger correlation with the acoustic stimulus than the latter. This observation was linked to mirror neuron system activity, a neurological mechanism that allows trained musicians to detect action-related meanings underlying the structural patterns in musical excerpts. In contrast, the responses of the AMS and NM groups converged during audiovisual stimulation due to the dominant presence of human-like motion in the visual stimulus. These findings shed light on aspects of music perception, exhibiting the potential of FD to respond to different states of cortical activity.
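Higuchi's algorithm, one of the three FD estimators named in this abstract, reduces to computing an average normalized curve length at several scales and fitting a slope. The sketch below is a standard self-contained implementation with synthetic sanity checks; the choice of `k_max` is an assumption for illustration, not the study's setting.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension: the average normalized curve length
    L(k) is computed at scales k = 1..k_max, and the FD is the slope
    of log L(k) against log(1/k).  Assumes k_max << len(x)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):  # one coarse-grained curve per offset m
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            lk.append(dist * (n - 1) / ((idx.size - 1) * k * k))
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

# sanity checks: a straight line has FD ~ 1, white noise has FD ~ 2
print(higuchi_fd(np.linspace(0.0, 1.0, 1000)))                 # ~ 1.0
print(higuchi_fd(np.random.default_rng(1).normal(size=1000)))  # ~ 2.0
```

A stronger Mu-rhythm desynchronization makes the signal more irregular, which such an estimator registers as a higher FD over the sensorimotor electrodes.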

  14. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot.

    Science.gov (United States)

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2014-01-01

Advancement in brain computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by providing combined multisensory feedback to the BCI user.

  15. Modulation of visual responses in the superior temporal sulcus by audio-visual congruency.

    Science.gov (United States)

    Dahl, Christoph D; Logothetis, Nikos K; Kayser, Christoph

    2010-01-01

    Our ability to identify or recognize visual objects is often enhanced by evidence provided by other sensory modalities. Yet, where and how visual object processing benefits from the information received by the other senses remains unclear. One candidate region is the temporal lobe, which features neural representations of visual objects, and in which previous studies have provided evidence for multisensory influences on neural responses. In the present study we directly tested whether visual representations in the lower bank of the superior temporal sulcus (STS) benefit from acoustic information. To this end, we recorded neural responses in alert monkeys passively watching audio-visual scenes, and quantified the impact of simultaneously presented sounds on responses elicited by the presentation of naturalistic visual scenes. Using methods of stimulus decoding and information theory, we then asked whether the responses of STS neurons become more reliable and informative in multisensory contexts. Our results demonstrate that STS neurons are indeed sensitive to the modality composition of the sensory stimulus. Importantly, information provided by STS neurons' responses about the particular visual stimulus being presented was highest during congruent audio-visual and unimodal visual stimulation, but was reduced during incongruent bimodal stimulation. Together, these findings demonstrate that higher visual representations in the STS not only convey information about the visual input but also depend on the acoustic context of a visual scene.

  16. Modulation of visual responses in the superior temporal sulcus by audio-visual congruency

    Directory of Open Access Journals (Sweden)

    Christoph Dahl

    2010-04-01

Full Text Available Our ability to identify or recognize visual objects is often enhanced by evidence provided by other sensory modalities. Yet, where and how visual object processing benefits from the information received by the other senses remains unclear. One candidate region is the temporal lobe, which features neural representations of visual objects, and in which previous studies have provided evidence for multisensory influences on neural responses. In the present study we directly tested whether visual representations in the lower bank of the superior temporal sulcus (STS) benefit from acoustic information. To this end, we recorded neural responses in alert monkeys passively watching audio-visual scenes, and quantified the impact of simultaneously presented sounds on responses elicited by the presentation of naturalistic visual scenes. Using methods of stimulus decoding and information theory, we then asked whether the responses of STS neurons become more reliable and informative in multisensory contexts. Our results demonstrate that STS neurons are indeed sensitive to the modality composition of the sensory stimulus. Importantly, information provided by STS neurons’ responses about the particular visual stimulus being presented was highest during congruent audio-visual and unimodal visual stimulation, but was reduced during incongruent bimodal stimulation. Together, these findings demonstrate that higher visual representations in the STS not only convey information about the visual input but also depend on the acoustic context of a visual scene.

  17. The third language: A recurrent textual restriction that translators come across in audiovisual translation.

    Directory of Open Access Journals (Sweden)

    Montse Corrius Gimbert

    2005-01-01

Full Text Available If the process of translating is not at all simple, the process of translating an audiovisual text is still more complex. Apart from technical problems such as lip synchronisation, there are other factors to be considered, such as the use of the language and the textual structures deemed appropriate to the channel of communication. Bearing in mind that most of the films we are continually seeing on our screens were and are produced in the United States, there is an increasing need to translate them into the different languages of the world. But sometimes the source audiovisual text contains more than one language, and thus a new problem arises: the translators face additional difficulties in translating this “third language” (language or dialect) into the corresponding target culture. There are many films containing two languages in the original version, but in this paper we will focus mainly on three films: Butch Cassidy and the Sundance Kid (1969), Raid on Rommel (1999) and Blade Runner (1982). This paper aims at briefly illustrating different solutions which may be applied when we come across a “third language”.

  18. A comprehensive model of audiovisual perception: both percept and temporal dynamics.

    Directory of Open Access Journals (Sweden)

    Patricia Besson

Full Text Available The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be resolved by jointly processing the information, as well as by introducing constraints, during this process, on the way this multisensory information is handled. This process and its result--the percept--depend on the contextual conditions in which perception takes place. To date, perception has been investigated and modeled on the basis of either one of two of its dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both of these dimensions and thus capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus-driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.
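As background to models of this kind, the textbook Bayesian account of audiovisual fusion weighs each cue by its reliability. The sketch below shows that reliability-weighted (maximum-likelihood) combination for two Gaussian location cues; it is a generic illustration of the principle, not the Bayesian network the authors elicit, and all numbers are made up.

```python
def fuse_gaussian_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted (maximum-likelihood) fusion of two Gaussian
    location estimates: each cue is weighted by its inverse variance,
    and the fused estimate is more precise than either cue alone."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# the visual cue is 4x more reliable, so the percept is drawn toward it
mu, var = fuse_gaussian_cues(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
print(mu, var)  # 2.0 0.8
```

A fusion-versus-non-fusion decision, as in the model above, amounts to asking whether the two cues are likely to share a common cause before applying such a combination.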

  19. Alianza Efectiva Familia-Escuela: Un Programa Audiovisual Para Padres Effective Family-School Alliance: An Audiovisual Program for Parents

    Directory of Open Access Journals (Sweden)

    Lidia Alcalay

    2005-11-01

Full Text Available The purpose of this article is to identify and describe some of the variables that are considered essential in order to promote an effective alliance between family and school. These variables were considered in the development of an educational program that includes a video and a set of activities designed to be used with parents in the school context. The different contents of the program were elaborated in such a way as to expand parents' perspective with respect to their role in their children's education, as well as to question and enrich their integration into the school system. Within this context, the educational program is oriented to increase parental competences so as to establish a more effective alliance with the school system which, in turn, will have a positive effect on the social, emotional and cognitive development of the child.

20. UN REPTE DE LA (SOCIO)LINGÜÍSTICA APLICADA: EL MODEL DE LLENGUA COL·LOQUIAL PER A LA COMUNICACIÓ AUDIOVISUAL

    Directory of Open Access Journals (Sweden)

    Josep Angel Mas i Castells

    2007-05-01

Full Text Available The variety of situations in which language is used in audiovisual communication makes the division between formal and informal register, to which language-orientation materials are usually limited, inadequate. This inadequacy is particularly evident in informal situations, which span a long continuum ranging, for example, from the interview of a celebrity to the speech design for an illiterate character in the script of a fiction series. Sociolinguistic contributions, including both variationism and the analysis of linguistic attitudes, are crucial when formulating proposals for a language model. It should be stressed that the importance of these materials is paramount in contexts of linguistic conflict, such as that of the Catalan language community. This applies even more to the Valencian communicative space, the central focus of this article, where the ideological connotations attached to certain variants can hinder any proposal.

  1. When Library and Archival Science Methods Converge and Diverge: KAUST’s Multi-Disciplinary Approach to the Management of its Audiovisual Heritage

    KAUST Repository

    Kenosi, Lekoko

    2015-07-16

Libraries and Archives have long recognized the important role played by audiovisual records in the development of an informed global citizen, and the King Abdullah University of Science and Technology (KAUST) is no exception. Lying on the banks of the Red Sea, KAUST has a state-of-the-art library housing professional library and archives teams committed to the processing of digital audiovisual records created within and outside the University. This commitment, however, sometimes obscures the fundamental divergences between the two disciplines on the acquisition, cataloguing, access and long-term preservation of audiovisual records. This dichotomy is not isolated to KAUST but replicates itself in many settings that have employed Librarians and Archivists to manage their audiovisual collections. Using the KAUST audiovisual collections as a case study, the authors of this paper will take the reader through the journey of managing KAUST’s digital audiovisual collection. Several theoretical and methodological areas of convergence and divergence will be highlighted, as well as suggestions on the way forward for the IFLA and ICA working committees on the management of audiovisual records.

  2. Language Practice with Multimedia Supported Web-Based Grammar Revision Material

    Science.gov (United States)

    Baturay, Meltem Huri; Daloglu, Aysegul; Yildirim, Soner

    2010-01-01

    The aim of this study was to investigate the perceptions of elementary-level English language learners towards web-based, multimedia-annotated grammar learning. WEBGRAM, a system designed to provide supplementary web-based grammar revision material, uses audio-visual aids to enrich the contextual presentation of grammar and allows learners to…

  3. Lenguaje audiovisual y lenguaje escolar: dos cosmovisiones en la estructuración lingüística del niño Audiovisual language and school language: two cosmo-visions in the structuring of children linguistics

    Directory of Open Access Journals (Sweden)

    Lirian Astrid Ciro

    2007-06-01

    Full Text Available En el presente texto se pretende analizar la compleja red relacional existente entre el lenguaje audiovisual (partiendo de la televisión como uno de sus soportes y el lenguaje escolar, para vislumbrar sus efectos en el lenguaje infantil. La idea es mostrar el lenguaje audiovisual como un mecanismo potencialmente educativo, por cuanto es una forma de resignificar el mundo y de socialización lingüística; tal característica hace necesario entablar una relación estratégica entre él y el lenguaje escolar. De este modo, el lenguaje infantil se instaura como un punto intermedio en donde confluyen esos distintos lenguajes, y permite al niño tener cosmovisiones abiertas y flexibles de diversas realidades. Todo esto llevará a la configuración de seres creativos, novedosos y atentos a escuchar opciones... a la estructuración de una nueva sociedad, en donde la multiplicidad de códigos (entendidos como sistemas de simbolización vayan haciendo más fácil la expresión de lo que se es y se quiere ser.This paper analyzes the complex relationship between audiovisual language (TV being one of its main supports and school language in order to observe their effects on child language. In this way, audiovisual language is a potentially educational mechanism because it is both a new way of resignifying the world and a mechanism of linguistic socialization. Hence, it is necessary to establish a strategic relationship between audiovisual language and school language. In this way, child language is an intermediate point between these two languages and it allows the child to have open and flexible views of different realities and to be willing to weigh options. In short, it is the structuring of a new society where multiplicity of codes will contribute to facilitating free expression.

  4. Exploring determinants of early user acceptance for an audio-visual heritage archive service using the vignette method

    NARCIS (Netherlands)

    Ongena, Guido; Wijngaert, van de Lidwien; Huizer, E.

    2013-01-01

    The purpose of this study is to investigate factors, which explain the behavioural intention of the use of a new audio-visual cultural heritage archive service. An online survey in combination with a factorial survey is utilised to investigate the predictable strength of technological, individual an

  5. Undifferentiated Facial Electromyography Responses to Dynamic, Audio-Visual Emotion Displays in Individuals with Autism Spectrum Disorders

    Science.gov (United States)

    Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.

    2013-01-01

    We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…

  6. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip-movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur compared to the audio-visual no blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  7. Audiovisual infotainment in European news: A comparative content analysis of Dutch, Spanish, and Irish television news programs

    NARCIS (Netherlands)

    A. Paz Alencar (Amanda); S. Kruikemeier (Sanne)

    2016-01-01

This study investigates to what extent audiovisual infotainment features can be found in the narrative structure of television news in three European countries. Content analysis included a sample of 639 news reports aired in the first 3 weeks of September 2013, in six prime-time TV n

  8. Audiovisual infotainment in European news: a comparative content analysis of Dutch, Spanish and Irish television news programs

    NARCIS (Netherlands)

    Alencar, A.; Kruikemeier, S.

    2015-01-01

This study investigates to what extent audiovisual infotainment features can be found in the narrative structure of television news in three European countries. Content analysis included a sample of 639 news reports (or reporter packages) aired in the first three weeks of September 2013, in six

  9. Audiovisual distraction as a useful adjunct to epidural anesthesia and sedation for prolonged lower limb microvascular orthoplastic surgery.

    Science.gov (United States)

    Athanassoglou, Vassilis; Wallis, Anna; Galitzine, Svetlana

    2015-11-01

Lower limb orthopedic operations are frequently performed under regional anesthesia, which allows avoidance of potential side effects and complications of general anesthesia and sedation. Often, though, patients feel anxious about being awake during operations. To decrease intraoperative anxiety, we use multimedia equipment consisting of a tablet device, noise-canceling headphones, and a makeshift frame, where patients can listen to music, watch movies, or occupy themselves in numerous ways. These techniques have been extensively studied in minimally invasive, short, or minor procedures but not in prolonged orthoplastic operations. We report 2 cases where audiovisual distraction was successfully applied to 9.5-hour procedures, proved to be a very useful adjunct to epidural anesthesia + sedation, and made an important contribution to positive patient outcomes and to the patients' overall experience with regional anesthesia for complex limb reconstructive surgery. In an era when not only patients' safety and clinical outcomes but also patients' positive experiences are of paramount importance, audiovisual distraction may provide a simple tool to help improve the experience of appropriately informed patients undergoing suitable procedures under regional anesthesia. The anesthetic technique received a very positive appraisal from both patients and encouraged us to study further the impact of modern audiovisual technology on anxiolysis for major surgery under regional anesthesia. The duration of surgery per se is not a contraindication to the use of audiovisual distraction. The absolute proviso for successful application of this technique to major surgery is effective regional anesthesia and good teamwork between the clinicians and the patients.

  10. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... creation (see also 36 CFR part 1235). See § 1235.42 of this subchapter for specifications and standards for... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT...

  11. La conformación del canon literario costarricense: observaciones a partir de la producción audiovisual

    Directory of Open Access Journals (Sweden)

    Bernardo Bolaños Esquivel

    2013-08-01

Full Text Available This study analyzes the relations between works belonging to the Costa Rican literary canon and the factors which have taken them to an audiovisual format. Once the history of those relations is described, mention is made of the determining factors that link those works and audiovisual production. These factors include closeness to an idyllic image of the nation, the authors' affinity with political power and, in general, criteria of commercial suitability.

  12. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

Full Text Available This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard and high definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high definition video, the model predictions show a correlation of 95% with subjective ratings that were unknown during model development. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both the quality-based and the impairment-factor-based models are further refined by taking the content type into account. Finally, the different model variants are compared with modeling approaches described in the literature.
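The "more classical" audiovisual model mentioned at the end, based on the audio and video qualities and their interaction, is conventionally written as a regression with an interaction term. The sketch below uses placeholder coefficients chosen only for illustration; the actual values are fitted to subjective tests and are not given in this abstract.

```python
def audiovisual_mos(mos_a, mos_v, a=0.2, b=0.1, c=0.5, d=0.06):
    """Classical interaction model: MOS_av = a + b*MOS_a + c*MOS_v
    + d*MOS_a*MOS_v.  The coefficients are illustrative placeholders;
    in practice they are fitted per content type to subjective
    ratings.  Setting c > b encodes the predominance of video quality
    reported in the abstract."""
    return a + b * mos_a + c * mos_v + d * mos_a * mos_v

# degrading the video hurts overall quality more than degrading the audio
assert audiovisual_mos(4.5, 2.0) < audiovisual_mos(2.0, 4.5)
```

The impairment-factor variant replaces the fitted audio/video qualities with per-degradation terms subtracted from a maximum quality, which makes the model computable directly from bitstream or packet-header parameters.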

  13. Theological Media Literacy Education and Hermeneutic Analysis of Soviet Audiovisual Anti-Religious Media Texts in Students' Classroom

    Science.gov (United States)

    Fedorov, Alexander

    2015-01-01

This article presents the Russian approach to theological media literacy education and a hermeneutic analysis of specific examples of Soviet anti-religious audiovisual media texts: a study of the process of interpretation of these media texts and of the cultural and historical factors influencing the views of the media agency/authors. The hermeneutic analysis…

  14. The whole is more than the sum of its parts - Audiovisual processing of phonemes investigated with ERPs

    NARCIS (Netherlands)

    Hessler, Dorte; Jonkers, Roel; Stowe, Laurie; Bastiaanse, Roelien

    2013-01-01

    In the current ERP study, an active oddball task was carried out, testing pure tones and auditory, visual and audiovisual syllables. For pure tones, an MMN, an N2b, and a P3 were found, confirming traditional findings. Auditory syllables evoked an N2 and a P3. We found that the amplitude of the P3 d

  15. Audiovisual en línea en la universidad española: bibliotecas y servicios especializados (una panorámica

    Directory of Open Access Journals (Sweden)

    Alfonso López Yepes

    2014-08-01

Full Text Available An overview of the situation of online audiovisual information in Spanish university libraries and audiovisual services, with examples of specific applications and developments. The presence of audiovisual content stands out mainly in blogs, IPTV channels, and the libraries' own portals, and in specific initiatives such as “La Universidad Responde”, run by the audiovisual services of Spanish universities, which is undoubtedly a notable frame of reference and of information dissemination for the library field as well; and in social networks, for which a model of a university library social network is proposed. Reference is made to the participation of libraries and services in collaborative projects of research and social development, a presence already effective within the project “Red iberoamericana de patrimonio sonoro y audiovisual”, which is committed to the social construction of audiovisual knowledge based on the interaction between different multidisciplinary groups of professionals and various communities of users and institutions.

  16. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

    The human brain naturally integrates audiovisual information to improve speech perception. In noisy environments, however, understanding speech is difficult and may require considerable effort. Although a distributed brain network is assumed to support speech perception, it is unclear how speech-related brain regions are connected during natural bimodal (audiovisual) speech perception or unimodal speech perception accompanied by irrelevant noise in the other modality. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework based on hierarchical clustering (single-linkage distance) to analyze the connected components of the functional network during speech perception measured with functional magnetic resonance imaging. Fifteen subjects received bimodal (audiovisual) speech cues or unimodal speech cues paired with irrelevant noise in the counterpart modality (auditory white noise or visual gum-chewing). For positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. During perception of congruent audiovisual stimuli, however, tighter couplings were observed in a left anterior temporal gyrus-anterior insula component and a right premotor-visual component than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech perceived under white noise showed tight negative coupling among the left inferior frontal region, right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly connected, positively or negatively, and can reflect efficient or effortful processing during natural audiovisual integration or lip-reading, respectively, in speech perception.
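The 0-dimensional part of such a filtration can be sketched with a union-find pass over edges sorted by distance: components merge as the threshold grows. The snippet below is a toy illustration; the region labels and distance matrix are invented, not the study's data.

```python
# Toy sketch of a 0-dimensional persistent-homology filtration on a
# functional network: edges enter in order of increasing distance and a
# union-find structure tracks when connected components merge.
# Region labels and distances are hypothetical, not the study's data.

def component_filtration(dist, nodes):
    """Return (threshold, n_components) pairs as edges are added in
    increasing-distance order (single-linkage behaviour)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Sort all unique edges by distance: this is the filtration order.
    edges = sorted(
        (dist[i][j], nodes[i], nodes[j])
        for i in range(len(nodes))
        for j in range(i + 1, len(nodes))
    )
    n_comp = len(nodes)
    trace = [(0.0, n_comp)]
    for d, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:            # two components merge at threshold d
            parent[ra] = rb
            n_comp -= 1
            trace.append((d, n_comp))
    return trace

# Two tightly coupled pairs that only join at a high threshold.
nodes = ["lATG", "aINS", "PMC", "VIS"]
dist = [
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.7, 0.9],
    [0.9, 0.7, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
]
print(component_filtration(dist, nodes))
# → [(0.0, 4), (0.2, 3), (0.3, 2), (0.7, 1)]
```

The merge thresholds in the trace are exactly the single-linkage dendrogram heights, which is why hierarchical clustering and 0-dimensional persistence coincide here.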

  17. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
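The N1 measures mentioned above are conventionally read off the averaged waveform as the most negative deflection in an early time window. The following is a generic sketch with a fabricated waveform, not the study's data or analysis pipeline.

```python
# Generic sketch of extracting N1 latency and amplitude from an averaged
# ERP: take the most negative value within a typical N1 window (~80-150 ms).
# The waveform below is fabricated for illustration.

def n1_peak(erp, times_ms, window=(80, 150)):
    """Return (latency_ms, amplitude) of the minimum within the N1 window."""
    idx = [i for i, t in enumerate(times_ms) if window[0] <= t <= window[1]]
    peak = min(idx, key=lambda i: erp[i])
    return times_ms[peak], erp[peak]

times = list(range(0, 300, 10))                    # 0..290 ms in 10 ms steps
erp = [0.0] * 8 + [-1.0, -3.5, -2.0] + [0.5] * 19  # negative dip near 90 ms
print(n1_peak(erp, times))  # → (90, -3.5)
```

A "prolonged N1 latency" or "reduced N1 amplitude" in a patient group then corresponds to a later or less negative peak returned by a measure like this.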

  18. Neural systems underlying British Sign Language and audio-visual English processing in native users.

    Science.gov (United States)

    MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; McGuire, Philip K; David, Anthony S; Williams, Steven C R; Suckling, John; Calvert, Gemma A; Brammer, Michael J

    2002-07-01

    In order to understand the evolution of human language, it is necessary to explore the neural systems that support language processing in its many forms. In particular, it is informative to separate those mechanisms that may have evolved for sensory processing (hearing) from those that have evolved to represent events and actions symbolically (language). To what extent are the brain systems that support language processing shaped by auditory experience and to what extent by exposure to language, which may not necessarily be acoustically structured? In this first neuroimaging study of the perception of British Sign Language (BSL), we explored these questions by measuring brain activation using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task. Eight hearing, non-signing subjects performed an analogous task that involved audio-visual English sentences. The data support the argument that there are both modality-independent and modality-dependent language localization patterns in native users. In relation to modality-independent patterns, regions activated by both BSL in deaf signers and by spoken English in hearing non-signers included inferior prefrontal regions bilaterally (including Broca's area) and superior temporal regions bilaterally (including Wernicke's area). Lateralization patterns were similar for the two languages. There was no evidence of enhanced right-hemisphere recruitment for BSL processing in comparison with audio-visual English. In relation to modality-specific patterns, audio-visual speech in hearing subjects generated greater activation in the primary and secondary auditory cortices than BSL in deaf signers, whereas BSL generated enhanced activation in the posterior occipito-temporal regions (V5), reflecting the greater movement component of BSL. The influence of hearing status on the recruitment of sign language processing systems was explored by comparing deaf

  19. Using audiovisual media to investigate the reality of resistant students

    Directory of Open Access Journals (Sweden)

    Josep Jorba Vidal

    2009-01-01

    Full Text Available This article presents a research experience that brings the sociology of education and the audiovisual world closer together. Starting from the theoretical and methodological bases of theories of school resistance (those that focus on students who neither recognize nor accept school norms and values), it proposes to collect, through ethnographic techniques, the discourse, experiences and tastes of students using audiovisual material (photographs, music, video, etc.); to analyze, edit and structure this information in the form of text and video; and finally to present the results within the structure of a fotolog (one of the computer-mediated communication tools most widespread among young people, although largely unknown to secondary school teachers and social researchers).

  20. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    importance of each modality with respect to the overall quality evaluation. The results show that this was not due to specific interactions between stimuli but rather because the auditory modality dominated over the visual modality. Furthermore, for all experiments where less than optimal stimuli...

  1. Fiction series and video games: Transmedia and gamification in Contemporary Audiovisual discourses

    Directory of Open Access Journals (Sweden)

    Francisco Julián Martínez Cano

    2016-05-01

    Full Text Available The connection between fiction series and video games is evident, and this relationship is driving new strategies for generating narratives in a transmedia context. In this article we review the connection between the two media through current productions, with the intention of identifying the contributions of the video game to serial narrative universes. Within transmedia narrative discourses, strategies have usually migrated from the TV series to the video game. This tendency is now reversing, giving rise to titles that draw on the video game as the primary source for their storylines. Identifying the contributions of the video game to the transmedia ecosystem of contemporary audiovisual entertainment products offers the chance to generate more attractive and innovative designs for the audience.

  2. La imagen que piensa. Hacia una definición del ensayo audiovisual

    OpenAIRE

    García-Martínez, A.N. (Alberto Nahum)

    2006-01-01

    This article analyzes the essential keys (historical, rhetorical and generic) that characterize a type of audiovisual text with little cinematic tradition behind it. The film essay proposes a personal, unsystematic discourse that, through elements such as a marked style, a visible montage that privileges the word, or the filmic insertion of the author, builds its reflection in the images themselves, thus representing the path of thought...

  3. One Country, Two Polarised Audiences: Estonia and the Deficiency of the Audiovisual Media Services Directive

    Directory of Open Access Journals (Sweden)

    Andres Jõesaar

    2015-12-01

    Full Text Available This article argues that until recently, Estonian media policy has mainly been treated as an economic issue and has not accounted for the strategic need to build a comprehensive media field serving all groups in society. This has happened despite the fact that Estonian media policy is in line with European Union (EU) media policy, which should ensure freedom of information, diversity of opinion and media pluralism. Findings of the Estonian case study show that despite these noble aims, Estonia has two radically different information fields: one for Estonian-speaking audiences and one for Russian speakers. Events in Ukraine have added a question of national security to the democratic media policy paradigm. The challenge for policy makers now is how to unite the polarised media fields and minimise the impact of Russian propaganda. At the EU level, one supportive measure could be a revision of the Audiovisual Media Services Directive.

  4. Integrating Audio-Visual Features and Text Information for Story Segmentation of News Video

    Institute of Scientific and Technical Information of China (English)

    Liu Hua-yong; Zhou Dong-ru

    2003-01-01

    Video data are composed of multimodal information streams including visual, auditory and textual streams, so an approach to story segmentation for news video using multimodal analysis is described in this paper. The proposed approach detects the topic-caption frames and integrates them with silence-clip detection results, as well as shot segmentation results, to locate the news story boundaries. The integration of audio-visual features and text information overcomes the weakness of approaches using only image analysis techniques. On test data with 135,400 frames, an accuracy rate of 85.8% and a recall rate of 97.5% are obtained when detecting the boundaries between news stories. The experimental results show the approach is valid and robust.
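The cue-fusion step described above can be sketched as a simple coincidence test: a shot cut counts as a story boundary only when audio and caption cues agree within a tolerance. The frame numbers, tolerance, and cue lists below are invented for illustration and are not the paper's data or algorithm.

```python
# Illustrative fusion of cue streams for news story segmentation: a shot
# boundary is promoted to a story boundary when both a silence clip and a
# topic-caption frame fall within a small frame tolerance of it.
# All frame numbers and the tolerance are hypothetical.

def fuse_boundaries(shot_cuts, silences, caption_frames, tol=25):
    """Return shot cuts supported by both audio-silence and caption cues."""
    def near(frame, cues):
        return any(abs(frame - c) <= tol for c in cues)
    return [f for f in shot_cuts
            if near(f, silences) and near(f, caption_frames)]

shot_cuts = [300, 1250, 2600, 4100]  # all detected shot boundaries
silences = [310, 2590, 4090]         # silence-clip centre frames
captions = [295, 1600, 2605, 4120]   # detected topic-caption frames
print(fuse_boundaries(shot_cuts, silences, captions))  # → [300, 2600, 4100]
```

The cut at frame 1250 is rejected because only the caption stream (loosely) supports it, which is how fusing modalities suppresses false boundaries that image analysis alone would accept.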

  5. Processing of audiovisual associations in the human brain: dependency on expectations and rule complexity

    Directory of Open Access Journals (Sweden)

    Riikka Lindström

    2012-05-01

    Full Text Available In order to respond appropriately to environmental changes, the human brain must not only detect those changes but also form expectations of forthcoming events. Events in the external environment often have a number of multisensory features, such as pitch and form. Integrated percepts of objects and events require crossmodal processing and crossmodally induced expectations of forthcoming events. The aim of the present study was to determine whether expectations created by visual stimuli can modulate deviance detection in the auditory modality, as reflected by auditory event-related potentials (ERPs). Additionally, we studied whether the complexity of the rules linking auditory and visual stimuli affects this process. The N2 deflection of the ERP was observed in response to violations of the subjects' expectation of a forthcoming tone. Both temporal aspects and cognitive demands during the audiovisual deviance detection task modulated the brain processes involved.
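The kind of crossmodal rule used in such paradigms can be sketched as a mapping from a visual cue to an expected tone, with deviants defined as violations of that mapping. The rule and trial list below are invented for illustration; they are not the study's stimuli.

```python
# Hypothetical audiovisual rule: a visual symbol predicts the pitch of the
# upcoming tone, and trials where the tone violates the crossmodal
# expectation are the "deviants" expected to elicit an N2 deflection.

RULE = {"circle": "high", "square": "low"}  # invented crossmodal mapping

def mark_deviants(trials):
    """Return indices of trials whose tone violates the visual prediction."""
    return [i for i, (visual, tone) in enumerate(trials)
            if RULE[visual] != tone]

trials = [("circle", "high"),
          ("square", "low"),
          ("circle", "low"),    # violation: circle predicted a high tone
          ("square", "low"),
          ("square", "high")]   # violation: square predicted a low tone
print(mark_deviants(trials))  # → [2, 4]
```

A more complex rule (e.g. one conditioned on the previous trial) would change only the predicate inside `mark_deviants`, which is the sense in which rule complexity can be varied independently of the stimuli.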

  6. The ontogenetic origins of mirror neurons: evidence from 'tool-use' and 'audiovisual' mirror neurons.

    Science.gov (United States)

    Cook, Richard

    2012-10-23

    Since their discovery, mirror neurons--units in the macaque brain that discharge both during action observation and execution--have attracted considerable interest. Whether mirror neurons are an innate endowment or acquire their sensorimotor matching properties ontogenetically has been the subject of intense debate. It is widely believed that these units are an innate trait; that we are born with a set of mature mirror neurons because their matching properties conveyed upon our ancestors an evolutionary advantage. However, an alternative view is that mirror neurons acquire their matching properties during ontogeny, through correlated experience of observing and performing actions. The present article re-examines frequently overlooked neurophysiological reports of 'tool-use' and 'audiovisual' mirror neurons within the context of this debate. It is argued that these findings represent compelling evidence that mirror neurons are a product of sensorimotor experience, and not an innate endowment.

  7. Audiovisual physics reports: students' video production as a strategy for the didactic laboratory

    Science.gov (United States)

    Vinicius Pereira, Marcus; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; Fauth, Leduc Hermeto de A.

    2012-01-01

    Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can act as a motivating factor, making them active and reflective in their learning and intellectually engaged in a recursive process. This project was implemented in high-school-level physics laboratory classes, resulting in 22 videos that are treated as audiovisual reports and analysed in terms of two components: theoretical and experimental. This kind of project allows students to spontaneously use features such as music, pictures, dramatization and animations, even though the didactic laboratory is not usually a place where aesthetic and cultural dimensions are developed. This could be because digital media are more legitimately used as cultural tools than as teaching strategies.

  8. Designing online audiovisual heritage services: an empirical study of two comparable online video services

    Science.gov (United States)

    Ongena, G.; van de Wijngaert, L. A. L.; Huizer, E.

    2013-03-01

    The purpose of this study is to seek input for a new online audiovisual heritage service. To this end, we assess comparable online video services to gain insights into users' motivations and the perceived innovation characteristics of those services. The research is based on data from a Dutch survey of 1,939 online video service users. The results show that the online video services share overlapping antecedents but differ in motivations and in perceived innovation characteristics. In general, then, comparable online video services satisfy different needs, which implies that online video services can be designed for different needs. In addition to scientific implications, the outcomes also provide guidance for practitioners implementing new online video services.

  9. [Current audiovisual technologies are a constituent of the continuing professional development concept].

    Science.gov (United States)

    Bezrukova, E Iu; Zatsepa, S A

    2009-01-01

    The paper addresses topical problems in the use of innovative information and communication technologies (ICT) in the higher medical education system, including postgraduate professional education. It outlines the key principles for organizing an audiovisual technology-based educational process, gives numerous practical examples of the real-world use of ICT in the education of medical and other specialists, and reports the results of studies on applying current technical aids in innovative professional education. Since each area of training has its own specificity and goals, the authors propose highly effective ways of organizing the educational process that fully take into account the specific features of professional education. These technologies substantially expand access to educational resources, which is of great importance for a strategy of continuing professional development.

  10. Gone in a Flash: Manipulation of Audiovisual Temporal Integration Using Transcranial Magnetic Stimulation

    Directory of Open Access Journals (Sweden)

    Roy Hamilton

    2013-09-01

    Full Text Available While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke, Vieth, Cottrell, and Mattingley (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams, et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.
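Susceptibility to the illusion can be scored, in simplified form, as the mean number of flashes reported on one-flash/two-beep trials. The trial data below are fabricated, and this plain mean is only a stand-in for the study's PPF measure.

```python
# Simplified scoring of susceptibility to the sound-induced flash illusion:
# on one-flash/two-beep trials, reports above 1 reflect illusory extra
# flashes, so the mean report tracks susceptibility. Trial data are
# fabricated; this is a simplification, not the study's PPF computation.

def mean_reported(reports):
    """Mean number of flashes reported across trials."""
    return sum(reports) / len(reports)

pre_tms = [2, 2, 1, 2, 2, 1, 2, 2]   # mostly illusory double flashes
post_tms = [1, 1, 2, 1, 1, 1, 2, 1]  # reduced susceptibility after rTMS
print(mean_reported(pre_tms), mean_reported(post_tms))  # → 1.75 1.25
```

On this toy data, the drop from 1.75 to 1.25 reported flashes is the kind of "transient decrease" that would indicate a more veridical percept after stimulation.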

  11. Producció fotogràfica per a l'obra audiovisual Luna Moth

    OpenAIRE

    Tordera Nuño, Eva

    2013-01-01

    In the audiovisual work Luna Moth, space, time and people coincide in the old open-air theatre on the island of Ramsholmen. Luna Moth is a multidisciplinary work based on the cultural history of the Finnish town of Ekenäs and is the result of the collaboration of the artists living at the Villa Snäcksund residence. Luna Moth is the group's creation, but this final degree project develops its photographic production, carried out by Eva Tordera Nuño. Throughout the report, ...

  13. Literary Genres in Social Life: A Narrative, Audio-visual and Poetic Approach

    Directory of Open Access Journals (Sweden)

    Luis Felipe González Gutiérrez

    2008-05-01

    Full Text Available The proposal "Literary Genres in Social Life: a Narrative, Audio-visual and Poetic Approach" aims to present to the academic psychology community and related social science disciplines the main contributions of literary genre theory, through a social constructionist understanding of narrations and daily stories and by means of an interactive construction of narrative collage. This work, supported by a research project financed by the Universidad Santo Tomás in Bogotá, Colombia ("Understanding of structuralist literary theories in the development of the narrative 'I' within the social constructionist approach"), seeks to propose alternative spaces for the presentation of its research results through the expression of metaphors, visual narrative sequences and interactive artistic forms, which invite the spectator to share in and understand important concepts in the consolidation of social forms of constructing the quotidian. URN: urn:nbn:de:0114-fqs0802373

  14. A weapon for the future: the scientific use of audiovisual media in the teaching of Literature.

    Directory of Open Access Journals (Sweden)

    Yosdany Morejón Ortega

    2013-03-01

    Full Text Available Innovative teaching of literature to high school students has become a vital premise for achieving the formation of the new man: one able to understand the complex realities of today's world while enjoying the aesthetic pleasure of reading. This article was conceived in that spirit, taking as its benchmark the effectiveness of audiovisual media in fulfilling the academic training objectives expected at that level. The results were supported by scientific research previously presented as an option for the academic title of Master of Science in Education, whose validity still informs the actions on this subject in polytechnic computing. Given the breadth of the indicators analyzed, the topic can be generalized to other fields of scientific knowledge, always remembering that visual stimuli are powerful tools for motivation, concentration and the abstraction of knowledge.

  15. Fundamentos curriculares para una maestría en producción audiovisual

    Directory of Open Access Journals (Sweden)

    Maria Alvarado, Nerio Vílchez

    2008-01-01

    Full Text Available The study of the curricular foundations for a master's degree in audiovisual production draws on Vílchez (2005), Postner (2001), Maldonado (2001), Boluda (2005), Tünnermann (2005), Miranda (2003), Morles (2005) and Vaughan (2000), among others. The research is applied and descriptive, with a non-experimental, transactional descriptive field design. The population consisted of ten (10) directors of the universities of Maracaibo that offer undergraduate degrees in Social Communication as well as postgraduate studies, and five (5) curriculum experts.

  16. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images, as an attempt to increase noise robustness in mobile environments. The proposed method assumes that lip images can be captured by a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built using the multistream HMM technique. Experiments conducted on Japanese connected-digit speech contaminated with white noise at various SNRs show the effectiveness of the proposed method: recognition accuracy is improved by using the visual information in all SNR conditions. The visual features were confirmed to be effective even when the audio HMMs were adapted to noise by the MLLR method.
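At its core, the multistream HMM combination amounts to weighting per-stream log-likelihoods. The sketch below shows the idea with invented scores and a frame-wise argmax in place of full Viterbi decoding; it is not the paper's implementation.

```python
# Sketch of the multistream idea behind audio-visual HMMs: per-stream
# log-likelihoods are combined with weights lambda_a + lambda_v = 1, so the
# lip-derived visual stream can compensate when the audio is noisy.
# All scores and the frame-wise argmax "decoder" are illustrative only.

def multistream_loglik(audio_ll, visual_ll, lambda_a=0.7):
    """Weighted per-state combination of audio and visual log-likelihoods."""
    return lambda_a * audio_ll + (1.0 - lambda_a) * visual_ll

def decode(frames, lambda_a=0.7):
    """Pick, per frame, the phoneme with the highest combined score."""
    return [max(scores, key=lambda p: multistream_loglik(*scores[p], lambda_a))
            for scores in frames]  # scores: {phoneme: (audio_ll, visual_ll)}

# Under noise the audio scores barely separate /b/ and /p/, so a lower
# audio weight lets the visual stream disambiguate.
noisy_frame = {"b": (-5.1, -1.0), "p": (-5.0, -4.0)}
print(decode([noisy_frame], lambda_a=0.3))  # → ['b']
```

Lowering `lambda_a` as the SNR drops is the usual way such systems shift trust toward the visual stream.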

  18. The fragility of intergroup relations: divergent effects of delayed audiovisual feedback in intergroup and intragroup interaction.

    Science.gov (United States)

    Pearson, Adam R; West, Tessa V; Dovidio, John F; Powers, Stacie Renfro; Buck, Ross; Henning, Robert

    2008-12-01

    Intergroup interactions between racial or ethnic majority and minority groups are often stressful for members of both groups; however, the dynamic processes that promote or alleviate tension in intergroup interaction remain poorly understood. Here we identify a behavioral mechanism, response delay, that can uniquely contribute to anxiety and promote disengagement from intergroup contact. Minimally acquainted White, Black, and Latino participants engaged in intergroup or intragroup dyadic conversation either in real time or with a subtle temporal disruption (a 1-s delay) in audiovisual feedback. Whereas intergroup dyads reported greater anxiety and less interest in contact after engaging in delayed conversation than after engaging in real-time conversation, intragroup dyads reported less anxiety in the delay condition than after interacting in real time. These findings have theoretical and practical implications for understanding intergroup communication and social dynamics and for promoting positive intergroup contact.

  19. Gone in a flash: manipulation of audiovisual temporal integration using transcranial magnetic stimulation

    Science.gov (United States)

    Hamilton, Roy H.; Wiener, Martin; Drebing, Daniel E.; Coslett, H. Branch

    2013-01-01

    While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke et al. (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy. PMID:24062701

  1. Cortical Response Similarities Predict which Audiovisual Clips Individuals Viewed, but Are Unrelated to Clip Preference.

    Directory of Open Access Journals (Sweden)

    David A Bridwell

    Full Text Available Cortical responses to complex natural stimuli can be isolated by examining the relationship between neural measures obtained while multiple individuals view the same stimuli. These inter-subject correlations (ISCs) emerge from similarities in individuals' cortical responses to the shared audiovisual inputs, which may be related to their emergent cognitive and perceptual experience. Within the present study, our goal is to examine the utility of using ISCs for predicting which audiovisual clips individuals viewed, and to examine the relationship between neural responses to natural stimuli and subjective reports. The ability to predict which clips individuals viewed depends on the relationship of the EEG response across subjects and the manner in which this information is aggregated. We conceived of three approaches for aggregating responses, i.e. three assignment algorithms, which we evaluated in Experiment 1A. The aggregate correlations algorithm generated the highest assignment accuracy (70.83%; chance = 33.33%) and was selected as the assignment algorithm for the larger sample of individuals and clips within Experiment 1B. The overall assignment accuracy was 33.46% within Experiment 1B (chance = 6.25%), with accuracies ranging from 52.9% (Silver Linings Playbook) to 11.75% (Seinfeld) within individual clips. ISCs were significantly greater than zero for 15 out of 16 clips, and fluctuations within the delta frequency band (i.e. 0-4 Hz) primarily contributed to response similarities across subjects. Interestingly, there was insufficient evidence to indicate that individuals with greater similarities in clip preference demonstrate greater similarities in cortical responses, suggesting a lack of association between ISC and clip preference. Overall these results demonstrate the utility of using ISCs for prediction, and further characterize the relationship between ISC magnitudes and subjective reports.
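A correlation-based assignment of this kind can be sketched as correlating a held-out subject's response with each clip's group-average response and taking the argmax. The time courses below are tiny synthetic vectors, and this is only a guess at the general shape of such an algorithm, not the paper's "aggregate correlations" implementation.

```python
# Hedged sketch of correlation-based clip assignment: correlate a held-out
# subject's response with each candidate clip's group-average response and
# assign the best-matching clip. Synthetic vectors stand in for EEG data.

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def assign_clip(subject_resp, group_means):
    """Label of the clip whose group-average response correlates best."""
    return max(group_means,
               key=lambda c: pearson(subject_resp, group_means[c]))

group_means = {
    "clipA": [0.0, 1.0, 0.0, -1.0, 0.0, 1.0],
    "clipB": [1.0, 1.0, 0.0, 0.0, -1.0, -1.0],
}
held_out = [0.1, 0.9, -0.1, -0.8, 0.2, 1.1]  # noisy version of clipA
print(assign_clip(held_out, group_means))  # → clipA
```

Assignment accuracy is then just the fraction of held-out responses mapped back to the clip actually viewed, with chance equal to one over the number of candidate clips.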

  2. Cortical Response Similarities Predict which Audiovisual Clips Individuals Viewed, but Are Unrelated to Clip Preference.

    Science.gov (United States)

    Bridwell, David A; Roth, Cullen; Gupta, Cota Navin; Calhoun, Vince D

    2015-01-01

    Cortical responses to complex natural stimuli can be isolated by examining the relationship between neural measures obtained while multiple individuals view the same stimuli. These inter-subject correlations (ISCs) emerge from similarities in individuals' cortical responses to the shared audiovisual inputs, which may be related to their emergent cognitive and perceptual experience. Within the present study, our goal is to examine the utility of using ISCs for predicting which audiovisual clips individuals viewed, and to examine the relationship between neural responses to natural stimuli and subjective reports. The ability to predict which clips individuals viewed depends on the relationship of the EEG response across subjects and the manner in which this information is aggregated. We conceived of three approaches for aggregating responses, i.e. three assignment algorithms, which we evaluated in Experiment 1A. The aggregate-correlations algorithm generated the highest assignment accuracy (70.83%; chance = 33.33%) and was selected as the assignment algorithm for the larger sample of individuals and clips within Experiment 1B. The overall assignment accuracy was 33.46% within Experiment 1B (chance = 6.25%), with accuracies ranging from 52.9% (Silver Linings Playbook) to 11.75% (Seinfeld) within individual clips. ISCs were significantly greater than zero for 15 out of 16 clips, and fluctuations within the delta frequency band (i.e. 0-4 Hz) primarily contributed to response similarities across subjects. Interestingly, there was insufficient evidence to indicate that individuals with greater similarities in clip preference demonstrate greater similarities in cortical responses, suggesting a lack of association between ISC and clip preference. Overall, these results demonstrate the utility of using ISCs for prediction, and further characterize the relationship between ISC magnitudes and subjective reports.
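    The aggregate-correlations assignment described in the abstract can be sketched as follows. This is a toy simulation (random signals standing in for EEG time courses), not the authors' pipeline; the noise level, signal lengths, and leave-one-subject-out scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_clips, n_samples = 5, 3, 200

# Shared per-clip signals plus subject-specific noise (toy stand-in for EEG).
clips = rng.standard_normal((n_clips, n_samples))
responses = clips[None, :, :] + 1.5 * rng.standard_normal((n_subjects, n_clips, n_samples))

def assign_clip(test_response, reference_pool):
    """Pick the clip whose pooled reference responses correlate most with the
    held-out response (the aggregate-correlations idea)."""
    scores = []
    for c in range(reference_pool.shape[1]):
        rs = [np.corrcoef(test_response, ref)[0, 1] for ref in reference_pool[:, c, :]]
        scores.append(np.mean(rs))  # aggregate correlation across reference subjects
    return int(np.argmax(scores))

# Leave-one-subject-out assignment accuracy.
correct = 0
for s in range(n_subjects):
    pool = np.delete(responses, s, axis=0)  # all other subjects
    for c in range(n_clips):
        correct += assign_clip(responses[s, c], pool) == c
accuracy = correct / (n_subjects * n_clips)
print(f"assignment accuracy: {accuracy:.2f} (chance = {1 / n_clips:.2f})")
```

    With a shared signal this strong, the pooled correlation for the true clip dominates, so accuracy lands well above chance, mirroring the above-chance assignment reported in the study.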

  3. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. A. Karpov

    2014-09-01

    Full Text Available We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulation 3D model of a human's head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulation 3D model of a human's hands and upper body; and a multimodal user interface integrating all the components for generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), information fusion and its output in the form of multimedia information. A user can input any grammatically correct text in the Russian or Czech language to the system; it is analyzed by the text processor to detect sentences, words and characters. Then this textual information is converted into symbols of the sign language notation. We apply the international «Hamburg Notation System» (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On their basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of a human's head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is a universal one, since it is oriented toward both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (via audio and visual modalities) of input textual information.

  4. Naturalistic stimulus structure determines the integration of audiovisual looming signals in binocular rivalry.

    Directory of Open Access Journals (Sweden)

    Verena Conrad

    Full Text Available Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. the looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a 'simple' random-dot kinematogram showing a starfield and (2) a 'naturalistic' visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals. It is enhanced when audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli, especially when paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.

  5. Traduzione audiovisiva: teoria e pratica dell'adattamento Audiovisual translation: Theory and practice

    Directory of Open Access Journals (Sweden)

    Eleonora Fois

    2012-12-01

    Full Text Available Audiovisual translation is a relatively new and still largely unexplored branch of translation studies. Despite being the main means of enjoying big-screen and television products, adaptation still arouses suspicion among scholars and critics, who do not even agree on its scientific definition. After presenting the different kinds of audiovisual translation, the article focuses on the steps of the adaptation process: the linguistic challenges, the technical solutions often overlooked in analyses of the activity, the constraints, the professionals involved, and, last but not least, the norms that rule and standardize the field. South Park was chosen as the case study because it is an excellent example of a translation challenge that unfolds on multiple levels, making it possible to analyze the concrete problems that every line and every frame presents, and to verify the victories and defeats of the craft.

  6. DANCING AROUND THE SUBJECT WITH ROBOTS: ETHICAL COMMUNICATION AS A “TRIPLE AUDIOVISUAL REALITY”

    Directory of Open Access Journals (Sweden)

    Eleanor Sandry

    2012-06-01

    Full Text Available Communication is often thought of as a bridge between self and other, supported by what they have in common, and pursued with the aim of further developing this commonality. However, theorists such as John Durham Peters and Amit Pinchevski argue that this conception, connected as it is with the need to resolve and remove difference, is inherently ‘violent’ to the other and therefore unethical. To encourage ethical communication, they suggest that theory should instead support acts of communication for which the differences between self and other are not only retained, but also valued for the possibilities they offer. As a means of moving towards a more ethical stance, this paper stresses the importance of understanding communication as more than the transmission of information in spoken and written language. In particular, it draws on Fernando Poyatos’ research into simultaneous translation, which suggests that communication is a “triple audiovisual reality” consisting of language, paralanguage and kinesics. This perspective is then extended by considering the way in which Alan Fogel’s dynamic systems model also stresses the place of nonverbal signs. The paper explores and illustrates these theories by considering human-robot interactions because analysis of such interactions, with both humanoid and non-humanoid robots, helps to draw out the importance of paralanguage and kinesics as elements of communication. The human-robot encounters discussed here also highlight the way in which these theories position both reason and emotion as valuable in communication. The resulting argument – that communication occurs as a dynamic process, relying on a triple audiovisual reality drawn from both reason and emotion – supports a theoretical position that values difference, rather than promoting commonality as a requirement for successful communicative events. In conclusion, this paper extends this theory and suggests that it can form a basis

  7. "Singing in the Tube"--audiovisual assay of plant oil repellent activity against mosquitoes (Culex pipiens).

    Science.gov (United States)

    Adams, Temitope F; Wongchai, Chatchawal; Chaidee, Anchalee; Pfeiffer, Wolfgang

    2016-01-01

    Plant essential oils have been suggested as a promising alternative to the established mosquito repellent DEET (N,N-diethyl-meta-toluamide). Searching for an assay with generally available equipment, we designed a new audiovisual assay of repellent activity against mosquitoes "Singing in the Tube," testing single mosquitoes in Drosophila cultivation tubes. Statistics with regression analysis should compensate for limitations of simple hardware. The assay was established with female Culex pipiens mosquitoes in 60 experiments, 120-h audio recording, and 2580 estimations of the distance between mosquito sitting position and the chemical. Correlations between parameters of sitting position, flight activity pattern, and flight tone spectrum were analyzed. Regression analysis of psycho-acoustic data of audio files (dB[A]) used a squared and modified sinus function determining wing beat frequency WBF ± SD (357 ± 47 Hz). Application of logistic regression defined the repelling velocity constant. The repelling velocity constant showed a decreasing order of efficiency of plant essential oils: rosemary (Rosmarinus officinalis), eucalyptus (Eucalyptus globulus), lavender (Lavandula angustifolia), citronella (Cymbopogon nardus), tea tree (Melaleuca alternifolia), clove (Syzygium aromaticum), lemon (Citrus limon), patchouli (Pogostemon cablin), DEET, cedar wood (Cedrus atlantica). In conclusion, we suggest (1) disease vector control (e.g., impregnation of bed nets) by eight plant essential oils with repelling velocity superior to DEET, (2) simple mosquito repellency testing in Drosophila cultivation tubes, (3) automated approaches and room surveillance by generally available audio equipment (dB[A]: ISO standard 226), and (4) quantification of repellent activity by parameters of the audiovisual assay defined by correlation and regression analyses.
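    The wing-beat-frequency estimate at the heart of such an audio assay can be approximated with a simple FFT peak search. Only the 357 Hz figure is taken from the abstract; the sample rate, recording length, and noise level below are illustrative assumptions, and the synthetic tone stands in for a real flight-tone recording.

```python
import numpy as np

fs = 8000                       # sample rate in Hz (assumed; not from the paper)
t = np.arange(0, 1.0, 1 / fs)   # one second of audio
wbf_true = 357                  # mean wing beat frequency reported in the abstract

rng = np.random.default_rng(1)
audio = np.sin(2 * np.pi * wbf_true * t) + 0.3 * rng.standard_normal(t.size)

# Estimate the wing beat frequency as the dominant peak of the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
wbf_est = freqs[np.argmax(spectrum)]
print(f"estimated wing beat frequency: {wbf_est:.1f} Hz")
```

    A one-second window gives 1 Hz spectral resolution, which is more than enough to separate the ~357 Hz flight tone from broadband room noise.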

  8. Nuevos juguetes para un etnógrafo ansioso: derroteros del registro audiovisual

    Directory of Open Access Journals (Sweden)

    Agustina Pérez Rial

    2014-11-01

    Full Text Available The first generation of motion pictures expanded the possibilities of the iconic and indexical qualities of photography, of figuration and contact, as it complicated the links between the image and what is depicted. This paper proposes an analysis of the audiovisual image as the result of a conglomeration of signs in a corpus of discourses ranging from early ethnographic film to more recent developments, which consolidated a challenge to the heuristic power traditionally assigned to the image and its epistemic heir, the motion picture. The beginning of this journey can be placed at the end of the nineteenth century, a period in which a considerable number of films were produced by means of chronophotography with the purpose of documenting the lives of distant and unknown peoples. These were the years in which Felix-Louis Regnault, a member of the Society of Anthropology of Paris, thought of motion pictures as a privileged tool for studying the gestures of the human body and recorded, in 1895, the first scenes of an African woman. It is also the time in which ethnography went hand in hand with colonialism, and cinema became a metonymic device for the apprehension of colonized space. The main purpose of this work is to trace a path from the first uses of chronophotographic devices to current audiovisual productions. To do this, we study privileged rhetorical tools for the construction of otherness in order to give an account of key moments in the relationship between image and knowledge/science.

  9. Creatividad y producción audiovisual en la red: el caso de la serie andaluza

    Directory of Open Access Journals (Sweden)

    Jiménez Marín, Gloria

    2012-01-01

    Full Text Available Web 2.0 has made it possible for young creators to generate audiovisual content and distribute it through social media, without having to pass through the usual distribution channels that until now were indispensable. On the other side of the computer or mobile device wait receivers eager to consume video, an activity to which we devote more and more hours, with one fundamental difference: we have stopped watching the television set and instead consume more audiovisual content online on other kinds of systems and devices. In this context, Andalusia has witnessed the birth of web series, already considered cult works, which have demonstrated the potential of our young creators. However, the periodic production of episodes entails a financial effort that most of them cannot afford. Meanwhile, communication agencies face the phenomenon of online advertising: executives confront receivers who crave experiences and content, and these are the keys to advertising 2.0. From this point, the pieces begin to work: some have the ideas and the content; others, the funding. Such is the case of Niña Repelente, one of the online series with the greatest impact in recent years, and its sponsorship agreement, signed in 2010, with the leading telephone company in Spain, which brought the series to Tuenti, the home-grown social network most used among young Spaniards. This case gives rise to the present work, which analyzes the keys to these transformations in the different phases of audiovisual creation (creation, distribution, consumption), studies the critical variables, and opens questions for the world of communication in light of the latest trends in web applications.

  10. Sound and Music in Narrative Multimedia : A macroscopic discussion of audiovisual relations and auditory narrative functions in film, television and video games

    OpenAIRE

    Lund, Are Valen

    2012-01-01

    This thesis examines how we perceive an audiovisual narrative - here defined as film, television and video games - and seeks to establish a descriptive framework for auditory stimuli and their narrative functions in this regard. I initially adopt the viewpoint of cognitive psychology and account for basic information processing operations. I then discuss audiovisual perception in terms of the effects of sensory integration between the visual and auditory modalities on the construction of meani...

  11. Aula virtual y presencial en aprendizaje de comunicación audiovisual y educación Virtual and Real Classroom in Learning Audiovisual Communication and Education

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2010-10-01

    Full Text Available The blended teaching-learning model aims to use information and communication technologies (ICTs) to guarantee an education better adjusted to the European Higher Education Area (EHEA). The following research objectives were formulated: (1) to find out how teacher-training students rate the WebCT virtual classroom as a support for face-to-face teaching, and (2) to identify the advantages of the use of WebCT and ICTs by students in the case study 'Values and counter-values transmitted by television series watched by children and adolescents'. The research was carried out with a sample of 205 students of the University of La Rioja enrolled in the course 'Technologies Applied to Education'. Qualitative and quantitative content analysis was used for the objective, systematic and quantitative description of the manifest content of the documents. The results obtained show that the communication, content and assessment tools are rated favorably by the students. The study concludes that WebCT and ICTs support the methodological innovation of the EHEA based on student-centered learning. The students demonstrate their audiovisual competence in analyzing values and in expressing themselves through audiovisual documents in multimedia formats, and they bring a new, innovative and creative sense to the educational use of television series.

  12. Brazil between the screens and the street: production and consumption of audiovisual journalistic narratives about the nationwide political protests in June 2013

    OpenAIRE

    Beatriz Becker; Monica Machado

    2014-01-01

    This article discusses the challenges that technological and cultural mediations impose on audiovisual journalism in the coverage of the June 2013 protests, based on a televisual analysis of the enunciations of Jornal Nacional and of the digital contents and formats of Mídia Ninja. It is suggested that viewers and users tend to break their TV reading contracts and move to other screens through which they realize innovative forms of influencing recent history and wear out the tradition...

  13. El uso de la documentación audiovisual en los programas informativos diarios de televisión

    OpenAIRE

    Agirreazaldegi Berriozabal, Teresa

    1996-01-01

    [SPA] The aim of this research is to determine the quantitative and qualitative contribution of audiovisual documentation to the information that television offers daily. The time frame of the field research covers the years 1993 and 1994, within a geographic frame made up of the channels broadcasting in the Spanish state. The study starts from a theoretical approach to journalistic documentation, to audiovisual documentation and to the stu...

  14. Production of Audiovisual Content in Ultra High Definition (UHD): Immersive Experience for Multimedia Viewing on Screen TV and Smartphones

    Directory of Open Access Journals (Sweden)

    Francisco Javier MONTEMAYOR RUIZ

    2016-06-01

    Full Text Available This research analyzes the production of audiovisual content in ultra high definition (UHD) as a catalyst of the new narratological discourse implanted in the media, thanks to the expressive, interactive and immersive possibilities obtained with the use of UHD 4K and 8K for viewing on the large screen and, especially, on the 'fourth screen'. Through in-depth interviews with experts from the academic and professional media world, it establishes an overview of the convergence of technology and the content-creation industry, where products are evolving toward the entertainment needs of digital natives and adapting to new forms of consumption across multiple platforms, thereby facing a new business model both for the technology industry and for the entire audiovisual sector.

  15. El vídeo esférico en Youtube y su influencia en el contenido audiovisual

    Directory of Open Access Journals (Sweden)

    Jorge Gallardo-Camacho

    2015-01-01

    Full Text Available This article analyzes the emergence of the 360-degree videos available on YouTube since March 2015. Audiovisual language is transformed by spherical videos, which allow the viewer's point of view to be changed. We analyze the technological evolution, the diffusion and, above all, the content of this new type of video, which modifies the viewer's experience. To do so, we examine the 20 most-viewed videos using this new technology on YouTube Spain up to September 2015. We conclude that spherical videos influence the content, which becomes subordinate to this form of audiovisual presentation, preferably consumed on mobile devices.

  16. Assessing the effect of culturally specific audiovisual educational interventions on attaining self-management skills for chronic obstructive pulmonary disease in Mandarin- and Cantonese-speaking patients: a randomized controlled trial

    Science.gov (United States)

    Poureslami, Iraj; Kwan, Susan; Lam, Stephen; Khan, Nadia A; FitzGerald, John Mark

    2016-01-01

    Background Patient education is a key component in the management of chronic obstructive pulmonary disease (COPD). Delivering effective education to ethnic groups with COPD is a challenge. The objective of this study was to develop and assess the effectiveness of culturally and linguistically specific audiovisual educational materials in supporting self-management practices in Mandarin- and Cantonese-speaking patients. Methods Educational materials were developed using a participatory approach (patients were involved in the development and pilot testing of the educational materials), followed by a randomized controlled trial that assigned 91 patients to three intervention groups with audiovisual educational interventions and one control group (pamphlet). The patients were recruited from outpatient clinics. The primary outcomes were improved inhaler technique and perceived self-efficacy to manage COPD. The secondary outcome was improved patient understanding of pulmonary rehabilitation procedures. Results Subjects in all three intervention groups, compared with control subjects, demonstrated postintervention improvements in inhaler technique (P<0.001), preparedness to manage a COPD exacerbation (P<0.01), ability to achieve goals in managing COPD (P<0.01), and understanding of pulmonary rehabilitation procedures (P<0.05). Conclusion Culturally appropriate educational interventions designed specifically to meet the needs of Mandarin and Cantonese COPD patients are associated with significantly better understanding of self-management practices. Self-management education led to improved proper use of medications, ability to manage COPD exacerbations, and ability to achieve goals in managing COPD. Clinical implication A relatively simple, culturally appropriate disease-management education intervention improved inhaler techniques and self-management practices. Further research is needed to assess the effectiveness of self-management education on behavioral change and patient empowerment.

  17. Audio-Visual Biofeedback Does Not Improve the Reliability of Target Delineation Using Maximum Intensity Projection in 4-Dimensional Computed Tomography Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Wei, E-mail: wlu@umm.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Neuner, Geoffrey A.; George, Rohini; Wang, Zhendong; Sasor, Sarah [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States); Huang, Xuan [Research and Development, Care Management Department, Johns Hopkins HealthCare LLC, Glen Burnie, Maryland (United States); Regine, William F.; Feigenberg, Steven J.; D'Souza, Warren D. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland (United States)

    2014-01-01

    Purpose: To investigate whether coaching patients' breathing would improve the match between ITV_MIP (internal target volume generated by contouring in the maximum intensity projection scan) and ITV_10 (generated by combining the gross tumor volumes contoured in 10 phases of a 4-dimensional CT [4DCT] scan). Methods and Materials: Eight patients with a thoracic tumor and 5 patients with an abdominal tumor were included in an institutional review board-approved prospective study. Patients underwent 3 4DCT scans with: (1) free breathing (FB); (2) coaching using audio-visual (AV) biofeedback via the Real-Time Position Management system; and (3) coaching via a spirometer system (Active Breathing Coordinator or ABC). One physician contoured all scans to generate the ITV_10 and ITV_MIP. The match between ITV_MIP and ITV_10 was quantitatively assessed with volume ratio, centroid distance, root mean squared distance, and overlap/Dice coefficient. We investigated whether coaching (AV or ABC) or uniform expansions (1, 2, 3, or 5 mm) of ITV_MIP improved the match. Results: Although both AV and ABC coaching techniques improved frequency reproducibility and ABC improved displacement regularity, neither improved the match between ITV_MIP and ITV_10 over FB. On average, ITV_MIP underestimated ITV_10 by 19%, 19%, and 21%, with centroid distances of 1.9, 2.3, and 1.7 mm and Dice coefficients of 0.87, 0.86, and 0.88 for FB, AV, and ABC, respectively. Separate analyses indicated a better match for lung cancers or tumors not adjacent to high-intensity tissues. Uniform expansions of ITV_MIP did not correct for the mismatch between ITV_MIP and ITV_10. Conclusions: In this pilot study, audio-visual biofeedback did not improve the match between ITV_MIP and ITV_10. In general, ITV_MIP should be limited to lung cancers, and modification of ITV_MIP in each phase of the 4DCT data set is recommended.
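    The overlap metrics used in the study (volume ratio and Dice coefficient) are straightforward to compute on voxel masks. The nested-cube geometry below is a hypothetical illustration, not patient data; it merely mimics the reported tendency of the MIP-derived volume to underestimate the 10-phase volume.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient (overlap) between two boolean voxel masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical voxel masks: ITV_10 as a 16-voxel cube, ITV_MIP as a smaller
# 14-voxel cube nested inside it.
itv_10 = np.zeros((20, 20, 20), dtype=bool)
itv_10[2:18, 2:18, 2:18] = True
itv_mip = np.zeros((20, 20, 20), dtype=bool)
itv_mip[3:17, 3:17, 3:17] = True

volume_ratio = itv_mip.sum() / itv_10.sum()
overlap = dice(itv_mip, itv_10)
print(f"volume ratio: {volume_ratio:.2f}, Dice: {overlap:.2f}")
```

    Note that even a fully nested underestimate can still score a Dice coefficient in the 0.8 range, which is why the study reports volume ratio and centroid distance alongside overlap.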

  18. Audiovisual Translation and Subtitling. Spanish and Latin American Spanish subtitles: Analysis of Sex And The City translation

    OpenAIRE

    González Ruiz, Carmen

    2016-01-01

    Subtitling is one of the most common types of audiovisual translation used nowadays. Moreover, subtitling may become an even more complex process when dealing with Spanish, one of the most widely spoken languages in the world. For this reason, we have decided to analyze and compare two different variants of Spanish: Peninsular Spanish and Latin American Spanish. In doing so, we can identify factors such as culture and language variants which may influence these two varietie...

  19. Applying Corpus Methodology to Error Analysis of Students' Translation into the L1: The Context of Audiovisual Translation

    OpenAIRE

    Yakimovskaya, Ksenia

    2012-01-01

    The aim of the present research is to investigate error patterns in students' translation of audiovisual discourse and to describe factors influencing the process of interpretation. As translation into the mother tongue is usually considered to be the norm, extracts taken from the movie script were rendered by participants from English (L2) into Russian (L1). The data was collected from 12 learners studying at the Department of Translation and Interpretation at Pyatigorsk State Linguistic U...

  20. Use of High-Definition Audiovisual Technology in a Gross Anatomy Laboratory: Effect on Dental Students' Learning Outcomes and Satisfaction.

    Science.gov (United States)

    Ahmad, Maha; Sleiman, Naama H; Thomas, Maureen; Kashani, Nahid; Ditmyer, Marcia M

    2016-02-01

    Laboratory cadaver dissection is essential for three-dimensional understanding of anatomical structures and variability, but there are many challenges to teaching gross anatomy in medical and dental schools, including a lack of available space and qualified anatomy faculty. The aim of this study was to determine the efficacy of high-definition audiovisual educational technology in the gross anatomy laboratory in improving dental students' learning outcomes and satisfaction. Exam scores were compared for two classes of first-year students at one U.S. dental school: 2012-13 (no audiovisual technology) and 2013-14 (audiovisual technology), and section exams were used to compare differences between semesters. Additionally, an online survey was used to assess the satisfaction of students who used the technology. All 284 first-year students in the two years (2012-13 N=144; 2013-14 N=140) participated in the exams. Of the 140 students in the 2013-14 class, 63 completed the survey (45% response rate). The results showed that those students who used the technology had higher scores on the laboratory exams than those who did not use it, and students in the winter semester scored higher (90.17±0.56) than in the fall semester (82.10±0.68). More than 87% of those surveyed strongly agreed or agreed that the audiovisual devices represented anatomical structures clearly in the gross anatomy laboratory. These students reported an improved experience in learning and understanding anatomical structures, found the laboratory to be less overwhelming, and said they were better able to follow dissection instructions and understand details of anatomical structures with the new technology. Based on these results, the study concluded that the ability to provide the students a clear view of anatomical structures and high-quality imaging had improved their learning experience.
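    Assuming the ± values reported for the semester comparison (90.17±0.56 vs. 82.10±0.68) are standard errors of the mean, which the abstract does not actually specify, the separation between the two means can be checked with a quick back-of-the-envelope z statistic:

```python
import math

# Semester means with their ± values from the abstract, read here as
# standard errors of the mean (an assumption; the abstract does not specify).
winter_mean, winter_se = 90.17, 0.56
fall_mean, fall_se = 82.10, 0.68

# Two-sample z statistic computed from summary statistics alone.
z = (winter_mean - fall_mean) / math.sqrt(winter_se**2 + fall_se**2)
print(f"z = {z:.2f}")
```

    Under that assumption the z statistic is about 9, far beyond conventional significance thresholds, which is consistent with the study's conclusion that the semester difference was meaningful rather than noise.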