WorldWideScience

Sample records for audiovisual materials

  1. Audio-visual Materials and Rural Libraries

    Science.gov (United States)

    Escolar-Sobrino, Hipolito

    1972-01-01

    Audio-visual materials enlarge the educational work being done in the classroom and the library. This article examines the various types of audio-visual material and equipment and suggests ways in which audio-visual media can be used economically and efficiently in rural libraries. (Author)

  2. Audio-Visual Materials Catalog.

    Science.gov (United States)

    Anderson (M.D.) Hospital and Tumor Inst., Houston, TX.

    This catalog lists 27 audiovisual programs produced by the Department of Medical Communications of the University of Texas M. D. Anderson Hospital and Tumor Institute for public distribution. Video tapes, 16 mm. motion pictures and slide/audio series are presented dealing mostly with cancer and related subjects. The programs are intended for…

  3. Audio-Visual Materials for Chinese Studies.

    Science.gov (United States)

    Ching, Eugene, Comp.; Ching, Nora C., Comp.

    This publication is designed for teachers of Chinese language and culture who are interested in using audiovisual materials to supplement classroom instruction. The listings objectively present materials which are available; the compilers have not attempted to evaluate them. Content includes historical studies, techniques of brush painting, myths,…

  4. Making Audio-Visual Teaching Materials for Elementary Science

    OpenAIRE

    永田, 四郎

    1980-01-01

    For elementary science, the author and students produced several audio-visual teaching materials: projector slides, transparencies and other OHP materials, 8 mm sound films, and videotapes. We hope this kind of study will continue.

  5. Environmental conditions and the storage of audiovisual materials in ...

    African Journals Online (AJOL)

    This article discusses environmental factors which affect audiovisual (AV) materials in east and southern Africa. Since countries in the East and Southern Africa Regional Branch of the International Council on Archives (ESARBICA) are in a tropical region, AV materials are affected by high temperatures which result in ...

  6. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  7. Audiovisual integration in the human perception of materials.

    Science.gov (United States)

    Fujisaki, Waka; Goda, Naokazu; Motoyoshi, Isamu; Komatsu, Hidehiko; Nishida, Shin'ya

    2014-04-17

    Interest in the perception of the material of objects has been growing. While material perception is a critical ability for animals to properly regulate behavioral interactions with surrounding objects (e.g., eating), little is known about its underlying processing. Vision and audition provide useful information for material perception; using only its visual appearance or impact sound, we can infer what an object is made from. However, what material is perceived when the visual appearance of one material is combined with the impact sound of another, and what are the rules that govern cross-modal integration of material information? We addressed these questions by asking 16 human participants to rate how likely it was that audiovisual stimuli (48 combinations of visual appearances of six materials and impact sounds of eight materials) along with visual-only stimuli and auditory-only stimuli fell into each of 13 material categories. The results indicated strong interactions between audiovisual material perceptions; for example, the appearance of glass paired with a pepper sound is perceived as transparent plastic. Ratings of material-category likelihood follow a multiplicative integration rule in that the categories judged to be likely are consistent with both visual and auditory stimuli. On the other hand, ratings of material properties, such as roughness and hardness, follow a weighted-average rule. Despite the difference in their integration calculations, both rules can be interpreted as optimal Bayesian integration of independent audiovisual estimations for the two types of material judgment, respectively.
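
    The two integration rules summarized in this abstract (multiplicative for category likelihoods, weighted average for continuous properties) can be illustrated with a small numerical sketch; all values and category names below are invented for illustration and are not taken from the study.

```python
# Multiplicative rule for material-category likelihoods: a category is
# judged likely only when it is consistent with BOTH the visual and the
# auditory cue. All numbers are hypothetical.
visual = {"glass": 0.8, "plastic": 0.7, "metal": 0.1}  # from appearance
audio = {"glass": 0.1, "plastic": 0.8, "metal": 0.3}   # from impact sound

combined = {k: visual[k] * audio[k] for k in visual}
total = sum(combined.values())
category = {k: v / total for k, v in combined.items()}  # normalized
# "plastic" wins: it is the only category plausible under both cues,
# echoing the glass-appearance-plus-pepper-sound example above.

# Weighted-average rule for a continuous property such as hardness,
# with weights reflecting the assumed reliability of each cue.
w_visual, w_audio = 0.4, 0.6
hardness = w_visual * 0.9 + w_audio * 0.3
```

    Both rules can be read as Bayesian combination of independent unimodal estimates, which is the interpretation the abstract itself offers.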

  8. Recent Audio-Visual Materials on the Soviet Union.

    Science.gov (United States)

    Clarke, Edith Campbell

    1981-01-01

    Identifies and describes audio-visual materials (films, filmstrips, and audio cassette tapes) about the Soviet Union which have been produced since 1977. For each entry, information is presented on title, time required, date of release, cost (purchase and rental), and an abstract. (DB)

  9. Selected Audio-Visual Materials for Consumer Education. [New Version].

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  10. Planning Schools for Use of Audio-Visual Materials. No. 3: The Audio-Visual Materials Center.

    Science.gov (United States)

    National Education Association, Washington, DC. Dept. of Audiovisual Instruction.

    This manual discusses the role, organizational patterns, expected services, and space and housing needs of the audio-visual instructional materials center. In considering the housing of basic functions, photographs, floor layouts, diagrams, and specifications of equipment are presented. An appendix includes a 77-item bibliography, a 7-page list of…

  11. Music in Audio-Visual Materials.

    Science.gov (United States)

    Jaspers, Fons

    1991-01-01

    Reviews literature on music as a component of instructional materials. The relationship between music and emotion is examined; the use and effects of music are discussed; music as nonverbal communication is considered; effects on cognitive and attitudinal learning results are described; and emotional, cognitive, and structural needs are discussed.…

  12. Culture through comparison: creating audio-visual listening materials for a CLIL course

    National Research Council Canada - National Science Library

    Zhyrun, Iryna

    2016-01-01

    ... of audio-visual materials design for listening comprehension taking into consideration educational and cultural contexts, course content, and language learning outcomes of the program. In addition, it discusses advantages and limitations of created audio-visual materials by contrasting them with authentic materials of similar type foun...

  13. Embodiment and Materialization in "Neutral" Materials: Using Audio-Visual Analysis to Discern Social Representations

    Directory of Open Access Journals (Sweden)

    Anna Hedenus

    2015-11-01

    The use of audio-visual media puts bodies literally in focus, but there is as yet surprisingly little in the methodology literature about how to analyze the body in this kind of material. The aim of this article is to illustrate how qualitative audio-visual analysis, focusing on embodiment and materialization, may be used to discern social representations; this is of special interest when studying materials that have an explicit ambition to achieve "neutrality" without reference to particular kinds of bodies. Filmed occupational descriptions produced by the Swedish Employment Agency (SEA) are analyzed and discussed. The examples presented in the article illustrate how various forms of audio-visual analysis—content analysis, sequential analysis and narrative analysis—can be used to reveal how social representations of occupations and practitioners are embodied and materialized in these films. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs160139

  14. Easy Method for Inventory-Taking and Classification of Audio-Visual Material. First Edition, Revised.

    Science.gov (United States)

    Lamy-Rousseau, Francoise

    The alphanumeric code is a system put forward with the hope that it will bring uniformity in methods of inventory-taking and describing all sorts of audio-visual material which can be used in either French or English. The alphanumeric code classifies audio-visual materials in such a way as to indicate the exact nature of the media, the format, the…

  15. Planning Schools for Use of Audio-Visual Materials. No. 1--Classrooms, 3rd Edition.

    Science.gov (United States)

    National Education Association, Washington, DC.

    Intended to inform school board administrators and teachers of the current (1958) thinking on audio-visual instruction for use in planning new buildings, purchasing equipment, and planning instruction. Attention is given the problem of overcoming obstacles to the incorporation of audio-visual materials into the curriculum. Discussion includes--(1)…

  16. International Co-Production of Audio-Visual Material in Europe.

    Science.gov (United States)

    Greetfeld, Hans

    1979-01-01

    Describes an international advisory committee on audiovisual media, the procedure followed in the production and use of audiovisual materials and resources among member countries. Includes a list of the films that have been produced, over the years, by the committee. (GA)

  17. The Iroquois, A Bibliography of Audio-Visual Materials--With Supplement. (Title Supplied).

    Science.gov (United States)

    Kellerhouse, Kenneth; and others

    Approximately 25 sources of audiovisual materials pertaining to the Iroquois and other northeastern American Indian tribes are listed according to type of audiovisual medium. Among the less-common media are recordings of Iroquois music and do-it-yourself reproductions of Iroquois artifacts. Prices are given where applicable. (BR)

  18. Magazine Production: A Selected, Annotated Bibliography of Audio-Visual Materials.

    Science.gov (United States)

    Applegate, Edd

    This bibliography, which contains 13 annotations, is designed to help instructors choose appropriate audio-visual materials for a course in magazine production. Names and addresses of institutions from which the materials may be secured have been included. (MS)

  19. Teleconferences and Audiovisual Materials in Earth Science Education

    Science.gov (United States)

    Cortina, L. M.

    2007-05-01

    Unidad de Educación Continua y a Distancia, Universidad Nacional Autónoma de México, Coyoacán 04510, Mexico, MEXICO. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. In some cases, however, these resources go largely unused for reasons such as logistical problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as they would in classrooms. A course taught by teleconference requires effort from students and teachers without physical contact, but both have access to multimedia that supports the presentation. Well-selected multimedia material helps students identify and recognize digital information that aids understanding of the natural phenomena integral to the Earth sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly add to our efforts. We will present specific examples of our experiences at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  20. Audiovisual materials are effective for enhancing the correction of articulation disorders in children with cleft palate.

    Science.gov (United States)

    Pamplona, María Del Carmen; Ysunza, Pablo Antonio; Morales, Santiago

    2017-02-01

    Children with cleft palate frequently show speech disorders known as compensatory articulation. Compensatory articulation requires a prolonged period of speech intervention that should include reinforcement at home. However, relatives frequently do not know how to work with their children at home. To study whether the use of audiovisual materials especially designed to complement speech pathology treatment in children with compensatory articulation can be effective for stimulating articulation practice at home and consequently enhancing speech normalization in children with cleft palate. Eighty-two patients with compensatory articulation were studied. Patients were randomly divided into two groups. Both groups received speech pathology treatment aimed at correcting articulation placement. In addition, patients from the active group received a set of audiovisual materials to be used at home. Parents were instructed in strategies and ideas for using the materials with their children. Severity of compensatory articulation was compared at the onset and at the end of the speech intervention. After the speech therapy period, the group of patients using audiovisual materials at home demonstrated significantly greater improvement in articulation, as compared with the patients receiving speech pathology treatment on-site without audiovisual supporting materials. The results of this study suggest that audiovisual materials especially designed for practicing adequate articulation placement at home can be effective for reinforcing and enhancing speech pathology treatment of patients with cleft palate and compensatory articulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Photojournalism: The Basic Course. A Selected, Annotated Bibliography of Audio-Visual Materials.

    Science.gov (United States)

    Applegate, Edd

    Designed to help instructors choose appropriate audio-visual materials for the basic course in photojournalism, this bibliography contains 11 annotated entries. Annotations include the name of the materials, running time, whether black-and-white or color, and names of institutions from which the materials can be secured, as well as brief…

  2. An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.

    Science.gov (United States)

    Albert, Richard N.

    Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…

  3. Filmstrips, Phonograph Records, Cassettes: An Annotated List of Audio-Visual Materials.

    Science.gov (United States)

    Nazzaro, Lois B., Ed.

    The Reader Development Program of The Free Library of Philadelphia makes available audio-visual materials designed to aid under-educated adults and young adults in overcoming the educational, cultural and economic deficiencies in their lives. These materials are loaned for a week at a time to instructors, tutors, reading specialists, social…

  4. ACES Human Sexuality Training Network Handbook. A Compilation of Sexuality Course Syllabi and Audio-Visual Material.

    Science.gov (United States)

    American Association for Counseling and Development, Alexandria, VA.

    This handbook contains a compilation of human sexuality course syllabi and audio-visual materials. It was developed to enable sex educators to identify and contact one another, to compile Human Sexuality Course Syllabi from across the country, and to bring to attention audio-visual materials which are available for teaching Human Sexuality…

  5. The development of audio-visual materials to prepare patients for medical procedures: an oncology application.

    Science.gov (United States)

    Carey, M; Schofield, P; Jefford, M; Krishnasamy, M; Aranda, S

    2007-09-01

    This paper describes a systematic process for the development of educational audio-visual materials that are designed to prepare patients for potentially threatening procedures. Literature relating to the preparation of patients for potentially threatening medical procedures, psychological theory, theory of diffusion of innovations and patient information was examined. Four key principles were identified as being important: (1) stakeholder consultation, (2) provision of information to prepare patients for the medical procedure, (3) evidence-based content, and (4) promotion of patient confidence. These principles are described along with an example of the development of an audio-visual resource to prepare patients for chemotherapy treatment. Using this example, practical strategies for the application of each of the principles are described. The principles and strategies described may provide a practical, evidence-based guide to the development of other types of patient audio-visual materials.

  6. The Use of Video as an Audio-visual Material in Foreign Language Teaching Classroom

    Science.gov (United States)

    Cakir, Ismail

    2006-01-01

    In recent years, the tendency to use technology and integrate it into the curriculum has gained great importance. In particular, the use of video as an audio-visual material in foreign language teaching classrooms has grown rapidly because of the increasing emphasis on communicative techniques, and it is obvious that the use of…

  7. An Annotated List of Audio-Visual Materials, Supplement One. Reader Development Program.

    Science.gov (United States)

    Forinash, Melissa R., Ed.

    This annual supplement to the annotated list of audio-visual materials includes the filmstrips added to the Reader Development collection since June, 1971. The list is arranged alphabetically by filmstrip title, and a brief subject index follows the list. A catalog giving the addresses of filmstrip distributors is also included. A total of 43…

  8. Introduction to Mass Communications: A Selected, Annotated Bibliography of Audio-Visual Materials.

    Science.gov (United States)

    Applegate, Edd

    Intended for use at the college level, this selected, annotated bibliography is designed to help teachers choose appropriate audiovisual materials for introductory courses in mass communications. Each entry, in addition to the annotation, indicates the length of the film, whether it is in black and white or color, and the institution, with…

  9. Audiovisual Material as Educational Innovation Strategy to Reduce Anxiety Response in Students of Human Anatomy

    Science.gov (United States)

    Casado, Maria Isabel; Castano, Gloria; Arraez-Aybar, Luis Alfonso

    2012-01-01

    This study presents the design, effect and utility of using audiovisual material containing real images of dissected human cadavers as an innovative educational strategy (IES) in the teaching of Human Anatomy. The goal is to familiarize students with the practice of dissection and to transmit the importance and necessity of this discipline, while…

  10. Evaluation of Modular EFL Educational Program (Audio-Visual Materials Translation & Translation of Deeds & Documents)

    Science.gov (United States)

    Imani, Sahar Sadat Afshar

    2013-01-01

    Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced on both internal and external validity measures as well as the extent of compatibility of both courses with the…

  11. Catalogo de peliculas educativas y otros materiales audiovisuales (Catalogue of Educational Films and other Audiovisual Materials).

    Science.gov (United States)

    Encyclopaedia Britannica, Inc., Chicago, IL.

    This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…

  12. Recommended Audio-Visual Materials on South Africa.

    Science.gov (United States)

    Crofts, Marylee

    1984-01-01

    Presents a descriptive list of films, videocassettes, and slide sets available and recommended for teaching about South Africa and Namibia. Organizes cited materials according to the subjects they cover, including resistance to apartheid, the police state, homelands and Bantustans, the struggle of women, labor, the United States role, white rule,…

  13. UN MATERIAL AUDIOVISUAL DIDÁCTICO PARA LA ENSEÑANZA DE LA ESTADÍSTICA

    OpenAIRE

    Celina Marelli Espinoza García; José María Fernández Batanero

    2012-01-01

    The design and development of didactic audiovisual material allows teachers to take on, among their many roles, that of producer of teaching media and materials adapted to the context in which they work, thereby establishing new learning environments. The objective of this study was the design, development and evaluation of a didactic material for the teaching and learning of the didactic unit "the nature of statistics". A mixed-methods investigation (c...

  14. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  15. Audiovisual materials: a way to reinforce listening skills in primary school teacher education

    OpenAIRE

    González-Vera, Pilar; Hornero Corisco, Ana

    2016-01-01

    This paper aims to show the effective use of technology and audiovisual materials in the teaching and learning of EFL. It intends to offer an efficient way to improve students’ listening skills through the use of intensive listening (Harmer 2007). That listening skills need to be reinforced both in and outside the classroom is proved by the evidence of the weak competence of Spanish students in oral skills in English provided by a number of surveys at national and European level (Hornero et a...

  16. Interlibrary loan of audiovisual materials in the health sciences: how a system operates in New Jersey.

    Science.gov (United States)

    Crowley, C M

    1976-10-01

    An audiovisual loan program developed by the library of the College of Medicine and Dentistry of New Jersey is described. This program, supported by an NLM grant, has circulated audiovisual software from CMDNJ to libraries since 1974. Project experiences and statistics reflect the great demand for audiovisuals by health science libraries and demonstrate that a borrowing system following the pattern of traditional interlibrary loan can operate effectively and efficiently to serve these needs.

  17. [Learning to use semiautomatic external defibrillators through audiovisual materials for schoolchildren].

    Science.gov (United States)

    Jorge-Soto, Cristina; Abelairas-Gómez, Cristian; Barcala-Furelos, Roberto; Gregorio-García, Carolina; Prieto-Saborit, José Antonio; Rodríguez-Núñez, Antonio

    2016-01-01

    To assess the ability of schoolchildren to use an automated external defibrillator (AED) to deliver an effective shock, and their retention of the skill 1 month after a training exercise supported by audiovisual materials. Quasi-experimental controlled study in 205 initially untrained schoolchildren aged 6 to 16 years. AEDs were used to apply shocks to manikins. The students took a baseline skill test (T0) and were then randomized to an experimental or control group in the first phase (T1). The experimental group watched a training video, and both groups were then retested. The children were tested in simulations again 1 month later (T2). A total of 196 students completed all 3 phases. Ninety-six (95.0%) of the secondary school students and 54 (56.8%) of the primary schoolchildren were able to explain what an AED is. Twenty of the secondary school students (19.8%) and 8 of the primary schoolchildren (8.4%) said they knew how to use one. At T0, 78 participants (39.8%) were able to simulate an effective shock. At T1, 36 controls (34.9%) and 56 experimental-group children (60.2%) achieved an effective shock (P<.001). At T2, 53 controls (51.4%) and 61 experimental-group children (65.6%) gave effective shocks (P=.045). All the students completed the tests within 120 seconds, and their average times decreased with each test. The secondary school students achieved better results. Previously untrained secondary school students know what an AED is, and half of them can manage to use one in simulations. Brief narrative, audiovisual instruction improves students' skill in managing an AED and helps them retain what they learned for later use.

  18. Audiovisual Instruction. The Library of Education.

    Science.gov (United States)

    De Kieffer, Robert E.

    Audiovisual instruction has become a necessary part of good teaching. This monograph separates the experiences and devices of audiovisual materials into three categories: nonprojected, projected, and audio materials and equipment. The design of schools for the use of such materials, the importance of audiovisual research, and the administration of…

  19. Selección del material audiovisual en Antena 3 TV

    OpenAIRE

    Grupo de Selección, Departamento de Documentación Antena 3 T

    2004-01-01

    This article presents the selection guidebook of the Antena 3 TV Documentation Department, prepared with the aim of standardizing the selection process and adapting it to the new circumstances that have emerged in audiovisual media, with attention to the selection criteria and guidelines.

  20. Audiovisual Services Catalog.

    Science.gov (United States)

    Stockton Unified School District, CA.

    A catalog has been prepared to help teachers select audiovisual materials which might be helpful in elementary classrooms. Included are filmstrips, slides, records, study prints, films, tape recordings, and science equipment. Teachers are reminded that they are not limited to use of the suggested materials. Appropriate grade levels have been…

  1. Selected Mental Health Audiovisuals.

    Science.gov (United States)

    National Inst. of Mental Health (DHEW), Rockville, MD.

    Presented are approximately 2,300 abstracts on audio-visual materials--films, filmstrips, audiotapes, and videotapes--related to mental health. Each citation includes material title; name, address, and phone number of film distributor; rental and purchase prices; technical information; and a description of the contents. Abstracts are listed in…

  2. Relationship between Audio-Visual Materials and Environmental Factors on Students Academic Performance in Senior Secondary Schools in Borno State: Implications for Counselling

    Science.gov (United States)

    Bello, S.; Goni, Umar

    2016-01-01

    This is a survey study, designed to determine the relationship between audio-visual materials and environmental factors on students' academic performance in Senior Secondary Schools in Borno State: Implications for Counselling. The study set two research objectives, and tested two research hypotheses. The population of this study is 1,987 students…

  3. In Focus: Alcohol and Alcoholism Audiovisual Guide.

    Science.gov (United States)

    National Clearinghouse for Alcohol Information (DHHS), Rockville, MD.

    This guide reviews audiovisual materials currently available on alcohol abuse and alcoholism. An alphabetical index of audiovisual materials is followed by synopses of the indexed materials. Information about the intended audience, price, rental fee, and distributor is included. This guide also provides a list of publications related to media…

  4. Culture through Comparison: Creating Audio-Visual Listening Materials for a CLIL Course

    Science.gov (United States)

    Zhyrun, Iryna

    2016-01-01

    Authentic listening has become a part of CLIL materials, but it can be difficult to find listening materials that perfectly match the language level, length requirements, content, and cultural context of a course. The difficulty of finding appropriate materials online, financial limitations posed by copyright fees, and necessity to produce…

  5. The Best Colors for Audio-Visual Materials for More Effective Instruction.

    Science.gov (United States)

    Start, Jay

    A number of variables may affect the ability of students to perceive, and learn from, instructional materials. The objectives of the study presented here were to determine the projected color that provided the best visual acuity for the viewer, and the necessary minimum exposure time for achieving maximum visual acuity. Fifty…

  6. Material audiovisual para el estudio de los sensores y actuadores del sistema de inyección electrónica a gasolina

    OpenAIRE

    Padilla Calle, Henry Patricio; Pulla Morocho, Christian Omar

    2007-01-01

    The content of this final degree project, entitled "Material Audiovisual para el Estudio de Sensores y Actuadores del Sistema de Inyección a Gasolina", has as its main purpose to inform students, professionals, and the general public connected with the field of automotive mechanics about the main gasoline injection systems available, highlighting their principal components and the advantages they offer; in this way ...

  7. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    This article analyzes the possibilities of presenting audiovisual history in a society in which audiovisual media have progressively gained greater prominence. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language, and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the Western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  8. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the

  9. Alternative Media Technologies for the Open University. A Research Report on Costed Alternatives to the Direct Transmission of Audio-Visual Materials. Final Report. I.E.T. Papers on Broadcasting No. 79.

    Science.gov (United States)

    Bates, Tony; Kern, Larry

    This study examines alternatives to direct transmission of television and radio programs for courses with low student enrollment at the Open University. Examined are cut-off points in terms of student numbers at which alternative means of distributing audio or audio-visual materials become more economical than direct television or radio…

  10. El fénix quiere vivir : algunas consideraciones sobre la documentación audiovisual

    OpenAIRE

    Endean Gamboa, Robert

    2003-01-01

    The paper presents an overview of audio-visual documents, with a retrospective study and the points of view of national and foreign authors on the importance of audio-visual materials and their organization, preservation and dissemination.

  11. Habit, craft and creativity: how digital search habits shape the craft of professional audiovisual storytelling

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    The increased digitalization of audiovisual materials allows media professionals, such as news documentation specialists and documentary filmmakers, to increasingly make use of online access to digital archives to find sources for their audiovisual stories. This paper presents empirical insights

  12. Preservation and Management of Audiovisual Archives in Botswana ...

    African Journals Online (AJOL)

    This paper reviews the state of the audio-visual collections held by different government institutions in Botswana. The rationale of such review rests on the observation that although audiovisual materials constitute a vital information resource in such institutions, they are often not adequately managed after they are created.

  13. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how it can be used in specific (social, pedagogical, etc.) contexts and what its potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical

  14. Teaching Materials for French. Recorded and Audio-Visual Courses; Recorded and Audio-Visual Supplementary Material; Books for Conversation-Comprehension-Composition-Translation; Pictorial Readers-Classroom Magazines, Books with Games & Puzzles-Playlets-Songs; Primary School French.

    Science.gov (United States)

    Centre for Information on Language Teaching, London (England).

    These five lists form an annotated bibliography of instructional materials for use in teaching French, classified according to the age and level of instruction for which they were intended. Each list treats a separate category of materials. There is a title index, as well as an index of authors, editors, compilers, and adaptors, with each list.…

  15. Audiovisual Instruction in Pediatric Pharmacy Practice.

    Science.gov (United States)

    Mutchie, Kelly D.; And Others

    1981-01-01

    A pharmacy practice program added to the core baccalaureate curriculum at the University of Utah College of Pharmacy which includes a practice in pediatrics is described. An audiovisual program in pediatric diseases and drug therapy was developed. This program allows the presentation of more material without reducing clerkship time. (Author/MLW)

  16. Audiovisual Materials for the Teaching of Language Variation. An Annotated Bibliography. CAL-ERIC/CLL Series on Languages and Linguistics, No. 31.

    Science.gov (United States)

    Tripp, Rosemary; Behrens, Sophia

    This annotated bibliography provides information concerning audiovisual aids available for use in teaching and teacher training in language variation. A variety of topics are covered, including regional dialect studies, language change, language acquisition, social dialects, and language in education. Each entry includes the name of the product,…

  17. Audiovisual Materials for the Teaching of Language Acquisition: An Annotated Bibliography. CAL-ERIC/CLL Series on Languages and Linguistics, No. 32.

    Science.gov (United States)

    Tripp, Rosemary; Behrens, Sophia

    This annotated bibliography provides information concerning audiovisual aids for use in teaching and teacher training in language acquisition. A variety of areas is covered, including children's acquisition of morphology, phonology, and semantics, vocabulary and language development, the acquisition of specific items such as negatives and…

  18. Fundamentos da montagem audiovisual

    OpenAIRE

    Sílvia Okumura Hayashi

    2016-01-01

    This work is a study of audiovisual editing. The research is structured around the relations between the fundamental elements of the craft of editing images and sounds: time, space, editing itself, the tools of the trade, media pipelines, and the map. To that end, we investigate the algorithmic and technological nature of audiovisual editing, the forms of its application in industrial production, and also the possibilities of creating singular forms of editing that arise from the explor...

  19. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

    Full Text Available This paper presents the results of a study on the process of acquiring interpretative skills to decode audiovisual texts among adolescents and young people. Based on the conception of such competence as the ability to understand the meanings connoted beneath the literal discourse of audiovisual texts, this study compared two variables: the acquisition of such skills through personal and social experience in the consumption of audiovisual products (which is affected by age differences), and, on the other hand, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  20. Digital Audiovisual Archives: Unlocking our Audio and Audiovisual ...

    African Journals Online (AJOL)

    This article discusses the importance of digital sound and audiovisual archives in the broadcast environment. However, the digitisation of sound and audiovisual collections also impact on the non-broadcast environment. Digitising our AV collections has become critical. As such, we have entered an exciting phase in ...

  1. The Audio-Visual Man.

    Science.gov (United States)

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  2. Venezuela: Nueva Experiencia Audiovisual

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    Full Text Available In 1986 the Universidad Simón Bolívar (USB) created the Fundación para el Desarrollo del Arte Audiovisual, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design and photography of high artistic and technical quality, all without neglecting the theoretical and academic aspects of these disciplines.

  3. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced and a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  4. Search behavior of media professionals at an audiovisual archive: A transaction log analysis

    NARCIS (Netherlands)

    Huurnink, B.; Hollink, L.; van den Heuvel, W.; de Rijke, M.

    2010-01-01

    Finding audiovisual material for reuse in new programs is an important activity for news producers, documentary makers, and other media professionals. Such professionals are typically served by an audiovisual broadcast archive. We report on a study of the transaction logs of one such archive. The
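
    As a rough illustration of what a transaction-log analysis like the one in this record involves (the log format, session IDs and queries below are invented for the sketch, not taken from the study), per-session activity and search-term frequencies can be tabulated from raw log lines:

    ```python
    from collections import Counter

    # Hypothetical transaction-log lines: "timestamp<TAB>session_id<TAB>query"
    log_lines = [
        "2010-01-04T09:12:01\ts01\tnews fireworks",
        "2010-01-04T09:12:44\ts01\tfireworks amsterdam",
        "2010-01-04T10:03:19\ts02\tinterview prime minister",
    ]

    queries_per_session = Counter()  # how many queries each session issued
    term_freq = Counter()            # how often each search term occurs

    for line in log_lines:
        _, session, query = line.split("\t")
        queries_per_session[session] += 1
        term_freq.update(query.split())

    print(queries_per_session.most_common(1))  # busiest session
    print(term_freq.most_common(3))            # most frequent search terms
    ```

    Real archive logs would add fields such as clicked results and timestamps per action, but the counting pattern stays the same.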

  5. Federal Audiovisual Policy Act. Hearing before a Subcommittee of the Committee on Government Operations, House of Representatives, Ninety-Eighth Congress, Second Session on H.R. 3325 to Establish in the Office of Management and Budget an Office to Be Known as the Office of Federal Audiovisual Policy, and for Other Purposes.

    Science.gov (United States)

    Congress of the U. S., Washington, DC. House Committee on Government Operations.

    The views of private industry and government are offered in this report of a hearing on the Federal Audiovisual Policy Act, which would establish an office to coordinate federal audiovisual activity and require most audiovisual material produced for federal agencies to be acquired under contract from private producers. Testimony is included from…

  6. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding ... Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled ... audio-visual speech percepts and to measure coarticulatory effects.
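
    As a toy sketch of the idea behind such a map (a minimal SOM on 2-D data, not the phoneme-related features or training material of the poster; the grid size, learning rate and neighborhood radius are all invented), similarity of two inputs can be read off as the grid distance between their best-matching units:

    ```python
    import math
    import random

    random.seed(0)

    # 3x3 rectangular SOM; each unit holds a 2-D weight vector
    grid_w, grid_h, dim = 3, 3, 2
    weights = {(x, y): [random.random() for _ in range(dim)]
               for x in range(grid_w) for y in range(grid_h)}

    def best_matching_unit(v):
        """Unit whose weight vector is closest (squared Euclidean) to v."""
        return min(weights, key=lambda u: sum((wi - vi) ** 2
                                              for wi, vi in zip(weights[u], v)))

    def train(data, epochs=20, lr=0.5, radius=1.0):
        for _ in range(epochs):
            for v in data:
                bx, by = best_matching_unit(v)
                for (x, y), w in weights.items():
                    d2 = (x - bx) ** 2 + (y - by) ** 2
                    h = math.exp(-d2 / (2 * radius ** 2))  # neighborhood kernel
                    for i in range(dim):
                        w[i] += lr * h * (v[i] - w[i])

    # Two toy "percept" clusters; similar percepts should map to nearby units
    data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
    train(data)
    u1 = best_matching_unit([0.1, 0.1])
    u2 = best_matching_unit([0.9, 0.9])
    # Grid (Manhattan) distance between BMUs serves as a dissimilarity measure
    dist = abs(u1[0] - u2[0]) + abs(u1[1] - u2[1])
    ```

    In the work described above, the inputs would be combined auditory and visual speech features rather than 2-D points, but the similarity-via-map-distance principle is the same.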

  7. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and the viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video-editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality educational videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. They are particularly attractive to surgical trainees when real-time operative footage is used, and they serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  8. [Audio-visual aids and tropical medicine].

    Science.gov (United States)

    Morand, J J

    1989-01-01

    The author presents a list of audio-visual productions on tropical medicine, together with their main characteristics. He observes that audio-visual educational productions are often dissociated from their promotion, and therefore invites future creators to forward their work to the Audio-Visual Health Committee.

  9. Elaboración de un material audiovisual, sobre la aplicación de la musicoterapia psicoprofiláctica, para favorecer el proceso de embarazo desde el quinto mes de gestación, en mujeres adolescentes de 14 a 16 años en el sector de Chilibulo

    OpenAIRE

    Muñoz Vasco, Raúl Clemente

    2015-01-01

    This work presents audiovisual material resulting from a process of preventive intervention with psychoprophylactic music therapy and prenatal stimulation, carried out with a group of pregnant adolescents from the fifth month of pregnancy onward in Chilibulo Parish, located on the slopes of Unguí hill, south of Quito. The design and application of the product were carried out in educational institutions in the area between August and November 2014, through 10 psychoprophylactic prenatal music therapy sessions of 2 hours, wi...

  10. Publicación de materiales audiovisuales a través de un servidor de video-streaming Publication of audio-visual materials through a streaming video server

    Directory of Open Access Journals (Sweden)

    Acevedo Clavijo Edwin Jovanny

    2010-07-01

    Full Text Available This proposal aims to study several streaming-server alternatives in order to determine the best tool for publishing educational audiovisual material. The most widely used platforms were evaluated, taking into account the features and benefits of each server, among them Helix Universal Server, Microsoft Windows Media Server, Peer Cast and Darwin Server, and the server with the greatest capabilities and benefits was implemented for publishing videos for academic purposes through the intranet of the Universidad Cooperativa de Colombia, Barrancabermeja branch.

  11. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  12. Preventive Maintenance Handbook. Audiovisual Equipment.

    Science.gov (United States)

    Educational Products Information Exchange Inst., Stony Brook, NY.

    The preventive maintenance system for audiovisual equipment presented in this handbook is designed by specialists so that it can be used by nonspecialists at school sites. The report offers specific advice on safety factors and also lists major problems that should not be handled by nonspecialists. Other aspects of a preventive maintenance system…

  13. A Possible Neurophysiological Correlate of AudioVisual Binding and Unbinding in Speech Perception

    Directory of Open Access Journals (Sweden)

    Attigodu Chandrashekara Ganesh

    2014-11-01

    Full Text Available Audiovisual speech integration of auditory and visual streams generally ends up in a fusion into a single percept. One classical example is the McGurk effect, in which incongruent auditory and visual speech signals may lead to a fused percept different from either the visual or the auditory input. In a previous set of experiments, we showed that if a McGurk stimulus is preceded by an incongruent audiovisual context (composed of incongruent auditory and visual speech materials), the amount of McGurk fusion is largely decreased. We interpreted this result in the framework of a two-stage binding-and-fusion model of audiovisual speech perception, with an early audiovisual binding stage controlling the fusion/decision process and likely to produce unbinding, with less fusion, if the context is incoherent. In order to provide further electrophysiological evidence for this binding/unbinding stage, early auditory evoked N1/P2 responses were here compared during auditory, congruent and incongruent audiovisual speech perception, according to either prior coherent or incoherent audiovisual contexts. Following the coherent context, in line with previous EEG/MEG studies, visual information in the congruent audiovisual condition was found to modify auditory evoked potentials, with a latency decrease of P2 responses compared to the auditory condition. Importantly, both P2 amplitude and latency in the congruent audiovisual condition increased from the coherent to the incoherent context. Although potential contamination by visual responses from the visual cortex cannot be discarded, our results might provide a possible neurophysiological correlate of an early binding/unbinding process applied to audiovisual interactions.

  14. Bilingualism affects audiovisual phoneme identification.

    Science.gov (United States)

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment, we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience, i.e., exposure to a double phonological code during childhood, affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition, monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation, instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded more quickly than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  15. Bilingualism affects audiovisual phoneme identification

    Directory of Open Access Journals (Sweden)

    Sabine Burfin

    2014-10-01

    Full Text Available We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment, we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience, i.e., the exposure to a double phonological code during childhood, affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition, monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically deaf and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation, instead, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded more quickly than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  16. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively

  17. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

    Full Text Available This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, the difficulties imposed by the verbal foundation of knowledge, their reduced use within libraries, and possible paths toward a more consistent analysis of audiovisual media, supported by the semiotics of Charles Peirce.

  18. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating the information from audio and video leads to improved reliability of the audiovisual approach in

  19. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Full Text Available Michel Foucault teaches that all systematic speech, including speech that claims to be "neutral" or "a disinterested, objective view of what happens", is in fact a mechanism for articulating knowledge and, in turn, for forming power. The appearance of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of declarations by filmmakers, essays by academics and predictions by media demiurges.

  20. Audio-visual gender recognition

    Science.gov (United States)

    Liu, Ming; Xu, Xun; Huang, Thomas S.

    2007-11-01

    Combining different modalities for a pattern recognition task is a very promising field. Humans routinely fuse information from different modalities to recognize objects, perform inference, and so on. Audio-visual gender recognition is one of the most common tasks in human social communication: people can identify gender by facial appearance, by speech and also by body gait. Indeed, human gender recognition is a multi-modal data acquisition and processing procedure. However, computational multi-modal gender recognition has not been extensively investigated in the literature. In this paper, speech and facial images are fused to perform multi-modal gender recognition, exploring the improvement gained by combining different modalities.
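
    A minimal sketch of decision-level fusion in the spirit of this record (the scores, weights and threshold below are hypothetical illustrations, not the features or classifiers used in the paper): each modality produces a score, and a weighted sum decides.

    ```python
    # Hypothetical per-modality gender scores in [0, 1] (1.0 = "female"),
    # e.g. from a face classifier and a speech classifier
    def fuse(face_score, speech_score, w_face=0.6, w_speech=0.4):
        """Late (decision-level) fusion: weighted sum of modality scores."""
        return w_face * face_score + w_speech * speech_score

    def classify(face_score, speech_score, threshold=0.5):
        return "female" if fuse(face_score, speech_score) >= threshold else "male"

    # A confident face cue can override an ambiguous speech cue:
    # fuse(0.9, 0.45) = 0.6*0.9 + 0.4*0.45 = 0.72 >= 0.5
    print(classify(0.9, 0.45))  # prints "female"
    ```

    Feature-level fusion (concatenating audio and visual features before a single classifier) is the main alternative design; the weighted-sum version shown here is simply the easiest to illustrate.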

  1. Inconspicuous portable audio/visual recording: transforming an IV pole into a mobile video capture stand.

    Science.gov (United States)

    Pettineo, Christopher M; Vozenilek, John A; Kharasch, Morris; Wang, Ernest; Aitchison, Pam; Arreguin, Andrew

    2008-01-01

    Although a traditional simulation laboratory may have excellent installed audio/visual capabilities, often large classes overwhelm the limited space in the laboratory. With minimal monetary investment, it is possible to create a portable audio/visual stand from an old IV pole. An IV pole was transformed into an audio/visual stand to overcome the burden of transporting individual electronic components during a patient safety research project conducted in an empty patient room with a standardized patient. The materials and methods for making the modified IV pole are outlined in this article. The limiting factor of production is access to an old IV pole; otherwise a few purchases from an electronics store complete the audio/visual IV pole. The modified IV pole is a cost-effective and portable solution to limited space or the need for audio/visual capabilities outside of a simulation laboratory. The familiarity of an IV pole in a clinical setting reduces the visual disturbance of relocated audio/visual equipment in a room previously void of such instrumentation.

  2. Seventeenth "CW" Survey of Audiovisual Materials

    Science.gov (United States)

    Seittelman, Elizabeth E.

    1975-01-01

    This survey is an annotated list of films, filmstrips, slides, transparencies, pictures and prints, posters and charts, maps, replicas and models, coloring books and puzzles, jewelry, recordings and catalogs dealing with classical history and languages. Addresses of producers and suppliers are included. (CK)

  3. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  4. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.
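
    To illustrate what "temporal pooling of short sample-based quality scores" can mean in practice (a toy example; the score values and the exponential weighting scheme are assumptions for illustration, not the book's model), compare a plain mean with a recency-weighted pool over per-sample scores:

    ```python
    # Hypothetical per-sample audiovisual quality scores on a 1..5 MOS-like scale
    scores = [4.5, 4.4, 2.0, 2.1, 4.3, 4.4]

    def mean_pool(s):
        """Simplest temporal pooling: arithmetic mean of sample scores."""
        return sum(s) / len(s)

    def recency_pool(s, alpha=0.5):
        """Exponentially weighted pooling: later samples carry more weight,
        mimicking the recency effect in judgments of time-varying quality."""
        pooled = s[0]
        for x in s[1:]:
            pooled = alpha * pooled + (1 - alpha) * x
        return pooled

    print(round(mean_pool(scores), 2))     # overall average
    print(round(recency_pool(scores), 2))  # weighted toward the call's end
    ```

    Because the mid-call degradation is followed by recovery, the recency-weighted pool here ends up above the plain mean; a degradation at the end of the call would push it the other way.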

  5. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception ... and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. Also, it is argued that the combination of insights from quantitative experimental studies and qualitative audiovisual film analysis may actually ... that are more intimately linked with present concerns within humanistic film studies...

  6. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    Full Text Available It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims to reinforce this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sound and (iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  7. A promessa do audiovisual interativo

    Directory of Open Access Journals (Sweden)

    João Baptista Winck

    Full Text Available The audiovisual production chain uses cultural capital, especially creativity, as its main source of resources, inaugurating what has come to be called the creative economy. This value chain manufactures inventiveness as raw material, transforming ideas into objects of large-scale consumption. The television industry is embedded in a larger conglomerate of industries, such as fashion, the arts, music, and so on. This gigantic technological park brings together activities that take creation as their value, production at scale as their means, and the growth of intellectual property as an end in itself. The industrialization of creativity is gradually altering the body of theory concerning labour relations, tools and, above all, the concept of goods as products of intelligence.

  8. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality, with special application to audiovisual English-Arabic translation. Ample, thoroughly analysed examples are provided to support decision-making in audiovisual English-Arabic translation. A text is meaningful if and only if it carries meaning and knowledge to its audience and is optimally activatable, recoverable and accessible. The same applies equally to audiovisual translation (AVT): the latter should also carry knowledge which can be easily accessed by the TL audience and processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when the text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text is achieved when all aspects of cohesive devices are accounted for pragmatically. This, combined with a good amount of psycholinguistic input, gives a text optimal communicative value. Non-text is devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product.

  9. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  10. Audiovisual integration facilitates unconscious visual scene processing.

    Science.gov (United States)

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration. (c) 2015 APA, all rights reserved.

  11. Effects of Audiovisual Media on L2 Listening Comprehension: A Preliminary Study in French

    Science.gov (United States)

    Becker, Shannon R.; Sturm, Jessica L.

    2017-01-01

    The purpose of the present study was to determine whether integrating online audiovisual materials into the listening instruction of L2 French learners would have a measurable impact on their listening comprehension development. Students from two intact sections of second-semester French were tested on their listening comprehension before and…

  12. Smoking education for low-educated adolescents: Comparing print and audiovisual messages

    NARCIS (Netherlands)

    de Graaf, A.; van den Putte, B.; Zebregs, S.; Lammers, J.; Neijens, P.

    2016-01-01

    This study aims to provide insight into which modality is most effective for educating low-educated adolescents about smoking. It compares the persuasive effects of print and audiovisual smoking education materials. We conducted a field experiment with 2 conditions (print vs. video) and 3

  13. Designing between Pedagogies and Cultures: Audio-Visual Chinese Language Resources for Australian Schools

    Science.gov (United States)

    Yuan, Yifeng; Shen, Huizhong

    2016-01-01

    This design-based study examines the creation and development of audio-visual Chinese language teaching and learning materials for Australian schools by incorporating users' feedback and content writers' input that emerged in the designing process. Data were collected from workshop feedback of two groups of Chinese-language teachers from primary…

  14. Challenges of Using Audio-Visual Aids as Warm-Up Activity in Teaching Aviation English

    Science.gov (United States)

    Sahin, Mehmet; Sule, St.; Seçer, Y. E.

    2016-01-01

    This study aims to find out the challenges encountered in the use of video as audio-visual material as a warm-up activity in aviation English course at high school level. This study is based on a qualitative study in which focus group interview is used as the data collection procedure. The participants of focus group are four instructors teaching…

  15. Eyewitnesses of History: Italian Amateur Cinema as Cultural Heritage and Source for Audiovisual and Media Production

    NARCIS (Netherlands)

    Simoni, Paolo

    2015-01-01

    abstractThe role of amateur cinema as archival material in Italian media productions has only recently been discovered. Italy, as opposed to other European countries, lacked a local, regional and national policy for the collection and preservation of private audiovisual documents, which led, as a

  16. Practicas de produccion audiovisual universitaria reflejadas en los trabajos presentados en la muestra audiovisual universitaria Ventanas 2005-2009

    National Research Council Canada - National Science Library

    Urbanczyk, Maria; Fernando Hernandez, Yesid; Uribe Reyes, Catalina

    2011-01-01

    This article presents the results of research on university audiovisual production in Colombia, based on the works submitted to the Ventanas 2005-2009 university audiovisual showcase...

  17. Literacia y memoria audiovisual de las dictaduras

    OpenAIRE

    Fernández, Olivia

    2015-01-01

    During the Iberian dictatorships, cinema newsreels were produced by the states' propaganda agencies with the aim of controlling audiovisual information and supporting the dissemination of the values of the peninsular regimes. Today, this store of images from the Iberian dictatorial past constitutes an archive held by the film libraries and archives charged with preserving audiovisual memory. In recent years, these images have been the object of ...

  18. Attention to touch weakens audiovisual speech integration.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Soto-Faraco, Salvador

    2007-11-01

    One of the classic examples of multisensory integration in humans occurs when speech sounds are combined with the sight of corresponding articulatory gestures. Despite the longstanding assumption that this kind of audiovisual binding operates in an attention-free mode, recent findings (Alsius et al. in Curr Biol, 15(9):839-843, 2005) suggest that audiovisual speech integration decreases when visual or auditory attentional resources are depleted. The present study addressed the generalization of this attention constraint by testing whether a similar decrease in multisensory integration is observed when attention demands are imposed on a sensory domain that is not involved in speech perception, such as touch. We measured the McGurk illusion in a dual task paradigm involving a difficult tactile task. The results showed that the percentage of visually influenced responses to audiovisual stimuli was reduced when attention was diverted to a tactile task. This finding is attributed to a modulatory effect on audiovisual integration of speech mediated by supramodal attention limitations. We suggest that the interactions between the attentional system and crossmodal binding mechanisms may be much more extensive and dynamic than it was advanced in previous studies.

  19. Short Communication: Preservation of Photographs and Audiovisual ...

    African Journals Online (AJOL)

    It is argued that the audiovisual heritage, much of which remains untapped, lies scattered within individual nations, or has been collected and carried overseas, holds the key to collective memory. The current lack of requisite resources for the collection and preservation of this cultural heritage remains a major ...

  20. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    2008-01-01

    Previous research on automatic laughter detection has mainly focused on audio-based detection. In this study we present an audio-visual approach to distinguishing laughter from speech based on temporal features, and we show that integrating the information from audio and video channels leads

  1. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
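The trial-by-trial analysis described above can be sketched as follows: split synchrony-judgment trials by the modality order (auditory- vs. visual-leading) of the preceding trial and estimate a point of subjective simultaneity (PSS) for each group. This is a toy illustration with invented data and a crude mean-SOA estimate of the PSS rather than the psychometric-function fit used in the study.

```python
def pss(trials):
    """Estimate the PSS as the mean SOA of trials judged synchronous
    (a crude stand-in for fitting a full psychometric function).
    SOA < 0 means the audio led, in ms."""
    soas = [soa for soa, judged_sync in trials if judged_sync]
    return sum(soas) / len(soas)

def split_by_previous(soas, responses):
    """Group each trial's (SOA, judged-synchronous) pair by the modality
    order (sign of the SOA) of the *preceding* trial."""
    trials = list(zip(soas, responses))
    groups = {"prev_audio_lead": [], "prev_visual_lead": []}
    for prev_soa, trial in zip(soas, trials[1:]):
        key = "prev_audio_lead" if prev_soa < 0 else "prev_visual_lead"
        groups[key].append(trial)
    return groups

# Invented trial sequence: "synchronous" responses are biased toward the
# previous trial's lead direction, the signature of rapid recalibration.
soas = [-200, -100, 150, 100, -150, -50, 200, 250]   # ms
responses = [1, 1, 0, 1, 0, 1, 0, 1]                 # 1 = judged synchronous

groups = split_by_previous(soas, responses)
for key, trials in groups.items():
    print(key, pss(trials))
```

With these invented responses the PSS lands at -75 ms after auditory-leading trials and +175 ms after visual-leading trials, i.e. shifted toward the preceding trial's asynchrony, which is the contingency the abstract reports.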

  2. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  3. Audiovisual vocal outburst classification in noisy conditions

    NARCIS (Netherlands)

    Eyben, Florian; Petridis, Stavros; Schuller, Björn; Pantic, Maja

    2012-01-01

    In this study, we investigate an audiovisual approach for classification of vocal outbursts (non-linguistic vocalisations) in noisy conditions using Long Short-Term Memory (LSTM) Recurrent Neural Networks and Support Vector Machines. Fusion of geometric shape features and acoustic low-level

  4. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  5. Audio-Visual Technician | IDRC - International Development ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Controls the inventory of portable audio-visual equipment and mobile telephones within IDRC's loans library. Delivers, installs, uninstalls and removes equipment reserved by IDRC staff through the automated booking system. Participates in the planning process for upgrade and/or acquisition of new audio-visual ...

  6. A Critical Bibliography of Materials on China.

    Science.gov (United States)

    Witzel, Anne; Chapman, Rosemary

    This ungraded, annotated bibliography includes books of biography, history and society, culture, and literature. Filmstrips, study prints, slides and films are listed in the section of audio-visual materials. Also included is a list of sources of books and audio-visual materials that are included in a multi-media package on China used in the…

  7. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    Science.gov (United States)

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was subsequently shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in the performance of both groups over the duration of the three MCQs. The intervention group significantly outperformed their control-group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: This study appraises an intervention in medical education. Kirkpatrick Level 2b (modification of knowledge). Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Efficient visual search from synchronized auditory signals requires transient audiovisual events

    NARCIS (Netherlands)

    van den Burg, E.; Cass, J.R.; Olivers, C.N.L.; Theeuwes, J.; Alais, D

    2010-01-01

    Background: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is

  9. Estatuto do Audiovisual de TV na Internet

    Directory of Open Access Journals (Sweden)

    KILPP, Suzana

    2012-08-01

    Full Text Available This article is a partial report on the research project Audiovisualidades Digitais, which we carried out from 2009 to 2011. We make a brief foray into the websites of television broadcasters on the Internet to authenticate the types of video posts (and their imagistic characteristics) in the vicinity of other posts that take part in the interface design of their home pages and watch pages, and thereby offer preliminary considerations on the status of TV audiovisual content on the Internet. These considerations are put in tension with concepts from Benjamin, Bergson, Bolter and Grusin, Derrida, Flusser, Kilpp, Manovich, and McLuhan, and form part of an audiovisual ecology that we have been pursuing in our research.

  10. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective to improve users' social skills.

  11. Audiovisual data fusion for successive speakers tracking

    OpenAIRE

    Labourey, Quentin; Aycard, Olivier; Pellerin, Denis; Rombaut, Michèle

    2014-01-01

    International audience; In this paper, a method for tracking human speakers in audio and video data is presented, applied to conversation tracking with a robot. Audiovisual data fusion is performed in a two-step process. Detection is performed independently on each modality: face detection based on skin colour on video data, and sound-source localization based on the time delay of arrival on audio data. The results of those detection processes are then fused thanks to an adaptation of Bayes...

  12. Documental Audiovisual sobre el teatro callejero.

    OpenAIRE

    Chancusi Chiguano, Tatiana Andrea; Quinatoa Medina, Margarita Belén

    2016-01-01

    The product is a video documentary lasting 27 minutes and 32 seconds on the subject of street theatre. The audiovisual product foregrounds the work of people who perform this activity in public space, highlighting the role of women in this kind of theatre. The video chronicles the experiences, reflections, and daily comings and goings of Sonia Flores, also known in the art world as Maria Lola Vaca del Campo (Cow of the Field); it also chronicles the relatio...

  13. Musical expertise induces audiovisual integration of abstract congruency rules

    National Research Council Canada - National Science Library

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo

    2012-01-01

    .... The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is not generated due to incongruency of the unisensory...

  14. Panorama de les fonts audiovisuals internacionals en televisió : contingut, gestió i drets

    Directory of Open Access Journals (Sweden)

    López de Solís, Iris

    2014-12-01

    Full Text Available At both national and regional levels, Spain's main public service television channels rely upon a number of independent producers of audiovisual content to deliver news on international affairs, including news agencies and consortia and correspondent networks. Using the data provided by different channels, this paper examines the coverage, use and management of these sources as well as the regulations determining their use and storage. It also analyzes the history of the most prominent agencies and the online toolkits they offer. Finally, it describes the daily work of TVE's Eurovision department, which some months ago took on documentalists who, in addition to cataloguing audiovisual material, also carry out editing and production tasks.

  15. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  16. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  17. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is
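Decision-level (late) fusion, in contrast to feature-level fusion, combines the outputs of separately trained per-modality classifiers rather than their raw features. The record above does not specify the fusion rule, so the weighted-sum rule, weights, and numbers below are illustrative assumptions, not the paper's method.

```python
# Minimal decision-level fusion sketch: each modality-specific classifier
# outputs a posterior probability of "laughter"; fusion combines the two
# decisions, not the raw features. Weights and threshold are invented.

def fuse(p_audio: float, p_video: float, w_audio: float = 0.6) -> float:
    """Weighted-sum fusion of two posteriors; the weights sum to 1."""
    return w_audio * p_audio + (1.0 - w_audio) * p_video

def classify(p_audio: float, p_video: float, threshold: float = 0.5) -> str:
    """Label a segment from the fused posterior."""
    return "laughter" if fuse(p_audio, p_video) >= threshold else "speech"

# Audio is confident it is laughter, video mildly disagrees: the fused
# decision follows the more strongly weighted audio channel.
print(classify(0.9, 0.4))  # -> laughter
```

A practical advantage of this scheme is that each classifier can be trained, tuned, and replaced independently; only the fusion weights need recalibrating.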

  18. The Audio-Visual Marketing Handbook for Independent Schools.

    Science.gov (United States)

    Griffith, Tom

    This how-to booklet offers specific advice on producing video or slide/tape programs for marketing independent schools. Five chapters present guidelines for various stages in the process: (1) Audio-Visual Marketing in Context (aesthetics and economics of audiovisual marketing); (2) A Question of Identity (identifying the audience and deciding on…

  19. Thesaurus enrichment for query expansion in audiovisual archives

    NARCIS (Netherlands)

    Hollink, L.; Malaise, V.; Schreiber, A.Th.

    2010-01-01

    It is common practice in audiovisual archives to disclose documents using metadata from a structured vocabulary or thesaurus. Many of these thesauri have limited or no structure. The objective of this paper is to find out whether retrieval of audiovisual resources from a collection indexed with an

  20. Thesaurus enrichment for query expansion in audiovisual archives.

    NARCIS (Netherlands)

    Hollink, L.; Malaisé, V.; Schreiber, A.Th.

    2009-01-01

    It is common practice in audiovisual archives to disclose documents using metadata from a structured vocabulary or thesaurus. Many of these thesauri have limited or no structure. The objective of this paper is to find out whether retrieval of audiovisual resources from a collection indexed with an
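The retrieval setup these two records describe can be sketched as a simple term-matching search in which the query is first expanded with related thesaurus terms. The thesaurus content, document index, and function names below are all invented for illustration.

```python
# Toy thesaurus-based query expansion: a query term is expanded with its
# related thesaurus terms before matching against per-document metadata.

thesaurus = {
    "football": {"soccer", "sport"},
    "interview": {"conversation", "talk show"},
}

index = {
    "doc1": {"soccer", "stadium"},
    "doc2": {"talk show", "politics"},
    "doc3": {"news", "weather"},
}

def expand(term):
    """A query term plus its related thesaurus terms (if any)."""
    return {term} | thesaurus.get(term, set())

def search(term):
    """Documents whose metadata terms intersect the expanded query."""
    query = expand(term)
    return sorted(doc for doc, terms in index.items() if terms & query)

print(search("football"))   # matches doc1 via the related term "soccer"
print(search("interview"))  # matches doc2 via "talk show"
```

Without expansion, neither query would match anything, since no document is indexed with the literal terms "football" or "interview"; the expansion step is what bridges the vocabulary gap between searcher and cataloguer.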

  1. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  2. Survey of audiovisual standards and practices in health sciences libraries.

    OpenAIRE

    McCarthy, J

    1983-01-01

    A survey of audiovisual (AV) practices in health sciences libraries was conducted by the Audiovisual Standards and Practices Committee of the Medical Library Association. The objective was to determine the variety and extent of AV practices currently in use in health sciences libraries, as a preliminary step toward developing AV standards.

  3. Exposure to audiovisual programs as sources of authentic language ...

    African Journals Online (AJOL)

    Exposure to audiovisual programs as sources of authentic language input and second language acquisition in informal settings. ... Also, the present study reveals that the choice of authentic audiovisual input seems to have a more significant impact on language development compared to the amount of exposure. Southern ...

  4. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  5. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Full Text Available Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it's been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
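The lagged-correlation analysis underlying these statistics (mouth movements leading the voice by roughly 100-300 ms) can be illustrated with synthetic signals. The series, sampling rate, and delay below are invented; a real analysis would use tracked mouth-opening area and a measured speech envelope.

```python
# Toy sketch of a lagged-correlation analysis: correlate a mouth-opening-area
# series against an acoustic-envelope series at several lags and report the
# lag with the strongest Pearson correlation.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(mouth, envelope, max_lag):
    """Lag (in samples) at which mouth best predicts the later envelope."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(mouth) - lag
        scores[lag] = pearson(mouth[:n], envelope[lag:lag + n])
    return max(scores, key=scores.get)

# Synthetic 4 Hz modulation (inside the 2-7 Hz range above) sampled at
# 100 Hz; the envelope copies the mouth signal delayed by 20 samples
# (200 ms), inside the 100-300 ms lead reported in the abstract.
fs, delay = 100, 20
mouth = [math.sin(2 * math.pi * 4 * t / fs) for t in range(300)]
envelope = [0.0] * delay + mouth[:-delay]

lag = best_lag(mouth, envelope, max_lag=40)
print(lag * 1000 // fs, "ms")  # -> 200 ms
```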

  6. Alterations in audiovisual simultaneity perception in amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
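The simultaneity window reported above is the SOA range within which asynchronous stimuli are judged simultaneous at least 50% of the time. A minimal way to estimate its boundaries from response proportions is linear interpolation of the 50% crossing on each side of synchrony; the proportions below are invented, not data from the study.

```python
# Illustrative estimate of a simultaneity window: given the proportion of
# "simultaneous" responses at each tested SOA magnitude, find where the
# curve crosses 50% on each side of synchrony by linear interpolation.

def crossing(soas, props):
    """First 50% crossing walking outward from synchrony (props descending)."""
    for (s0, p0), (s1, p1) in zip(zip(soas, props), zip(soas[1:], props[1:])):
        if p0 >= 0.5 > p1:
            # Linearly interpolate between the bracketing SOAs.
            return s0 + (p0 - 0.5) * (s1 - s0) / (p0 - p1)
    return None

# Invented proportions judged "simultaneous" at each tested SOA (ms).
soas_v = [0, 100, 200, 300, 450]          # visual-lead side
props_v = [0.95, 0.90, 0.60, 0.30, 0.05]
soas_a = [0, 100, 200, 300, 450]          # auditory-lead side (magnitudes)
props_a = [0.95, 0.80, 0.40, 0.15, 0.05]

right = crossing(soas_v, props_v)   # visual-lead boundary
left = crossing(soas_a, props_a)    # auditory-lead boundary
print(round(left + right, 1), "ms total window width")  # -> 408.3 ms ...
```

Summing the two boundaries gives the overall window width; widening on either side, as reported for the amblyopia group, shows up directly as a larger boundary on that side.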

  7. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  8. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Directory of Open Access Journals (Sweden)

    Jean-Paul Noel

    Full Text Available Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the combined stimuli. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (flash-beep and speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as the first description of changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  9. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in the audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
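The "partialing out" step in this abstract corresponds to the standard first-order partial correlation formula, which can be computed directly from the three pairwise correlations. The numeric inputs below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z.

    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
    """
    num = r_xy - r_xz * r_yz
    den = math.sqrt((1.0 - r_xz ** 2) * (1.0 - r_yz ** 2))
    return num / den

# Hypothetical pairwise correlations (illustration only): incongruent audio
# score vs. pitch threshold (r_xy), each also correlated with nonverbal IQ.
r = partial_corr(r_xy=0.40, r_xz=0.50, r_yz=0.60)
```

Note how a modest raw correlation (0.40) can shrink substantially once a shared third variable is controlled for, which is the pattern the abstract reports for pitch thresholds after removing nonverbal IQ.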

  10. Audiovisual Quality Fusion based on Relative Multimodal Complexity

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Reiter, Ulrich

    2011-01-01

    In multimodal presentations the perceived audiovisual quality assessment is significantly influenced by the content of both the audio and visual tracks. Based on our earlier subjective quality test for finding the optimal trade-off between audio and video quality, this paper proposes a novel method for relative multimodal complexity analysis to derive the fusion parameter in objective audiovisual quality metrics. Audio and video qualities are first estimated separately using advanced quality models, and then they are combined into the overall audiovisual quality using a linear fusion. Based on carefully … metrics, compared to the fusion parameters obtained from the subjective quality tests using other known optimization methods.
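The linear fusion this abstract describes can be sketched as a weighted combination of the two modality scores, with the weight here taken from a hypothetical relative-complexity ratio. This is a simplified stand-in for the paper's derivation of the fusion parameter, not its actual model:

```python
def fuse_av_quality(audio_q, video_q, audio_complexity, video_complexity):
    """Linear audiovisual quality fusion.

    The fusion weight is derived from the relative complexity of the two
    streams (an assumed, simplified rule). Qualities are on a common
    scale, e.g. a 1-5 MOS scale.
    """
    w = video_complexity / (audio_complexity + video_complexity)
    return w * video_q + (1.0 - w) * audio_q

# A more complex video stream pulls the overall score toward video quality:
# w = 3/(1+3) = 0.75, so q = 0.75*2.0 + 0.25*4.0 = 2.5
q = fuse_av_quality(audio_q=4.0, video_q=2.0,
                    audio_complexity=1.0, video_complexity=3.0)
```

The design intuition matches the abstract: the modality carrying more content complexity dominates the perceived overall quality.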

  11. Entorno de creación de contenido audiovisual

    OpenAIRE

    IBÁÑEZ SEMPERE, JORGE

    2015-01-01

    This final-year project makes a complete audiovisual content creation environment available to anyone interested: using the studio built at the ETSIT, it is possible to work with a chroma key, use editing software, acquire basic audiovisual skills, and even stream one's own programme. Ibáñez Sempere, J. (2015). Entorno de creación de contenido audiovisual. http://hdl.handle.net/10251/51432.

  12. An Audio-visual Approach to Teaching the Social Aspects of Sustainable Product Design

    Directory of Open Access Journals (Sweden)

    Matthew Alan Watkins

    2015-07-01

    Full Text Available This paper considers the impact of audio-visual resources in enabling students to develop an understanding of the social aspects of sustainable product design. Building on literature concerning the learning preferences of ‘Net Generation’ learners, three audio-visual workshops were developed to introduce students to the wider social aspects of sustainability and encourage students to reflect upon the impact of their practice. The workshops were delivered in five universities in Britain and Ireland among undergraduate and postgraduate students. They were designed to encourage students to reflect upon carefully designed audio-visual materials in a group-based environment, seeking to foster the preferences of Net Generation learners through collaborative learning and learning through discovery. It also sought to address the perceived weaknesses of this generation of learners by encouraging critical reflection. The workshops proved to be popular with students and were successful in enabling them to grasp the complexity of the social aspects of sustainable design in a short span of time, as well as in encouraging personal responses and creative problem solving through an exploration of design thinking solutions.

  13. Child's dental fear: Cause-related factors and the influence of audiovisual modeling

    Directory of Open Access Journals (Sweden)

    Jayanthi Mungara

    2013-01-01

    Full Text Available Background: Delivery of effective dental treatment to a child patient requires thorough knowledge to recognize dental fear and its management by the application of behavioral management techniques. The Children's Fear Survey Schedule - Dental Subscale (CFSS-DS) helps in the identification of specific stimuli which provoke fear in children with regard to the dental situation. Audiovisual modeling can be successfully used in pediatric dental practice. Aim: To assess the degree of fear provoked by various stimuli in the dental office and to evaluate the effect of audiovisual modeling on children's dental fear using the CFSS-DS. Materials and Methods: Ninety children were divided equally into experimental (group I) and control (group II) groups and were assessed over two visits for their degree of fear and the effect of audiovisual modeling, with the help of the CFSS-DS. Results: The most fear-provoking stimulus for children was injection and the least was opening the mouth and having somebody look at them. There was no statistically significant difference in the overall mean CFSS-DS scores between the two groups during the initial session (P > 0.05). However, in the final session, a statistically significant difference was observed in the overall mean fear scores between the groups (P < 0.01). Significant improvement was seen in group I, while no significant change was noted in group II. Conclusion: Audiovisual modeling resulted in a significant reduction of overall fear as well as specific fear in relation to most of the items. A significant reduction of fear toward dentists, doctors in general, injections, being looked at, the sight, sounds, and act of the dentist drilling, and having the nurse clean their teeth was observed.

  14. Audiovisual Translation in LSP – A Case for Using Captioning in Teaching Languages for Specific Purposes

    Directory of Open Access Journals (Sweden)

    Jaroslaw Krajka

    2015-11-01

    Full Text Available Audiovisual translation, or producing subtitles for video materials, had long been out of reach of language teachers due to sophisticated and expensive software. However, with the advent of social networking and video sharing sites, it has become possible to create subtitles for videos in a much easier fashion without any expense. Subtitled materials open up interesting instructional opportunities in the classroom, giving teachers three channels of information delivery for flexible use. The present paper deals with the phenomenon of subtitling videos for the ESP classroom. The author starts with a literature review, then presents implementation models and classroom procedures. Finally, technical solutions are outlined.

  15. Nuevos actores sociales en el escenario audiovisual

    Directory of Open Access Journals (Sweden)

    Gloria Rosique Cedillo

    2012-04-01

    Full Text Available With the entry of private broadcasters into the Spanish audiovisual sector, the entertainment content of generalist television underwent far-reaching changes that were reflected in the programming schedules. This situation has opened a debate over having television, whether public or private, that does not meet the social expectations placed upon it. It has prompted civil groups, organized into viewers' associations, to undertake various actions aimed at influencing the direction that entertainment content is taking, with a strong commitment to educating viewers about audiovisual media and to citizen participation in television matters.

  16. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model, it also employs more free parameters in complex experiments when the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have…

  17. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration…

  18. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  19. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration…

  20. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

    Full Text Available The use of cutting-edge technologies such as wearable devices to control reactive audiovisual systems is rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach for the research and development of the WMTSensorGlove, a data-glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture of the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in “As sete mulheres de Jeremias Epicentro”, a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.
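The OSC messages such a glove sends have a simple binary layout: a null-terminated address pattern, a type-tag string, and big-endian arguments, each padded to 4-byte boundaries. A minimal stdlib-only encoder is sketched below; the address pattern and sensor value are made up for illustration and are not from the WMTSensorGlove:

```python
import struct

def osc_message(address, value):
    """Encode a single-float OSC message: address + ",f" type tag + big-endian float.

    OSC strings are null-terminated and padded to a multiple of 4 bytes.
    """
    def pad(b):
        # Always at least one null terminator, total length a multiple of 4.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# Hypothetical glove message: one flex-sensor reading on an assumed address.
packet = osc_message("/glove/flex1", 0.73)
# The packet could then be sent over UDP, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 9000))
```

In practice a library such as python-osc would handle this encoding, but the byte layout above is what travels on the wire.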

  1. Audio-visual detection benefits in the rat

    National Research Council Canada - National Science Library

    Gleiss, Stephanie; Kayser, Christoph

    2012-01-01

    ... multisensory protocols. We here demonstrate the feasibility of an audio-visual stimulus detection task for rats, in which the animals detect lateralized uni- and multi-sensory stimuli in a two-response forced choice paradigm...

  2. Narrativa audiovisual. Estrategias y recursos [Reseña]

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

    Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  3. Audio-visual perception of new wind parks

    OpenAIRE

    Yu, T.; Behm, H.; Bill, R.; Kang, J.

    2017-01-01

    Previous studies have reported negative impacts of wind parks on the public. These studies considered noise levels or visual impacts separately, but not audio-visual interactive factors. This study investigated the audio-visual impact of a new wind park using virtual technology that combined audio and visual features of the environment. Participants were immersed through Google Cardboard in an actual landscape without wind parks (ante operam) and in the same landscape with wind parks (post operam).

  4. Manual for the Development of WEEA Educational Materials.

    Science.gov (United States)

    Follett, Marguerite A., Comp.

    Guidelines for the production and dissemination of educational products through Women's Educational Equity Act Program (WEEAP) grants are presented. Basic requirements are given for manuscript preparation and style for print materials. Audiovisual production guidelines include general pointers, audiovisual product definitions, tips on technique,…

  5. GRAPE - GIS Repetition Using Audio-Visual Repetition Units and its Learning Effectiveness

    Science.gov (United States)

    Niederhuber, M.; Brugger, S.

    2011-09-01

    A new audio-visual learning medium has been developed at the Department of Environmental Sciences at ETH Zurich (Switzerland), for use in geographical information sciences (GIS) courses. This new medium, presented in the form of Repetition Units, allows students to review and consolidate the most important learning concepts on an individual basis. The new material consists of: a) a short enhanced podcast (recorded and spoken slide show) with a maximum duration of 5 minutes, which focuses on only one important aspect of a lecture's theme; b) one or two relevant exercises, covering different cognitive levels of learning, with a maximum duration of 10 minutes; and c) solutions for the exercises. During a pilot phase in 2010, six Repetition Units were produced by the lecturers. Twenty more Repetition Units will be produced by our students during the fall semesters of 2011 and 2012. The project is accompanied by a 5-year study (2009-2013) that investigates learning success using the new material, focusing on the question of whether or not the new material helps to consolidate and refresh basic GIS knowledge. It will be analysed based on longitudinal studies. Initial results indicate that the new medium helps to refresh knowledge, as the test groups scored higher than the control group. These results are encouraging and suggest that the new material, with its combination of short audio-visual podcasts and relevant exercises, helps to consolidate students' knowledge.

  6. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Directory of Open Access Journals (Sweden)

    Jean-Luc Schwartz

    2014-07-01

    Full Text Available An increasing number of neuroscience papers capitalize on the assumption, published in this journal, that visual speech would typically be 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases: for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they typically are in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures", providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  8. Audio-Visual Peripheral Localization Disparity

    Directory of Open Access Journals (Sweden)

    Ryota Miyauchi

    2011-10-01

    Full Text Available In localizing simultaneous auditory and visual events, the brain should map the audiovisual events onto a unified perceptual space in a subsequent spatial process for integrating and/or comparing multisensory information. However, there is little qualitative and quantitative psychological data for estimating multisensory localization in peripheral visual fields. We measured the relative perceptual direction of a sound to a flash when they were simultaneously presented in peripheral visual fields. The results demonstrated that the sound and flash were perceptually located at the same position when the sound was presented at 5 deg periphery from the flash. This phenomenon occurred even when trials in which the participants' eyes moved were excluded. The measurement of the location of each sound and flash in a pointing task showed that the perceptual location of the sound shifted toward the frontal direction and, conversely, the perceptual location of the flash shifted toward the periphery. Our findings suggest that the unisensory perceptual spaces of audition and vision have deviations in peripheral visual fields and, when the brain remaps unisensory locations of auditory and visual events into a unified perceptual space, the unisensory spatial information of the events can be suitably maintained.

  9. Causal inference of asynchronous audiovisual speech

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2013-11-01

    Full Text Available During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
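The causal-inference step this abstract formalises can be sketched with Gaussian likelihoods over a measured asynchrony: a common cause predicts asynchronies near zero, while independent causes predict a broad spread. All parameter values below are illustrative, not the paper's fitted values:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def p_common_cause(asynchrony_ms, prior_common=0.5,
                   sigma_common=70.0, sigma_independent=300.0):
    """Posterior probability that voice and face share a common cause,
    given one measured audiovisual asynchrony (ms), via Bayes' rule."""
    like_c1 = gauss(asynchrony_ms, 0.0, sigma_common)       # common cause: tight around 0
    like_c2 = gauss(asynchrony_ms, 0.0, sigma_independent)  # independent: broad
    num = like_c1 * prior_common
    return num / (num + like_c2 * (1.0 - prior_common))

# Near-synchronous speech strongly suggests a single talker...
p_near = p_common_cause(20.0)
# ...while a large asynchrony favours independent sources.
p_far = p_common_cause(400.0)
```

A synchrony-judgment response can then be modelled as "simultaneous" whenever this posterior exceeds a criterion, which is the kind of principled prediction the abstract contrasts with post hoc Gaussian curve fitting.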

  10. Student's preference of various audiovisual aids used in teaching pre- and para-clinical areas of medicine

    OpenAIRE

    Navatha Vangala

    2015-01-01

    Introduction: The formal lecture is among the oldest teaching methods that have been widely used in medical education. Delivering a lecture is made easier and better by the use of audiovisual aids (AV aids) such as a blackboard or whiteboard, an overhead projector, and PowerPoint presentations (PPT). Objective: To know the students' preference among various AV aids and their use in medical education, with an aim to improve their use in didactic lectures. Materials and Methods: The study was carried out amo...

  11. Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks

    NARCIS (Netherlands)

    Eyben, Florian; Petridis, Stavros; Schuller, Björn; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We investigate classification of non-linguistic vocalisations with a novel audiovisual approach, using Long Short-Term Memory (LSTM) Recurrent Neural Networks as highly successful dynamic sequence classifiers. The evaluation database is this year's Paralinguistic Challenge's Audiovisual Interest…

  12. A representação audiovisual das mulheres migradas The audiovisual representation of migrant women

    Directory of Open Access Journals (Sweden)

    Luciana Pontes

    2012-12-01

    Full Text Available In this paper I analyze the representations of migrant women in the audiovisual collections of some of the organizations that work with gender and immigration in Barcelona. In these audiovisuals I found a recurring association of migrant women with poverty, criminality, ignorance, passivity, lack of documentation, gender violence, compulsory and numerous motherhood, prostitution, etc. I therefore sought to understand how these representations take shape, studying the narrative, stylistic, visual, and verbal elements through which these images and discourses about migrant women are articulated.

  13. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  14. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path and deliver estimates of the audio and video quality. These outputs are sent to the audiovisual quality module, which provides an estimate of the overall audiovisual quality. Estimates of perceived quality are typically used both in the network-planning phase and as part of quality monitoring. The same audio quality model is used for both phases, while two variants of the video quality model have been developed to address the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, when the network is already set up, the aud...
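
    The module chaining described in this record can be illustrated with a minimal sketch. The coefficients below are invented for illustration and are not the model's actual parameters; the only assumption carried over from the abstract is that per-modality quality estimates feed an audiovisual module.

```python
# Hypothetical sketch of a parametric audiovisual-quality module: it takes the
# audio and video module outputs (on a 1-5 MOS scale) and combines them with
# an additive term plus an audio-video interaction term. The coefficients
# a, b, c, d are invented for illustration, not taken from the model.
def audiovisual_quality(audio_mos, video_mos, a=0.25, b=0.15, c=0.15, d=0.16):
    mos = a + b * audio_mos + c * video_mos + d * audio_mos * video_mos
    return max(1.0, min(5.0, mos))  # clip to the valid MOS range
```

    The interaction term makes the combined score degrade sharply when either modality is poor, matching the intuition that a bad audio track hurts the overall impression even when the video is pristine.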

  15. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article develops a perceptual approach to audio-visual mapping. Clearly perceivable cause-and-effect relationships can be problematic if one wants the audience to experience the music: perception would privilege those sonic qualities that fit prior concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is how an audio-visual mapping can produce a sense of causation while simultaneously confounding the actual cause-effect relationships. We call this a fungible audio-visual mapping, and our aim here is to characterize its constitution and appearance. We report a study that draws upon methods from experimental psychology to inform audio-visual instrument design and composition. Participants were shown several audio-visual mapping prototypes, after which we posed quantitative and qualitative questions regarding their sense of causation and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly unrelated components, with sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole.

  16. Mapping audiovisual translation investigations: research approaches and the role of technology

    OpenAIRE

    Matamala, Anna

    2017-01-01

    This article maps audiovisual translation research by contrastively analysing the abstracts presented at three audiovisual translation conferences ten years ago and today. The comparison deals with the audiovisual transfer modes and topics under discussion, and with the approach taken by the authors in their abstracts. The article then shifts the focus to the role of technology in audiovisual translation research, as it is considered an element that is impacting and will continue to impa...

  17. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Paré, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  18. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences... integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  19. Dynamic Bayesian Networks for Audio-Visual Speech Recognition

    Directory of Open Access Journals (Sweden)

    Liang Luhong

    2002-01-01

    Full Text Available The use of visual features in audio-visual speech recognition (AVSR) is justified both by the speech generation mechanism, which is essentially bimodal in its audio and visual representation, and by the need for features that are invariant to acoustic noise perturbation. As a result, current AVSR systems demonstrate significant accuracy improvements in environments affected by acoustic noise. In this paper, we describe the use of two statistical models for audio-visual integration, the coupled HMM (CHMM) and the factorial HMM (FHMM), and compare the performance of these models with existing models used in speaker-dependent audio-visual isolated word recognition. The statistical properties of both the CHMM and the FHMM allow modelling of the state asynchrony of the audio and visual observation sequences while preserving their natural correlation over time. In our experiments, the CHMM performs best overall, outperforming all the existing models and the FHMM.
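
    As a point of reference for the models in this record, the simplest audiovisual integration scheme is a state-synchronous HMM in which audio and visual emission likelihoods are multiplied per state. The sketch below implements that baseline only, not the paper's CHMM or FHMM, which relax the synchrony assumption by giving each stream its own state chain with coupled transitions; all matrices are toy values.

```python
import numpy as np

def forward_two_stream(pi, A, b_audio, b_video):
    """State-synchronous two-stream HMM forward pass.

    pi: (S,) initial state probabilities
    A:  (S, S) state-transition matrix
    b_audio, b_video: (T, S) per-frame emission likelihoods for each stream
    Returns the total likelihood of the audiovisual observation sequence,
    fusing the streams by multiplying their emission likelihoods per state.
    """
    alpha = pi * b_audio[0] * b_video[0]
    for t in range(1, len(b_audio)):
        alpha = (alpha @ A) * b_audio[t] * b_video[t]
    return float(alpha.sum())

# Toy example: two states, three frames. With all emission likelihoods set
# to 1, the forward pass just sums over state paths and returns exactly 1.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
ones = np.ones((3, 2))
total = forward_two_stream(pi, A, ones, ones)
```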

  20. Perceived synchrony for realistic and dynamic audiovisual events

    Directory of Open Access Journals (Sweden)

    Ragnhild Eg

    2015-06-01

    Full Text Available In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
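
    The windows of temporal integration and points of subjective simultaneity discussed in this record can be summarized from raw synchrony judgments in a few lines. The sketch below uses invented response proportions, not data from the study, and a simple moment-based summary; the study itself would fit a psychometric curve.

```python
import numpy as np

# Invented example data: proportion of "synchronous" responses at each
# stimulus-onset asynchrony (negative SOA = audio leads, in milliseconds).
soa_ms = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.50, 0.15])

# Moment-based summary: treat the normalized response curve as a distribution.
w = p_sync / p_sync.sum()
pss = float(np.sum(w * soa_ms))                           # point of subjective simultaneity
window = float(np.sqrt(np.sum(w * (soa_ms - pss) ** 2)))  # integration-window spread
```

    With these invented proportions the PSS comes out slightly positive, i.e. a tolerated visual lead, and the spread gives a single-number summary of the integration window's width.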

  1. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information that was user-friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment, and user involvement is recognized as important in the development of service provision. The aims of this paper are (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance, and a dedicated operational team dealt with the logistics of the project, including ethics, finance, scriptwriting, filming, editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities, which included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with; however, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  2. Herramienta observacional para el estudio de conductas violentas en un cómic audiovisual

    Directory of Open Access Journals (Sweden)

    Zaida Márquez

    2012-01-01

    Full Text Available This research paper presents a study aimed at structuring a system of categories for the observation and description of violent behavior in an audiovisual children's program, specifically in cartoons. An audiovisual cartoon with three main female characters was chosen as the sample, and one of its chapters was selected at random for observation. Categories were established using the taxonomic criteria proposed by Anguera (2001) and were made up of the behaviors typed according to levels of response. To identify a stable behavioral pattern, event sampling was performed, using all occurrences of one or several behaviors registered in the observed sessions. The episode was analyzed by two observers who viewed the material simultaneously, making two observations, registering the relevant data and contrasting opinions. The researchers determined a set of categories expressing violent behavior: nonverbal behavior, spatial behavior, and vocal/verbal behavior. It was concluded that there was a predominant and stable pattern of violent behavior in the cartoon observed.

  3. Evolution of audiovisual production in five Spanish Cybermedia

    Directory of Open Access Journals (Sweden)

    Javier Mayoral Sánchez

    2014-12-01

    Full Text Available This paper quantifies and analyzes the evolution of the audiovisual production of five Spanish digital newspapers: abc.es, elconfidencial.com, elmundo.es, elpais.com and lavanguardia.com. To this end, the videos published on the five homepages were studied for four weeks (fourteen days in November 2011 and another fourteen in March 2014). This diachronic perspective reveals a remarkable contradiction in online media's treatment of audiovisual products. Even with very considerable differences between them, the five media analyzed publish more and more videos, and they do so in the most valued areas of their homepages. However, they do not show a willingness to engage firmly

  4. El archivo de RTVV: Patrimonio Audiovisual de la Humanidad

    Directory of Open Access Journals (Sweden)

    Hidalgo Goyanes, Paloma

    2014-07-01

    Full Text Available Audiovisual documents are important for the study of the twentieth and twenty-first centuries. Television archives contribute to the formation of the collective imaginary and form part of the Audiovisual Heritage of Humanity. Under current legislation, the preservation of the RTVV audiovisual archive is the responsibility of the public authorities, and it is a right of citizens and taxpayers as heirs to this heritage, which reflects their history, their culture and their language.

  5. Eyewitnesses of History: Italian Amateur Cinema as Cultural Heritage and Source for Audiovisual and Media Production

    Directory of Open Access Journals (Sweden)

    Paolo Simoni

    2015-12-01

    Full Text Available The role of amateur cinema as archival material in Italian media productions has only recently been discovered. Italy, as opposed to other European countries, lacked local, regional and national policies for the collection and preservation of private audiovisual documents, which resulted in the inaccessibility of these sources. In 2002 the Archivio Nazionale del Film di Famiglia (Italy's Amateur Film Archive), founded in Bologna by the Home Movies Association, became the reference repository for home movies and amateur cinema, promoting the availability of a cultural heritage that had previously been neglected. Today, it preserves about 5,000 hours of footage, contributes to documentary film productions and acts as a cultural and production center. The impact of the Home Movies Archive on the Italian audiovisual scene, and its sustainable prospects, strengthen the awareness that amateur film offers new opportunities to discover and represent the past from a different perspective, that of an eyewitness "from below". The article surveys the European and Italian discovery of amateur cinema as a historical source since the seventies, and reviews cases from the Italian panorama during the last fifteen years that have powerfully raised attention to home movies and amateur archive material.

  6. L'Arxiu de la Paraula : context i projecte del repositori audiovisual de l'Ateneu Barcelonès

    Directory of Open Access Journals (Sweden)

    Alcaraz Martínez, Rubén

    2014-12-01

    Full Text Available This paper reports on the project to digitize the audiovisual archives of the Ateneu Barcelonès, which was launched by that institution's Library and Archive department in 2011. The paper explains the methodology used to create the repository, focusing on the management of analogue files and born-digital materials and the question of author's rights. Finally, it presents the new repository L'Arxiu de la Paraula (the Word Archive) and the new website, @teneu hub, which are designed to disseminate the Ateneu's audiovisual heritage and provide centralized access to its different contents.

  8. Kijkwijzer: The Dutch rating system for audiovisual productions

    NARCIS (Netherlands)

    Valkenburg, P.M.; Beentjes, J.W.J.; Nikken, P.; Tan, E.S.H.

    2002-01-01

    Kijkwijzer is the name of the new Dutch rating system in use since early 2001 to provide information about the possible harmful effects of movies, home videos and television programs on young people. The rating system is meant to provide audiovisual productions with both age-based and content-based

  9. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  10. Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2013-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their typically developing peers. To shed light on possible differences in the maturation of audiovisual speech integration, we tested younger (ages 6-12) and older (ages 13-18) children with and without ASD on a task indexing such multisensory integration. To do this, we used the McGurk effect, in which the pairing of incongruent auditory and visual speech tokens typically results in the perception of a fused percept distinct from the auditory and visual signals, indicative of active integration of the two channels conveying speech information. Whereas little difference was seen in audiovisual speech processing (i.e., reports of McGurk fusion) between the younger ASD and TD groups, there was a significant difference at the older ages. While TD controls exhibited an increased rate of fusion (i.e., integration) with age, children with ASD failed to show this increase. These data suggest arrested development of audiovisual speech integration in ASD. The results are discussed in light of the extant literature and necessary next steps in research. PMID:24218241

  11. A montagem audiovisual a partir de mapa multitemporal

    Directory of Open Access Journals (Sweden)

    Leonardo Souza

    2013-07-01

    Full Text Available http://dx.doi.org/10.5007/1807-9288.2013v9n1p193   Situated within the discussion of contemporary technologies in the teaching of the audiovisual arts, this article addresses the teaching of audiovisual editing based on the composition of temporal multiplicities. In this context, the term audiovisual designates digital video composed of multiple temporal flows. This definition refers to the application of hypervideo to the study of audiovisual editing, seeking to establish relations with research on technology in art and with the study of the contemporary audiovisual arts. Temporal multiplicities are understood as the diverse temporal flows, uchronic times according to Couchot (2005), that the digital image has made perceptible in rhizomatic editing. Starting from the concept of uchronic time, this article investigates the temporal forms and narratives that become possible in audiovisual editing and in learning it. The article also presents the software developed for editing multitemporalities, intended to support the teaching of editing in the audiovisual arts.

  12. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers...

  13. Effect of Audio-Visual Intervention Program on Cognitive ...

    African Journals Online (AJOL)

    Thus the purpose of the study was to examine the effectiveness of the audio-visual intervention program on the cognitive development of preschool children in relation to their socio-economic status. The researcher employed an experimental method to conduct the study. The sample consisted of 100 students from preschool of ...

  14. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  15. Audiovisual Ethnography of Philippine Music: A Process-oriented Approach

    Directory of Open Access Journals (Sweden)

    Terada Yoshitaka

    2013-06-01

    Full Text Available Audiovisual documentation has been an important part of ethnomusicological endeavors, but until recently it was treated primarily as a tool of preservation and/or documentation that supplements written ethnography, albeit with a few notable exceptions. The proliferation of inexpensive video equipment has encouraged an unprecedented number of scholars and students in ethnomusicology to become involved in filmmaking, but its potential as a methodology has not been fully explored. As a small step toward redefining the application of audiovisual media, Dr. Usopay Cadar, my teacher in Philippine music, and I produced two films, one on Maranao kolintang music and the other on Maranao culture in general, based on the audiovisual footage we collected in 2008. This short essay describes how the screenings of these films were organized in March 2013 for diverse audiences in the Philippines, and what types of reactions and interactions transpired during the screenings. These screenings were organized both to obtain feedback about the content of the films from the caretakers and stakeholders of the documented tradition and to create a venue for interactions and collaborations to discuss the potential of audiovisual ethnography. Drawing on the analysis of the current project, I propose to regard film not as a fixed product but as a living and organic site that is open to commentaries and critiques, where changes can be made throughout the process. In this perspective, ‘filmmaking’ refers to the entire process of research, filming, editing and post-production activities.

  16. Accessing Audiovisual Heritage: A Roadmap for Collaborative Innovation

    NARCIS (Netherlands)

    Oomen, Johan; Ordelman, Roeland J.F.

    Digitization of audiovisual archives is opening up a wealth of challenges and possibilities for innovations in science, education, and business. The key to unlocking archives for innovation is multimedia technology. In this article the authors zoom in on one of the largest multimedia archives in

  17. Assessing the Impact of Audiovisual Translation on the Improvement ...

    African Journals Online (AJOL)

    Audiovisual translation (AVT) or screen translation is a term used to refer to any language and cultural transfer aimed at translating original dialogues coming from any acoustic or visual product. Academic Literacy (AL) is viewed as the ability to cope with the reading, thinking and reasoning demands required of a student ...

  18. Audio-Visual Communications, A Tool for the Professional

    Science.gov (United States)

    Journal of Environmental Health, 1976

    1976-01-01

    The manner in which the Cuyahoga County, Ohio Department of Environmental Health utilizes audio-visual presentations for communication with business and industry, professional public health agencies and the general public is presented. Subjects including food sanitation, radiation protection and safety are described. (BT)

  19. Bi-directional audiovisual influences on temporal modulation discrimination.

    Science.gov (United States)

    Varghese, Leonard; Mathias, Samuel R; Bensussen, Seth; Chou, Kenny; Goldberg, Hannah R; Sun, Yile; Sekuler, Robert; Shinn-Cunningham, Barbara G

    2017-04-01

    Cross-modal interactions of auditory and visual temporal modulation were examined in a game-like experimental framework. Participants observed an audiovisual stimulus (an animated, sound-emitting fish) whose sound intensity and/or visual size oscillated sinusoidally at either 6 or 7 Hz. Participants made speeded judgments about the modulation rate in either the auditory or visual modality while doing their best to ignore information from the other modality. Modulation rate in the task-irrelevant modality matched the modulation rate in the task-relevant modality (congruent conditions), was at the other rate (incongruent conditions), or had no modulation (unmodulated conditions). Both performance accuracy and parameter estimates from drift-diffusion decision modeling indicated that (1) the presence of temporal modulation in both modalities, regardless of whether modulations were matched or mismatched in rate, resulted in audiovisual interactions; (2) congruence in audiovisual temporal modulation resulted in more reliable information processing; and (3) the effects of congruence appeared to be stronger when judging visual modulation rates (i.e., audition influencing vision), than when judging auditory modulation rates (i.e., vision influencing audition). The results demonstrate that audiovisual interactions from temporal modulations are bi-directional in nature, but with potential asymmetries in the size of the effect in each direction.
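
    The drift-diffusion account used in this record can be illustrated with a toy simulation; all parameter values and trial counts below are invented for the sketch. More reliable processing is modeled as a higher drift rate, which should yield both higher accuracy and faster mean decisions.

```python
import numpy as np

def simulate_ddm(drift, n_trials=500, threshold=1.0, dt=0.005, noise=1.0, seed=0):
    """Simulate simple drift-diffusion decisions: evidence accumulates with a
    constant drift plus Gaussian noise until it hits +threshold (correct) or
    -threshold (error). Returns (accuracy, mean decision time in seconds)."""
    rng = np.random.default_rng(seed)
    n_correct, times = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        n_correct += x >= threshold
        times.append(t)
    return n_correct / n_trials, float(np.mean(times))

# Congruent audiovisual modulation -> more reliable evidence (higher drift);
# incongruent -> less reliable evidence (lower drift). Drift values invented.
acc_cong, rt_cong = simulate_ddm(drift=2.0)
acc_incong, rt_incong = simulate_ddm(drift=0.8)
```

    In the simulation the high-drift (congruent) condition is both more accurate and faster on average, the signature the record's modeling attributes to congruent temporal modulation.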

  20. Audiovisual perception of congruent and incongruent Dutch front vowels.

    Science.gov (United States)

    Valkenier, Bea; Duyne, Jurriaan Y; Andringa, Tjeerd C; Baskent, Deniz

    2012-12-01

    Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Identification of Dutch front vowels /i, y, e, Y/ that share all features other than height and lip-rounding was measured for congruent and incongruent audiovisual conditions. The audio channel was systematically degraded by adding noise, increasing the reliance on visual cues. The height feature was more robustly carried over through the auditory channel and the lip-rounding feature through the visual channel. Hence, congruent audiovisual presentation enhanced identification, while incongruent presentation led to perceptual fusions and thus decreased identification. Visual cues influence the identification of congruent as well as incongruent audiovisual vowels. Incongruent visual information results in perceptual fusions, demonstrating that the McGurk effect can be instigated by long phonemes such as vowels. This result extends to the incongruent presentation of the visually less reliably perceived height. The findings stress the importance of audiovisual congruency in communication devices, such as cochlear implants and videoconferencing tools, where the auditory signal could be degraded.

  1. Safeguarding human dignity in the European audiovisual sector

    NARCIS (Netherlands)

    McGonagle, T.

    2007-01-01

    Audiovisual media services fall within the scope of many different international and national legal instruments, as well as of best practices and standards developed by case law. These rules often target a much wider spectrum of activities; only some of them specifically address the media. Those,

  2. La presencia del narratario en el relato audiovisual

    OpenAIRE

    Prósper Ribes, Josep

    2015-01-01

    The narratee is a basic element of the audiovisual narrative, closely related to the narrator. There is always a narratee instance, and a narrative may also contain delegated narratees. The narratee is fundamental to configuring the narration. The

  3. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
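The quadratic mutual information estimate described in this abstract has a closed form under Gaussian Parzen windows (the standard information-theoretic-learning formulation). A minimal sketch for 1-D audio and visual feature tracks follows; for brevity it uses a fixed kernel bandwidth rather than the adaptive bandwidth the paper describes, and the signals are made up:

```python
import numpy as np

def gauss(d, sigma):
    # Gaussian kernel on pairwise differences; variance is 2*sigma**2 because
    # convolving two Parzen kernels of width sigma doubles the variance.
    return np.exp(-d**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)

def quadratic_mi(x, y, sigma=0.5):
    """Euclidean-distance quadratic mutual information between 1-D samples:
    QMI = V_J - 2*V_C + V_M, the squared L2 distance between the Parzen
    estimates of the joint density and the product of the marginals."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Kx = gauss(x[:, None] - x[None, :], sigma)
    Ky = gauss(y[:, None] - y[None, :], sigma)
    v_j = (Kx * Ky).mean()                                # joint term
    v_m = Kx.mean() * Ky.mean()                           # marginal term
    v_c = (Kx.mean(axis=1) * Ky.mean(axis=1)).mean()      # cross term
    return v_j - 2 * v_c + v_m

# Toy feature tracks: a mouth-region feature that follows the audio envelope
# (dependent) versus an unrelated background feature (independent).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
audio = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(200)
lips = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(200)
background = rng.standard_normal(200)
print(quadratic_mi(audio, lips), quadratic_mi(audio, background))
```

Because the estimator is an exact squared distance between two density estimates, it is non-negative, and correlated audiovisual features score higher than unrelated ones — the property the segmentation exploits.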

  4. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  5. Use of Audiovisual Media and Equipment by Medical Educationists ...

    African Journals Online (AJOL)

    The most frequently used audiovisual medium and equipment is the transparency on an overhead projector (OHP), while the medium and equipment barely used for teaching is computer graphics on a multimedia projector. This study also suggests ways of improving teaching-learning processes in medical education, ...

  6. Audio-Visual Language--Verbal and Visual Codes.

    Science.gov (United States)

    Doelker, Christian

    1980-01-01

    Figurative (visual representation) and commentator (verbal representation) functions and their use in audiovisual media are discussed. Three categories each of visual and aural media are established: real images, artificial forms, and graphic signs; and sound effects, music, and the spoken language. (RAO)

  7. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  8. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    Science.gov (United States)

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  9. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  10. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  11. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...

  12. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

    The division between the two worlds (the developed and the developing world) has not spared the process of audiovisual archiving and the gap is widening bringing in a lot of challenges to Africa as part of the developing world. While the developed world is today concerned about digital technology and web-based ...

  13. History and audiovisual narratives: on fact and on fiction

    Directory of Open Access Journals (Sweden)

    Marcius Freire

    2013-09-01

    Full Text Available The paper intends to relate two different fields: audiovisual discourses (on cinema and on television) and the historical discourse as a way of articulating social imaginaries. It aims, therefore, to make a comparative analysis of diverse indicial narratives in order to establish their repetitions and variations concerning the way of recalling and reconstructing memory sites through the representation of historical facts.

  14. Audiovisual Integration in Noise by Children and Adults

    Science.gov (United States)

    Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila G.; Innes-Brown, Hamish; Shivdasani, Mohit N.; Paolini, Antonio G.

    2010-01-01

    The aim of this study was to investigate the development of multisensory facilitation in primary school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds, and adults during auditory, visual, and audiovisual detection tasks. Auditory signal-to-noise ratios (SNRs) of 30-,…

  15. Developing a typology of humor in audiovisual media

    NARCIS (Netherlands)

    Buijzen, M.A.; Valkenburg, P.M.

    2004-01-01

    The main aim of this study was to develop and investigate a typology of humor in audiovisual media. We identified 41 humor techniques, drawing on Berger's (1976, 1993) typology of humor in narratives, audience research on humor preferences, and an inductive analysis of humorous commercials. We

  16. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2017-01-05

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset is introduced that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.

  17. Context-specific effects of musical expertise on audiovisual integration

    Directory of Open Access Journals (Sweden)

    Laura eBishop

    2014-10-01

    Full Text Available Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronisation. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts, or only in those using sound-producing gestures that are within an observer's own motor repertoire, is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly skilled clarinettists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronised. The range of asynchronies most often endorsed as synchronised was assessed as a measure of participants' sensitivity to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed also has a substantial effect on how readily asynchrony is detected.

  18. The Elicitation of Audiovisual Steady-State Responses: Multi-Sensory Signal Congruity and Phase Effects

    Science.gov (United States)

    Rhone, Ariane E.; Idsardi, William J.; Simon, Jonathan Z.; Poeppel, David

    2013-01-01

    Most ecologically natural sensory inputs are not limited to a single modality. While it is possible to use real ecological materials as experimental stimuli to investigate the neural basis of multi-sensory experience, parametric control of such tokens is limited. By using artificial bimodal stimuli composed of approximations to ecological signals, we aim to observe the interactions between putatively relevant stimulus attributes. Here we use MEG as an electrophysiological tool and employ as a measure the steady-state response (SSR), an experimental paradigm typically applied to unimodal signals. In this experiment we quantify the responses to a bimodal audio-visual signal with different degrees of temporal (phase) congruity, focusing on stimulus properties critical to audiovisual speech. An amplitude modulated auditory signal (‘pseudo-speech’) is paired with a radius-modulated ellipse (‘pseudo-mouth’), with the envelope of low-frequency modulations occurring in phase or at offset phase values across modalities. We observe (i) that it is possible to elicit an SSR to bimodal signals; (ii) that bimodal signals exhibit greater response power than unimodal signals; and (iii) that the SSR power at specific harmonics and sensors differentially reflects the congruity between signal components. Importantly, we argue that effects found at the modulation frequency and second harmonic reflect differential aspects of neural coding of multisensory signals. The experimental paradigm facilitates a quantitative characterization of properties of multi-sensory speech and other bimodal computations. PMID:21380858
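The logic of measuring steady-state response power at the modulation frequency can be sketched on a synthetic amplitude-modulated signal (the rates below are illustrative, not the study's stimulus parameters): after demodulation, the spectrum peaks at the envelope rate, which is where an SSR analysis would look for congruity effects.

```python
import numpy as np

fs = 1000                      # sampling rate in Hz (illustrative)
dur = 2.0                      # seconds
t = np.arange(int(fs * dur)) / fs
f_mod, f_carrier = 3.0, 200.0  # hypothetical envelope and carrier rates

# Amplitude-modulated 'pseudo-speech': a carrier whose envelope oscillates
# at f_mod (the envelope stays positive, as in sinusoidal AM).
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * f_mod * t)
signal = envelope * np.sin(2 * np.pi * f_carrier * t)

# A steady-state response tracks the stimulation rate, so its spectrum shows
# peaks at the modulation frequency and its harmonics. Here we rectify to
# recover the envelope and locate the strongest non-DC spectral component.
rectified = np.abs(signal)
spectrum = np.abs(np.fft.rfft(rectified - rectified.mean()))
freqs = np.fft.rfftfreq(len(rectified), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"envelope peak at {peak:.1f} Hz")
```

With a 2 s window the frequency resolution is 0.5 Hz, so the 3 Hz envelope component falls exactly on a bin; phase offsets between two such modulated channels would show up in the cross-spectral phase at f_mod rather than in this power measure.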

  19. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    NARCIS (Netherlands)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal

  20. Sistema audiovisual para reconocimiento de comandos Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos

    2011-08-01

    Full Text Available We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. The audio signal was represented by Mel cepstral coefficients and their first and second order time derivatives. To characterize the video signal, a set of high-level visual features was tracked automatically throughout the sequences. Automatic initialization of the algorithm was performed using color transformations and active contour models based on Gradient Vector Flow ("GVF snakes") on the lip region, whereas visual tracking used similarity measures across neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of the automatic speech recognition system using audio information only (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then present the designs of the systems using video features only (VSR) and combined audio and video features (AVSR). Finally, we compare the results of the three systems on our own database in Spanish and French and examine the influence of acoustic noise, showing that the AVSR system is more robust than ASR and VSR.
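The audio front end described in this record (cepstral coefficients plus first and second temporal derivatives) is conventionally built with the delta-regression formula. A minimal sketch, using randomly generated stand-in coefficients since the actual MFCC extraction and database are not reproduced here:

```python
import numpy as np

def deltas(feats, width=2):
    """First-order delta (time-derivative) features for a (frames x coeffs)
    matrix of cepstral coefficients, via the standard regression formula over
    +-width neighboring frames; edges are handled by repeating the end frames."""
    feats = np.asarray(feats, float)
    n = len(feats)
    padded = np.pad(feats, ((width, width), (0, 0)), mode="edge")
    num = sum(k * (padded[width + k : n + width + k]
                   - padded[width - k : n + width - k])
              for k in range(1, width + 1))
    return num / (2 * sum(k * k for k in range(1, width + 1)))

# Stand-in 13-coefficient cepstral matrix (100 frames); a system like the one
# described would stack [coefficients, delta, delta-delta] per frame before
# feeding the HMMs.
mfcc = np.random.default_rng(0).standard_normal((100, 13))
d1 = deltas(mfcc)
d2 = deltas(d1)
audio_features = np.hstack([mfcc, d1, d2])
print(audio_features.shape)   # (100, 39)
```

On a linearly increasing coefficient track the interior delta values equal the per-frame slope, which is a quick sanity check for the regression weights.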

  1. A Model for Producing and Sharing Instructional Materials in Veterinary Medicine. Final Report.

    Science.gov (United States)

    Ward, Billy C.; Niec, Alphonsus P.

    This report describes a study of factors which appear to influence the "shareability" of audiovisual materials in the field of veterinary medicine. Specific factors addressed are content quality, instructional effectiveness, technical quality, institutional support, organization, logistics, and personal attitudes toward audiovisuals. (Author/CO)

  2. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    Science.gov (United States)

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately
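The just noticeable difference reported for the temporal order judgment task is conventionally computed as half the 25%-75% interquartile range of the psychometric function relating SOA to the proportion of "visual first" reports. A sketch on hypothetical (made-up) data, using linear interpolation rather than a fitted sigmoid for brevity:

```python
import numpy as np

# Hypothetical temporal-order-judgment data: negative SOA means the auditory
# stimulus led; p_visual_first is the proportion of "visual first" reports.
soa = np.array([-200, -100, -50, -10, 0, 10, 50, 100, 200], float)  # ms
p_visual_first = np.array([0.05, 0.15, 0.30, 0.45, 0.52, 0.60, 0.75, 0.88, 0.97])

def jnd_from_psychometric(soa, p):
    """JND as half the SOA range between the 25% and 75% points of a
    monotonically increasing psychometric function (linear interpolation)."""
    soa25 = np.interp(0.25, p, soa)   # np.interp requires p to be increasing
    soa75 = np.interp(0.75, p, soa)
    return (soa75 - soa25) / 2.0

print(f"JND = {jnd_from_psychometric(soa, p_visual_first):.1f} ms")
```

In practice one would fit a cumulative Gaussian or logistic to binomial response counts before reading off the quartiles; the interpolation above only illustrates the definition of the measure.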

  3. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are presented in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after being trained to detect and saccade toward visual targets presented in spatiotemporal proximity to auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced “online” effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can

  4. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation

    Directory of Open Access Journals (Sweden)

    Elisa Magosso

    2017-12-01

    Full Text Available Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are presented in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after being trained to detect and saccade toward visual targets presented in spatiotemporal proximity to auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced “online” effects. Here, we extend the previous network to shed light on circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes condition) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to the SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this
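The Hebbian strengthening of the retina-SC route during audiovisual training can be caricatured in a few lines (a toy model with made-up sizes, rates, and learning rule, not the authors' network): a spatially coincident sound adds drive to SC units, so the Hebbian rule potentiates the retinal weights more under audiovisual training than under visual-only training.

```python
import numpy as np

n_retina, n_sc = 20, 20
lr, decay = 0.05, 0.001   # illustrative learning rate and weight decay

def train(n_trials, audiovisual=True, seed=1):
    """Train toy retina->SC weights; the same seed gives both conditions
    identical visual input streams, so only the auditory drive differs."""
    rng = np.random.default_rng(seed)
    W = 0.1 * np.random.default_rng(0).random((n_sc, n_retina))  # shared init
    for _ in range(n_trials):
        retina = (rng.random(n_retina) < 0.3).astype(float)  # sparse visual input
        auditory = 1.0 if audiovisual else 0.0               # coincident sound
        sc = np.tanh(W @ retina + auditory)                  # multisensory response
        W = (1.0 - decay) * W + lr * np.outer(sc, retina)    # Hebb + mild decay
    return W

W_av = train(100, audiovisual=True)    # audiovisual training
W_v = train(100, audiovisual=False)    # visual-only training
print(W_av.mean() > W_v.mean())        # AV training strengthens the route more
```

Because the auditory term only ever increases SC activity for the same inputs, the audiovisual weights dominate the visual-only weights elementwise, mirroring (in caricature) the SC multisensory enhancement the model relies on.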

  5. Audiovisual Market Place 1970. A Multimedia Guide.

    Science.gov (United States)

    1970

    Both hardware manufacturers and software producers/distributors are arranged alphabetically by firm name and in indexes classified by product line. Entries indicate names of key personnel, addresses, phone numbers, types of material or equipment offered, and availability of printed instructions or other materials supplied with the product. Also…

  6. Audiovisual Market Place 1971. A Multimedia Guide.

    Science.gov (United States)

    1971

    Both hardware manufacturers and software producers/distributors are arranged alphabetically by firm name and in indexes classified by product line. Entries indicate names of key personnel, addresses, phone numbers, types of material or equipment offered, and availability of printed instructions or other materials supplied with the product. Also…

  7. A influência do ambiente audiovisual na legendação de filmes

    Directory of Open Access Journals (Sweden)

    Antonia Célia Ribeiro Nobre

    2002-01-01

    Full Text Available This article shows how subtitling is influenced by many factors within the audiovisual environment, due primarily to the audiovisual communicative function and semiotic composition; the mechanics of subtitling; and the views and behavior of the people involved in audiovisual production, translation and distribution, as well as the critics and the public.

  8. AUDIOVISUAL JOURNALISM: FROM THE TV SCREEN TO OTHER SCREENS

    Directory of Open Access Journals (Sweden)

    Mayra Fernanda Ferreira

    2013-06-01

    Full Text Available This article is based on research which has been developed in partnership with Unesp TV, a university TV broadcast station of the Universidade Estadual Paulista Julio de Mesquita Filho, Bauru campus/SP. The study aims to identify convergent and divergent aspects in the design of audiovisual journalistic content for TV and other media such as the internet and mobile communication systems. The results presented here are the considerations obtained from the first stage of the research. In this phase, the basic steps which should guide the design of the content to feed broadcasting time are outlined, as well as the online audiovisual news broadcast and business management of a TV station, compared to the model which has been followed by internet TV broadcasters.

  9. Audiovisual translation research in Brazil and in Europe

    Directory of Open Access Journals (Sweden)

    Lina Alvarenga

    2002-01-01

    Full Text Available This article presents contributions from three translation scholars aimed at discussing the present situation of audiovisual translation research in Brazil and in Europe. The first contribution deals with issues concerning both contexts, whereas the other two focus on local research issues.

  10. JORNALISMO AUDIOVISUAL: DA TELA DA TV PARA OUTRAS TELAS

    Directory of Open Access Journals (Sweden)

    Mayra Ferreira

    2012-12-01

    Full Text Available This paper is the product of ongoing research in partnership with TV Unesp, a university broadcaster linked to the Universidade Estadual Paulista Júlio de Mesquita, Bauru campus/SP, which aims to identify the convergent and divergent points in the production of informative audiovisual content for broadcast TV and other digital and mobile platforms. The text concludes the first stage of the research, which sought to identify the basic assumptions that should guide the content to be produced and aired by the broadcaster over the course of the research. The characteristics of online audiovisual journalism and the business model of broadcast TV were analyzed in contrast with the business model found on the Internet.

  11. The attentional window modulates capture by audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available Visual search is markedly improved when a target color change is synchronized with a spatially non-informative auditory signal. This "pip and pop" effect is an automatic process, as even a distractor captures attention when accompanied by a tone. Previous studies investigating visual attention have indicated that automatic capture is susceptible to the size of the attentional window. The present study investigated whether the pip and pop effect is modulated by the extent to which participants divide their attention across the visual field. We show that participants were better at detecting a synchronized audiovisual event when they divided their attention across the visual field relative to a condition in which they focused their attention. We argue that audiovisual capture is reduced under focused conditions relative to distributed settings.

  12. Jornalismo audiovisual: da tela da TV para outras telas

    Directory of Open Access Journals (Sweden)

    Francisco Machado Filho

    2012-12-01

    Full Text Available This paper is the product of ongoing research in partnership with TV Unesp, a university broadcaster linked to the Universidade Estadual Paulista Júlio de Mesquita, Bauru campus/SP, which aims to identify the convergent and divergent points in the production of informative audiovisual content for broadcast TV and other digital and mobile platforms. The text concludes the first stage of the research, which sought to identify the basic assumptions that should guide the content to be produced and aired by the broadcaster over the course of the research. The characteristics of online audiovisual journalism and the business model of broadcast TV were analyzed in contrast with the business model found on the Internet.

  13. Audiovisual journalism: from the TV screen to other screens

    Directory of Open Access Journals (Sweden)

    Francisco Machado Filho

    2012-12-01

    Full Text Available This article is based on research which has been developed in partnership with Unesp TV, a university TV broadcast station of the Universidade Estadual Paulista Julio de Mesquita Filho, Bauru campus/SP. The study aims to identify convergent and divergent aspects in the design of audiovisual journalistic content for TV and other media such as the internet and mobile communication systems. The results presented here are the considerations obtained from the first stage of the research. In this phase, the basic steps which should guide the design of the content to feed broadcasting time are outlined, as well as the online audiovisual news broadcast and business management of a TV station, compared to the model which has been followed by internet TV broadcasters.

  14. Health Education Audiovisual Media on Mental Illness for Family

    OpenAIRE

    Wahyuningsih, Dyah; Wiyati, Ruti; Subagyo, Widyo

    2012-01-01

    This study aimed to produce health education media in the form of Video Compact Discs (VCDs). The first disc presents how to care for a patient with social isolation, and the second presents how to care for a patient with violent behaviour. The audiovisual media were delivered to families in the psychiatric ward of Banyumas Hospital. The families were divided into two groups: the first group was given health education about social isolation and the second group was given healt...

  15. Audiovisual journalism: from the TV screen to other screens

    OpenAIRE

    Mayra Fernanda Ferreira; Francisco Machado Filho

    2012-01-01

    This article is based on research which has been developed in partnership with Unesp TV, a university TV broadcast station of the Universidade Estadual Paulista Julio de Mesquita Filho, Bauru campus/SP. The study aims to identify convergent and divergent aspects in the design of audiovisual journalistic content for TV and other media such as the internet and mobile communication systems. The results presented here are the considerations obtained from the first stage of the research. In this p...

  16. Neural development of networks for audiovisual speech comprehension.

    Science.gov (United States)

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L

    2010-08-01

Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while 8- to 11-year-old children and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech. 2009 Elsevier Inc. All rights reserved.

  17. Neural correlates of quality during perception of audiovisual stimuli

    CERN Document Server

    Arndt, Sebastian

    2016-01-01

This book presents a new approach to examining perceived quality of audiovisual sequences. It uses electroencephalography to understand how exactly user quality judgments are formed within a test participant, and what might be the physiologically-based implications of being exposed to lower quality media. The book redefines experimental paradigms for using EEG in the area of quality assessment so that they better suit the requirements of standard subjective quality testing. Therefore, experimental protocols and stimuli are adjusted accordingly.

  18. The Digital Turn in the French Audiovisual Model

    OpenAIRE

    Olivier Alexandre

    2016-01-01

    This article deals with the digital turn in the French audiovisual model. An organizational and legal system has evolved with changing technology and economic forces over the past thirty years. The high-income television industry served as the key element during the 1980s to compensate for a shifting value economy from movie theaters to domestic screens and personal devices. However, the growing competition in the TV sector and the rise of tech companies have initiated a disruption process. A...

  19. Digital preservation of audiovisual files within PrestoPRIME

    OpenAIRE

    Addis, Matthew; Allasia, Walter; Bailer, Werner; Boch, Laurent; Gallo, Francesco; Schallauer, Peter; Phillips, Stephen

    2011-01-01

    PrestoPRIME is a European project aiming at digital preservation of audiovisual files. In our scenarios we can observe a combination of lifecycles related to content, systems, services, and organisations involved both in the media and technical businesses. Taking as reference the OAIS model, we followed the principle of keeping everything under control, from all perspectives: configuration, resource management, data integrity, content quality and metadata handling. As a result, in the future ...

  20. Non-standard photography methods in audiovisual journalism

    OpenAIRE

    Géla, František

    2015-01-01

The aim of the diploma thesis "Non-standard photography methods in audiovisual journalism" is to present image and production methods used in television news and journalism, particularly those that defy standard methods. Given technological development and the endeavour to make the otherwise neutral visual space of television journalism more attractive, such methods are appearing more often. The first chapter of the thesis presents television news, its history, characteristics, elements and typology of...

  1. Montagem e remontagem na produção audiovisual de Guel Arraes

    Directory of Open Access Journals (Sweden)

    Yvana Fechine

    2008-11-01

Full Text Available As minisséries O auto da Compadecida (1999) e A invenção do Brasil (2000), produzidas por Guel Arraes para a TV e, posteriormente, reeditadas e distribuídas como filme um ano depois de sua exibição pela Rede Globo, inauguraram uma nova lógica de produção no mercado audiovisual brasileiro. A transformação dessas minisséries em filmes não pode, no entanto, ser pensada como adaptação, pois o que temos, a partir do mesmo material anteriormente gravado, é um processo de "remontagem". Haveria, então, um tipo de montagem inerente a tais produtos audiovisuais, pensados, já na origem, para o trânsito entre meios? O presente artigo discute a questão e propõe que o caminho encontrado por Guel Arraes para obter esses resultados do tipo "dois em um", que "funcionam" tanto como programa de TV, quanto como filme, foi o apelo ao que descreveremos como "montagem em módulos". Palavras-chave: televisão, filme, montagem. Abstract: The TV series O auto da Compadecida (1999) and A invenção do Brasil (2000), produced by Guel Arraes, and reedited and distributed as films one year after their exhibition on Rede Globo, introduced a new logic of production into the Brazilian audiovisual market. The transformation of these series into films cannot, however, be seen as an adaptation, even though the same recorded material is used in the process of re-editing. Would there, therefore, be a type of editing (montage) inherent to these audiovisual products destined to transit between the two kinds of media, television and film? The present article discusses the issue and proposes that the way Guel Arraes found to reach these "two-in-one" results, which "work" equally well as a television program and as a film, was what we can describe as editing (montage) in modules. Key words: television, film, editing

  2. Audiovisual temporal fusion in 6-month-old infants

    Directory of Open Access Journals (Sweden)

    Franziska Kopp

    2014-07-01

Full Text Available The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.

  3. Visual Target Localization, the Effect of Allocentric Audiovisual Reference Frame

    Directory of Open Access Journals (Sweden)

    David Hartnagel

    2011-10-01

Full Text Available Visual allocentric reference frames (contextual cues) affect visual space perception (Diedrichsen et al., 2004; Walter et al., 2006). On the other hand, experiments have shown a change of visual perception induced by binaural stimuli (Chandler, 1961; Carlile et al., 2001). In the present study we investigated the effect of visual and audiovisual allocentric reference frames on visual localization and straight-ahead pointing. Participants faced a black part-spherical screen (92 cm radius). The head was maintained aligned with the body. Participants wore headphones and a glove with motion capture markers. A red laser point displayed straight ahead served as the fixation point. The visual target was a 100 ms green laser point. After a short delay, the green laser reappeared and participants had to localize the target with a trackball. Straight-ahead blind pointing was required before and after each series of 48 trials. The visual part of the bimodal allocentric reference frame was provided by a vertical red laser line (15° left or 15° right); the auditory part was provided by 3D sound. Five conditions were tested: no reference, visual reference (left/right), and audiovisual reference (left/right). Results show that the effect of the bimodal audiovisual reference is significant but does not differ from that of the visual reference alone.

  4. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
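The cue-combination idea behind these simulations can be illustrated with a toy model. The sketch below is not the authors' fitted GMM; it uses hypothetical category statistics to show the core mechanism: multiplying per-cue Gaussian likelihoods so that the more reliable (lower-variance) cue dominates the percept, as in a mismatched (McGurk-like) token.

```python
import numpy as np

def gauss(x, mu, sigma):
    # Likelihood of cue value x under a Gaussian category distribution
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical category statistics (mean, SD) along an auditory and a
# visual cue dimension; illustrative values, not fitted parameters.
cats = {
    "b": {"aud": (0.0, 0.4), "vis": (0.0, 0.15)},
    "d": {"aud": (1.0, 0.4), "vis": (1.0, 0.15)},
}

def posterior(aud_x, vis_x):
    # Multiply per-cue likelihoods (equal priors): the cue with the
    # smaller SD is more reliable and so carries more weight.
    scores = {c: gauss(aud_x, *p["aud"]) * gauss(vis_x, *p["vis"])
              for c, p in cats.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# A mismatched token: auditory cue near /b/, visual cue near /d/.
post = posterior(0.2, 0.9)
print(max(post, key=post.get))  # the sharper visual cue dominates
```

In a full GMM simulation the means and variances would themselves be learned from the distribution of tokens, which is what lets the model trace a developmental trajectory of cue weighting.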

  5. Temporal adaptation to audiovisual asynchrony generalizes across different sound frequencies

    Directory of Open Access Journals (Sweden)

    Jordi eNavarra

    2012-05-01

Full Text Available The human brain exhibits a highly-adaptive ability to reduce natural asynchronies between visual and auditory signals. Even though this mechanism robustly modulates the subsequent perception of sounds and visual stimuli, it is still unclear how such a temporal realignment is attained. In the present study, we investigated whether or not temporal adaptation generalizes across different sound frequencies. In a first exposure phase, participants adapted to a fixed 220-ms audiovisual asynchrony or else to synchrony for 3 min. In a second phase, the participants performed simultaneity judgments (SJs) regarding pairs of audiovisual stimuli that were presented at different stimulus onset asynchronies (SOAs) and included either the same tone as in the exposure phase (a 250 Hz beep), another low-pitched beep (300 Hz), or a high-pitched beep (2500 Hz). Temporal realignment was always observed (when comparing SJ performance after exposure to asynchrony vs. synchrony), regardless of the frequency of the sound tested. This suggests that temporal recalibration influences the audiovisual perception of sounds in a frequency non-specific manner and may imply the participation of non-primary perceptual areas of the brain that are not constrained by certain physical features such as sound frequency.
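The realignment such studies measure is usually read off the simultaneity-judgment curve. A minimal sketch (with made-up response proportions, not the study's data) fits a unit-amplitude Gaussian to the proportion of "simultaneous" responses across SOAs by grid search; the fitted mean is the point of subjective simultaneity (PSS), and a PSS shift between exposure conditions quantifies temporal recalibration:

```python
import numpy as np

# Hypothetical SJ data: proportion of "simultaneous" responses at each
# audiovisual SOA (ms; negative = sound first), after asynchrony exposure.
soas = np.array([-300, -200, -100, 0, 100, 200, 300])
p_sim = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.50, 0.15])

def sse(mu, sigma):
    # Squared error of a unit-amplitude Gaussian SJ curve against the data
    pred = np.exp(-(soas - mu) ** 2 / (2 * sigma ** 2))
    return float(np.sum((pred - p_sim) ** 2))

# Grid search for the best-fitting mean (the PSS) and width.
best_mu, best_sigma = min(
    ((mu, sigma) for mu in range(-100, 101) for sigma in range(50, 301, 5)),
    key=lambda ms: sse(*ms),
)
print(best_mu)  # PSS in ms; positive here because the curve leans
                # towards vision-leading SOAs
```

In practice a maximum-likelihood psychometric fit would replace the grid search, but the estimated quantity is the same.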

  6. Event Related Potentials Index Rapid Recalibration to Audiovisual Temporal Asynchrony

    Science.gov (United States)

    Simon, David M.; Noel, Jean-Paul; Wallace, Mark T.

    2017-01-01

Asynchronous arrival of multisensory information at the periphery is a ubiquitous property of signals in the natural environment due to differences in the propagation time of light and sound. Rapid adaptation to these asynchronies is crucial for the appropriate integration of these multisensory signals, which in turn is a fundamental neurobiological process in creating a coherent perceptual representation of our dynamic world. Indeed, multisensory temporal recalibration has been shown to occur at the single trial level, yet the mechanistic basis of this rapid adaptation is unknown. Here, we investigated the neural basis of rapid recalibration to audiovisual temporal asynchrony in human participants using a combination of psychophysics and electroencephalography (EEG). Consistent with previous reports, participants' perception of audiovisual temporal synchrony on a given trial (t) was influenced by the temporal structure of stimuli on the previous trial (t−1). When examined physiologically, event related potentials (ERPs) were found to be modulated by the temporal structure of the previous trial, manifesting as late differences (>125 ms post second-stimulus onset) in central and parietal positivity on trials with large stimulus onset asynchronies (SOAs). These findings indicate that single trial adaptation to audiovisual temporal asynchrony is reflected in modulations of late evoked components that have previously been linked to stimulus evaluation and decision-making. PMID:28381993

  7. Estrategias de comprensión audiovisual para estudiantes sinohablantes

    Directory of Open Access Journals (Sweden)

    María Isabel Gibert Escofet

    2015-10-01

Full Text Available This study starts from two basic references for the teaching of Spanish as a foreign language (E/LE): the Common European Framework of Reference for Languages (Council of Europe, 2001) and the Plan Curricular del Instituto Cervantes (Instituto Cervantes, 2006). The project presented here responds to the need to remedy the listening comprehension problems of Chinese learners. Since these target groups are unaccustomed to learning with strategies other than those involved in memorization, we start from the hypothesis that developing audiovisual comprehension strategies and cooperative work would make them, on the one hand, more competent in listening and audiovisual comprehension and, on the other, more accepting of, and better able to internalize, a more communicative learning methodology. The main objective, then, is to develop these audiovisual comprehension strategies through cooperative work and thus help students take responsibility for their own learning.

  8. Musical expertise induces audiovisual integration of abstract congruency rules.

    Science.gov (United States)

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo

    2012-12-12

Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is not generated due to incongruency of the unisensory physical characteristics of the stimulation but from the violation of an abstract congruency rule. The chosen rule ("the higher the pitch of the tone, the higher the position of the circle") was comparable to musical reading. In parallel, plasticity effects due to long-term musical training on this response were investigated by comparing musicians to non-musicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses.

  9. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Human infants orient to biological motion rather than audiovisual synchrony.

    Science.gov (United States)

    Falck-Ytter, Terje; Bakker, Marta; von Hofsten, Claes

    2011-06-01

    Both orienting to audiovisual synchrony and to biological motion are adaptive responses. The ability to integrate correlated information from multiple senses reduces processing load and underlies the perception of a multimodal and unified world. Perceiving biological motion facilitates filial attachment and detection of predators/prey. In the literature, these mechanisms are discussed in isolation. In this eye-tracking study, we tested their relative strengths in young human infants. We showed five-month-old infants point-light animation pairs of human motion, accompanied by a soundtrack. We found that audiovisual synchrony was a strong determinant of attention when it was embedded in biological motion (two upright animations). However, when biological motion was shown together with distorted biological motion (upright animation and inverted animation, respectively), infants looked at the upright animation and disregarded audiovisual synchrony. Thus, infants oriented to biological motion rather than multimodally unified physical events. These findings have important implications for understanding the developmental trajectory of brain specialization in early human infancy. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Audiovisual integration of speech in a patient with Broca's Aphasia.

    Science.gov (United States)

    Andersen, Tobias S; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia.

  12. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  13. Value of audiovisual records in intercultural education/Valor de los registros audiovisuales en educacion intercultural

    National Research Council Canada - National Science Library

    Bautista, Antonio; Rayon, Laura; de las Heras, Ana

    2012-01-01

... and, having proposed some solutions drawn from audiovisual anthropology, we analyze the nature of some intercultural education situations recorded in the two schools (ethnographies) that...

  14. Effectiveness of audiovisual distraction with computerized delivery of anesthesia during the placement of stainless steel crowns in children with Down syndrome

    OpenAIRE

    Fakhruddin, Kausar Sadia; El Batawi, Hisham; Gorduysus, M. O.

    2017-01-01

Objective: To assess the effectiveness of audiovisual (AV) distraction, with and without video eyewear, and of a computerized delivery system-intrasulcular (CDS-IS) for local anesthesia during the placement of stainless steel crowns for the management of pathological tooth grinding in children with Down syndrome. Materials and Methods: This clinical study included 22 children (13 boys and 9 girls), with a mean age of 7.1 years. The study involved three sessions 1 week apart. During Session I, dental prophylaxi...

  15. Artivismo, activismo y sin autoría audiovisual : el caso del colectivo Cine sin Autor (CsA) / Artivism, activism and audiovisual authorship: the case of Cine sin Autor (CsA)

    OpenAIRE

    Sedeño Valdellós, Ana

    2017-01-01

Without authorship is an audiovisual production process that problematizes some of the canonical or fixed ideas about the transfer of authorship and authority from one individual to the common people in audiovisual production. In previous works we have already highlighted some of its characteristics and analyzed works and practices from a collective (Cine sin Autor, or CsA) that we termed "audiovisual without authorship", an alternative for an audiovisual language and with a p...

  16. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  17. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

Our (co-written with Thomas van den Berg) media-rich, open-access Scalar e-book on the Audiovisual Essay practice is available online: http://scalar.usc.edu/works/film-studies-in-motion Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere

  18. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  19. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide

  20. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    Science.gov (United States)

    Tse, Regina; Martin, Darren; McLean, Lisa; Cho, Gwi; Hill, Robin; Pickard, Sheila; Aston, Paul; Huang, Chen‐Yu; Makhija, Kuldeep; O'Brien, Ricky; Keall, Paul

    2015-01-01

    Summary This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed. PMID:26247520

  1. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  2. Beta-Band Functional Connectivity Influences Audiovisual Integration in Older Age: An EEG Study

    Directory of Open Access Journals (Sweden)

    Luyao Wang

    2017-08-01

Full Text Available Audiovisual integration occurs frequently and has been shown to exhibit age-related differences via behavior experiments or time-frequency analyses. In the present study, we examined whether functional connectivity influences audiovisual integration during normal aging. Visual, auditory, and audiovisual stimuli were randomly presented peripherally; during this time, participants were asked to respond immediately to the target stimulus. Electroencephalography recordings captured visual, auditory, and audiovisual processing in 12 old (60–78 years) and 12 young (22–28 years) male adults. For non-target stimuli, we focused on the alpha (8–13 Hz), beta (13–30 Hz), and gamma (30–50 Hz) bands. We applied the Phase Lag Index to study the dynamics of functional connectivity. Then, the network topology parameters, which included the clustering coefficient, path length, small-worldness, global efficiency, local efficiency and degree, were calculated for each condition. For the target stimulus, a race model was used to analyze the response time. Then, a Pearson correlation was used to test the relationship between each network topology parameter and response time. The results showed that old adults activated stronger connections during audiovisual processing in the beta band. The relationship between network topology parameters and the performance of audiovisual integration was detected only in old adults. Thus, we concluded that old adults, who bear a higher load during audiovisual integration, need more cognitive resources. Furthermore, increased beta-band functional connectivity influences the performance of audiovisual integration during normal aging.
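The Phase Lag Index (PLI) used in the connectivity analysis has a compact definition: the absolute value of the mean sign of the phase difference between two signals, so it is 1 when one signal consistently leads the other and near 0 when the lag has no consistent sign. A minimal sketch on synthetic phase series (illustrative, not EEG data):

```python
import numpy as np

def phase_lag_index(phi_a, phi_b):
    # PLI = |mean sign of the (wrapped) phase difference|; using
    # sign(sin(dphi)) handles phase wrapping around 2*pi.
    return float(np.abs(np.mean(np.sign(np.sin(phi_a - phi_b)))))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
phi = 2 * np.pi * 20 * t  # instantaneous phase of a 20 Hz (beta-band) oscillation
lagged = phi - 0.4 + 0.05 * rng.standard_normal(t.size)  # consistent lag + noise
unrelated = rng.uniform(0, 2 * np.pi, t.size)            # no phase relation

print(phase_lag_index(phi, lagged))     # near 1: consistent phase lead
print(phase_lag_index(phi, unrelated))  # near 0: no consistent lag
```

Pairwise PLI values across electrodes give the adjacency matrix from which graph measures such as clustering coefficient and path length are then computed.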

  3. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure...

  4. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  5. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  6. Audiovisual perception of natural speech is impaired in adult dyslexics: an ERP study.

    Science.gov (United States)

    Rüsseler, J; Gerth, I; Heldmann, M; Münte, T F

    2015-02-26

    The present study used event-related brain potentials (ERPs) to investigate audiovisual integration processes in the perception of natural speech in a group of German adult developmental dyslexic readers. Twelve dyslexic and twelve non-dyslexic adults viewed short videos of a male German speaker. Disyllabic German nouns served as stimulus material. The auditory and the visual stimulus streams were segregated to create four conditions: in the congruent condition, the auditory and the visual word were identical. In the incongruent condition, the auditory and the visual word (i.e., the lip movements of the utterance) were different. Furthermore, on half of the trials, white noise (45 dB SPL) was superimposed on the auditory trace. Subjects had to say aloud the word they understood after they viewed the video. Behaviorally, dyslexic readers committed more errors than normal readers in the noise conditions, and this effect was particularly present for congruent trials. ERPs showed a distinct N170 component at temporo-parietal electrodes that was smaller in amplitude for dyslexic readers. Both normal and dyslexic readers showed a clear effect of noise at centro-parietal electrodes between 300 and 600 ms. An analysis of error trials reflecting audiovisual integration (verbal responses in the incongruent noise condition that are a mix of the visual and the auditory word) revealed more positive ERPs for dyslexic readers at temporo-parietal electrodes 200-500 ms poststimulus. For normal readers, no such effect was present. These findings are discussed as reflecting increased effort in dyslexics under circumstances of distorted acoustic input. The superimposition of noise leads dyslexics to rely more on the integration of auditory and visual input (lip reading). Furthermore, the smaller N170 amplitudes indicate deficits in the processing of moving faces in dyslexic adults.

  7. From digital and audiovisual competence to media competence: dimensions and indicators/De la competencia digital y audiovisual a la competencia mediatica: dimensiones e indicadores

    National Research Council Canada - National Science Library

    Perez, Ma Amor; Delgado, Agueda

    2012-01-01

    The need to conceptualize media competence leads to a broader perspective in which aspects linked to audiovisual competence and digital competence converge...

  8. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not at 11 months, detected the audiovisual congruence of non-native syllables. Familiarization with incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not for nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  9. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

    http://dx.doi.org/10.5007/2175-7968.2012v1n29p145 Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to studying common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of the article addresses some implications for teaching audiovisual translation.

  10. Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony.

    Science.gov (United States)

    Petrini, Karin; Dahl, Sofia; Rocchesso, Davide; Waadeland, Carl Haakon; Avanzini, Federico; Puce, Aina; Pollick, Frank E

    2009-09-01

    We investigated the effect of musical expertise on sensitivity to asynchrony for drumming point-light displays, which varied in their physical characteristics (Experiment 1) or in their degree of audiovisual congruency (Experiment 2). In Experiment 1, 21 repetitions of three tempos × three accents × nine audiovisual delays were presented to four jazz drummers and four novices. In Experiment 2, ten repetitions of two audiovisual incongruency conditions × nine audiovisual delays were presented to 13 drummers and 13 novices. Participants gave forced-choice judgments of audiovisual synchrony. The results of Experiment 1 show an enhancement in experts' ability to detect asynchrony, especially for slower drumming tempos. In Experiment 2, an increase in sensitivity to asynchrony was found for incongruent stimuli; this increase, however, was attributable only to the novice group. Altogether the results indicated that through musical practice we learn to ignore variations in stimulus characteristics that otherwise would affect our multisensory integration processes.

  11. Bayesian Calibration of Simultaneity in Audiovisual Temporal Order Judgments

    Science.gov (United States)

    Yamamoto, Shinya; Miyazaki, Makoto; Iwano, Takayuki; Kitazawa, Shigeru

    2012-01-01

    After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when the lag adaptation was fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to “sound-first” for the pitch associated with sound-first stimuli, and to “light-first” for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to “light-first” for the pitch associated with sound-first stimuli, and to “sound-first” for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli. PMID:22792297

  12. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when the lag adaptation was fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
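
The point-of-simultaneity shifts described above are typically read off a psychometric function fitted to temporal order judgments. A minimal sketch of that step, assuming NumPy; the SOA/response data are hypothetical, and simple linear interpolation stands in for a full logistic fit:

```python
import numpy as np

def point_of_subjective_simultaneity(soas_ms, p_light_first):
    """Estimate the PSS as the SOA where P('light first') crosses 0.5,
    by linear interpolation between the two bracketing SOAs. Shifts of
    this point toward 'sound-first' or 'light-first' are the signature
    used to dissociate lag adaptation from Bayesian calibration.
    """
    soas = np.asarray(soas_ms, float)
    p = np.asarray(p_light_first, float)
    i = np.searchsorted(p, 0.5)   # assumes p increases monotonically
    x0, x1, y0, y1 = soas[i - 1], soas[i], p[i - 1], p[i]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

# Hypothetical data: negative SOA = sound first, positive = light first.
soas = [-100, -50, 0, 50, 100]
p = [0.05, 0.20, 0.40, 0.75, 0.95]
pss = point_of_subjective_simultaneity(soas, p)
print(pss)  # ≈ 14.3 ms, i.e., a shift toward "light-first"
```

A real analysis would fit a cumulative Gaussian or logistic to the judgment proportions and take its midpoint, but the interpolation above conveys the quantity being measured.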

  13. La regulación audiovisual: argumentos a favor y en contra / The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    The article analyzes the effectiveness of audiovisual regulation and assesses the various arguments for and against the existence of regulatory councils at the state level. The debate over the need for such a body in Spain still persists. Most EU countries have equipped themselves with competent authorities in this area, such as OFCOM in the United Kingdom and the CSA in France. In Spain, audiovisual regulation is limited to regional bodies, namely the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía, and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also examined in this article.

  14. A comparative study of approaches to audiovisual translation

    OpenAIRE

    Aldea, Silvia

    2016-01-01

    For those who are not new to the world of Japanese animation, known mainly as anime, the "dub vs. sub" debate is by no means anything out of the ordinary, but rather a very heated argument amongst fans. The study will focus on the differences in the US English version between the two approaches to translating audio-visual media, namely subtitling (official subtitles and fanmade subtitles) and dubbing, in a qualitative context. More precisely, which of the two approaches can store the most ...

  15. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post-decision scheme. The Mel-Frequency Cepstral Coefficients and the vertical mouth opening are the chosen audio and visual features respectively, both augmented with their first-order derivatives. The proposed system is assessed using far-field recordings from four different speakers and under various levels...
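
The HMM-on-MFCC detectors described above are beyond a short snippet, but the core idea of frame-level activity decisions on acoustic features can be sketched with a much cruder energy-based stand-in. Assuming NumPy; the frame length, threshold, and test signal are all illustrative, not the paper's configuration:

```python
import numpy as np

def energy_vad(signal, frame_len=160, threshold_db=-30.0):
    """Toy frame-level voice activity detector: a frame is marked active
    when its log energy exceeds a threshold relative to the loudest
    frame. A crude stand-in for HMM-based unimodal detectors.
    """
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return energy_db > (energy_db.max() + threshold_db)

# Hypothetical test signal: 1 s silence, 1 s of a 440 Hz tone, 1 s silence.
fs = 8000
t = np.arange(fs) / fs
sig = np.concatenate([np.zeros(fs), np.sin(2 * np.pi * 440 * t), np.zeros(fs)])
activity = energy_vad(sig)   # boolean flag per 20 ms frame
```

A real system would replace the energy feature with MFCCs plus first-order derivatives, model speech/non-speech with HMMs per modality, and fuse the audio and visual decisions afterwards, as the record describes.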

  16. A Joint Audio-Visual Approach to Audio Localization

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2015-01-01

    Localization of audio sources is an important research problem, e.g., to facilitate noise reduction. In recent years, the problem has been tackled using distributed microphone arrays (DMA). A common approach is to apply direction-of-arrival (DOA) estimation on each array (denoted as nodes) … time-of-flight cameras. Moreover, we propose an optimal method for weighting such DOA and range information for audio localization. Our experiments on both synthetic and real data show that there is a clear potential advantage of using the joint audiovisual localization framework.
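
The multi-node idea above — each array contributes only a direction, but bearings from several nodes pin down a position — can be illustrated by intersecting two DOA bearings. This is standard bearing-intersection geometry, not the paper's weighted estimator; the node positions and angles below are made up:

```python
import math

def triangulate(node_a, doa_a, node_b, doa_b):
    """Intersect two DOA bearings (radians, measured from the +x axis)
    shot from two microphone-array nodes; returns the (x, y) source
    estimate. Parallel bearings (denom == 0) have no intersection.
    """
    ax, ay = node_a
    bx, by = node_b
    dx, dy = math.cos(doa_a), math.sin(doa_a)   # ray direction from node A
    ex, ey = math.cos(doa_b), math.sin(doa_b)   # ray direction from node B
    denom = dx * ey - dy * ex
    # Solve (ax, ay) + t*(dx, dy) = (bx, by) + s*(ex, ey) for t.
    t = ((bx - ax) * ey - (by - ay) * ex) / denom
    return (ax + t * dx, ay + t * dy)

# Source at (1, 1) seen from nodes at the origin and at (2, 0).
x, y = triangulate((0, 0), math.atan2(1, 1), (2, 0), math.atan2(1, -1))
print(x, y)  # ≈ (1.0, 1.0)
```

With noisy DOAs from many nodes plus camera-derived range estimates, one would instead solve a weighted least-squares problem, which is where the paper's optimal weighting comes in.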

  17. Materialism.

    Science.gov (United States)

    Melnyk, Andrew

    2012-05-01

    Materialism is nearly universally assumed by cognitive scientists. Intuitively, materialism says that a person's mental states are nothing over and above his or her material states, while dualism denies this. Philosophers have introduced concepts (e.g., realization and supervenience) to assist in formulating the theses of materialism and dualism with more precision, and distinguished among importantly different versions of each view (e.g., eliminative materialism, substance dualism, and emergentism). They have also clarified the logic of arguments that use empirical findings to support materialism. Finally, they have devised various objections to materialism, objections that therefore serve also as arguments for dualism. These objections typically center around two features of mental states that materialism has had trouble in accommodating. The first feature is intentionality, the property of representing, or being about, objects, properties, and states of affairs external to the mental states. The second feature is phenomenal consciousness, the property possessed by many mental states of there being something it is like for the subject of the mental state to be in that mental state. WIREs Cogn Sci 2012, 3:281-292. doi: 10.1002/wcs.1174

  18. Los metadatos asociados a la información audiovisual televisiva por “agentes externos” al servicio de documentación: validez, uso y posibilidades

    Directory of Open Access Journals (Sweden)

    Jorge Caldera-Serrano

    2016-03-01

    We identify the metadata associated with the images that enter television documentation departments. These metadata, external to documentation management itself, can be used and can have positive value within the framework of the documentary analysis of audiovisual information in television. We also indicate at which points in the information-generation process descriptive metadata can be attached to audiovisual material, both for information coming from agents external to the channel and for material produced within the television company itself.

  19. Researching embodied learning by using videographic participation for data collection and audiovisual narratives for dissemination - illustrated by the encounter between two acrobats

    DEFF Research Database (Denmark)

    Degerbøl, Stine; Svendler Nielsen, Charlotte

    2015-01-01

    The article concerns doing ethnography in education, and it reflects upon using 'videographic participation' for data collection and the concept of 'audiovisual narratives' for dissemination, which is inspired by the idea of developing academic video. The article takes a narrative approach to qualitative research and presents a case from contemporary circus education examining embodied learning; the particular focus in this article is methodology and the development of a dissemination strategy for empirical material generated through videographic participation. Drawing on contributions concerned with the senses from the field of sport sciences and from the fields of visual anthropology and sensory ethnography, the article concludes that using videographic participation and creating audiovisual narratives might be a good option to capture the multisensuous dimensions of a learning situation.

  20. Talker variability in audio-visual speech perception.

    Science.gov (United States)

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition than in the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  1. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

    Television can be a meeting place for sound and image in a dispositif that makes the image-rhythm possible – continuing the theory of the image that Gilles Deleuze proposed for cinema. It would simultaneously combine characteristics of the movement-image and the time-image, which are embodied in the construction of postmodern images, in audiovisual products that are not necessarily narrative but are popular. Films, video games, music videos, and vignettes in which the music drives the images allow a more sensorial reading. The audiovisual as image-music thus opens onto a new form of perception beyond the traditional textual one, the fruit of the interaction between rhythm, text, and dispositif. The time of moving images in the audiovisual is inevitably and primarily tied to sound. They carry non-narrative possibilities that are realized, most of the time, through the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Sem Destino (1969), Assassinos por Natureza (1994), and Corra Lola Corra (1998).

  2. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

    Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies, etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study will focus on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach evidences the polysemiotic nature of the translation process. In audiovisual translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles that did not successfully convey the compliment expressed in the source text; we also analyze the reasons for this, namely differences in register, Culture-Specific Items, and repetitions. These differences lead to a different portrayal/identity/perception of the main character in the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  3. Valores occidentales en el discurso publicitario audiovisual argentino

    Directory of Open Access Journals (Sweden)

    Isidoro Arroyo Almaraz

    2012-04-01

    This article analyzes Argentine audiovisual advertising discourse. It aims to identify the social values that this discourse most predominantly communicates and their possible link with the values characteristic of postmodern Western society. To this end, the frequency of appearance of social values was analyzed across 28 commercials from different advertisers. The "Seven/Seven" model (seven deadly sins and seven cardinal virtues) was used for the analysis, since traditional values are considered heirs of the virtues and sins that advertising uses to address consumption-related needs. Argentine audiovisual advertising promotes and encourages ideas related to the virtues and sins through the behavior of the characters in audiovisual narratives. The results show a higher frequency of social values characterized as sins than of social values characterized as virtues, since through advertising the sins are transformed into virtues that stimulate desire and favor consumption, strengthening brand learning. Finally, based on the results obtained, the article reflects on the social uses and reach of advertising discourse.

  4. Nuevos modelos de producción audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. Raquel Miranda Cáceres

    2003-01-01

    The television industry is beginning to undergo a transformation driven by two factors: the explosion of the Internet and the digital revolution. The synergy produced by these developments is leading us toward a new era of audiovisual convergence between different sectors, television and the PC, from which the viewer will benefit. We are at the dawn of a new way of communicating, shopping, and playing, in a world where the great technological advances will inevitably converge. We are facing a change in audiovisual culture. However, it is very difficult to predict the characteristics that the television of the future will have once it reaches maturity, since we are in the middle of the creative process; much ground remains to be covered, and its final form is a planet still to be discovered, one that we can now only glimpse. Some experts call this revolution of the image post-television; others call it the future.

  5. Evidence for a Mechanism Encoding Audiovisual Spatial Separation

    Directory of Open Access Journals (Sweden)

    Emily Orchard-Mills

    2011-10-01

    Auditory and visual spatial representations are produced by distinct processes, drawing on separate neural inputs and occurring in different regions of the brain. We tested for a bimodal spatial representation using a spatial increment discrimination task. Discrimination thresholds for synchronously presented but spatially separated audiovisual stimuli were measured for base separations ranging from 0° to 45°. In a dark anechoic chamber, the spatial interval was defined by the azimuthal separation of a white-noise burst from a speaker on a movable robotic arm and a checkerboard patch 5° wide projected onto an acoustically transparent screen. When plotted as a function of base interval, spatial increment thresholds exhibited a J-shaped pattern. Thresholds initially declined, the minimum occurring at base separations approximately equal to the individual observer's detection threshold, and thereafter rose log-linearly according to Weber's law. This pattern of results, known as the 'dipper function', would be expected if the auditory and visual signals defining the spatial interval converged onto an early sensory filter encoding audiovisual space. This mechanism could be used to encode spatial separation of auditory and visual stimuli.
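
The dipper function described above can be caricatured by a piecewise model: facilitation for pedestals below the detection threshold, then Weber-law growth above it. This is a descriptive sketch only; the parameter values are illustrative, not fitted to the reported data:

```python
def dipper_threshold(base_deg, detect=3.0, dip=0.5, weber=0.25):
    """Descriptive 'dipper function' for increment thresholds.

    - No pedestal: threshold equals the detection threshold `detect`.
    - Small pedestals (facilitation branch): threshold falls linearly
      to `dip * detect`, with the minimum at base == detect, matching
      the report that the dip sits near the detection threshold.
    - Large pedestals (Weber branch): threshold rises with the base
      separation at a fixed Weber fraction.
    """
    if base_deg <= 0:
        return detect
    if base_deg <= detect:
        return detect - (1 - dip) * detect * (base_deg / detect)
    return dip * detect + weber * (base_deg - detect)

# Thresholds across the 0°–45° range of base separations used above.
for base in [0, 1, 3, 10, 45]:
    print(base, round(dipper_threshold(base), 2))
```

Note the Weber branch here is linear in the base separation rather than log-linear; a fitted model would use the transducer-function machinery of pedestal-discrimination studies, but the qualitative dip-then-rise shape is the point.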

  6. [Guided home-based vestibular rehabilitation assisted by audiovisual media].

    Science.gov (United States)

    Trinidad Ruiz, Gabriel; Domínguez Pedroso, Mónica; Cruz de la Piedad, Eduardo; Solís Vázquez, Raquel; Samaniego Regalado, Beatriz; Rejas Ugena, Eladio

    2010-01-01

    To describe the creation and validation process of a new audiovisual support model for the design of guided home-based vestibular rehabilitation programs (GHVR), we introduce a prospective experimental study. A total of 89 patients who underwent vestibular rehabilitation (VR) were evaluated throughout 2009. For the model design, we built a video library of VR exercises that can be combined using DVD creation software to tailor VR protocols. Treatment incidents, adherence, the need to convert to a posturography-based program, and variations in the Dizziness Handicap Inventory (DHI) score and dynamic visual acuity (DVA) were assessed. A good response was found, not only with respect to adherence (5.6% abandonment) but also in the clinical parameters, with a mean DHI score variation of 33.14 points and a decrease in lines lost in the DVA test from 4.24 to 1.52 lines at the end of the treatment. Our results show the possibility of building an audiovisual aid for creating GHVR programs.

  7. Information-Driven Active Audio-Visual Source Localization.

    Directory of Open Access Journals (Sweden)

    Niclas Schult

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
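
A bearing-only particle filter of the kind described — direction-only measurements taken from different robot positions collapsing into a position estimate — can be sketched as follows. Assuming NumPy; the arena size and noise model are made up, and the information-gain action selection is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_bearings(robot_positions, bearings, n_particles=5000,
                             bearing_noise=0.05, area=10.0):
    """Minimal bearing-only particle filter. Each measurement gives only
    a direction to the source, but measurements from different robot
    positions successively narrow the posterior over source positions.
    """
    # Uniform prior over a square arena.
    particles = rng.uniform(-area, area, size=(n_particles, 2))
    for pos, z in zip(robot_positions, bearings):
        predicted = np.arctan2(particles[:, 1] - pos[1],
                               particles[:, 0] - pos[0])
        err = np.angle(np.exp(1j * (predicted - z)))   # wrapped error
        weights = np.exp(-0.5 * (err / bearing_noise) ** 2)
        weights /= weights.sum()
        # Resample + small jitter to avoid particle degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(0, 0.05,
                                                size=(n_particles, 2))
    return particles.mean(axis=0)

# Source at (3, 4), observed with exact bearings from three positions.
src = np.array([3.0, 4.0])
poses = [np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([0.0, 5.0])]
zs = [np.arctan2(src[1] - p[1], src[0] - p[0]) for p in poses]
est = particle_filter_bearings(poses, zs)
print(est)  # should land near (3, 4)
```

The paper's contribution sits on top of this loop: choosing the robot's next position by expected information gain so that the posterior collapses in as few moves as possible.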

  8. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

    Second language acquisition cannot take place without exposure to language input. With regard to this, the present research aimed at providing empirical evidence about low-level and upper-intermediate language learners' preferred types of audiovisual programs and their language proficiency development outside the classroom. To this end, 60 language learners (30 low level and 30 upper-intermediate level) were asked to have exposure to their preferred types of audiovisual program(s) outside the classroom and keep a diary of the amount and the type of exposure. The obtained data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news more. To find out which proficiency group improved its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  9. Intensidades y retóricas del texto audiovisual

    Directory of Open Access Journals (Sweden)

    Gian Maria Tore

    2011-03-01

    The following pages develop a semiotic conception that rests essentially on the theories of Louis Hjelmslev and Claude Zilberberg. Through the study of an audiovisual text, the opening of Robert Bresson's film Pickpocket (1959), the aim is neither to illustrate the semiotics of Hjelmslev or Zilberberg, nor, still less, to apply it; rather, it is to experiment with it, given that this textual theory has been applied scarcely, if at all, to the audiovisual domain.1 We will endeavor, on the one hand, to account for the film text in question with this theory and, on the other, to show its interest and heuristic reach, both for the theory of language and for film studies.

  10. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration changed from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration was similar to that of younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time window of integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  11. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  12. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  13. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  14. Neural Dynamics of Audiovisual Speech Integration under Variable Listening Conditions: An Individual Participant Analysis

    Directory of Open Access Journals (Sweden)

    Nicholas eAltieri

    2013-09-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend & Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude at lower auditory S/N ratios (higher capacity/efficient integration) compared to the high S/N ratio (lower capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
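The capacity measure cited in this record (Townsend & Nozawa, 1995) compares the audiovisual RT survivor function against the combination of the unimodal ones, C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)]. As a hedged sketch (the function names and toy data below are ours, not the authors'), it can be estimated from raw response times like this:

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return float(np.mean(np.asarray(rts, dtype=float) > t))

def capacity(rt_av, rt_a, rt_v, t):
    """Capacity coefficient C(t) = log S_AV(t) / (log S_A(t) + log S_V(t)).

    C(t) > 1: integration more efficient than an independent parallel race
    of the two modalities (super-capacity); C(t) < 1: inefficient.
    """
    s_av, s_a, s_v = survivor(rt_av, t), survivor(rt_a, t), survivor(rt_v, t)
    # Undefined where any survivor hits 0 or the denominator would be 0.
    if s_av <= 0 or s_a <= 0 or s_v <= 0 or s_a * s_v >= 1.0:
        return float("nan")
    return float(np.log(s_av) / (np.log(s_a) + np.log(s_v)))
```

On toy data where audiovisual responses are much faster than either unimodal condition, C(t) exceeds 1, matching the pattern the study reports at low auditory S/N ratios.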

  15. Contributions of the interactive decoupage to reading and analyzing interactive audiovisual works in cybermedia.

    Directory of Open Access Journals (Sweden)

    Pere Freixa

    2014-05-01

    The label "interactive storytelling" is applied to a series of journalistic pieces developed as interactive audiovisual stories that are becoming increasingly prominent in cybermedia. This article defines and describes an analysis system, the "interactive decoupage," which establishes the parameters to observe when reading an interactive audiovisual application. The interactive decoupage is a formal analysis system for interactive audiovisual works that allows thorough observation of the aspects present in any such work: structure, content, access interfaces, and the interaction dialogues the work proposes. The different parts of the decoupage are described, as well as the phases and procedure for carrying it out. The tool presented gives the researcher a detailed description of the elements used by the authors of an interactive audiovisual project to develop and produce its script. The possibility of confronting the interactive audiovisual work with its decoupage description provides the analyst with deeper insight into the creative processes behind interactive audiovisual projects.

  16. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
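The contrast between sinusoidal and square-wave modulation is easy to make concrete. The sketch below is illustrative (the parameters are ours, not the study's): the two envelopes differ precisely in whether they contain abrupt transients.

```python
import numpy as np

def modulator(kind, freq_hz, dur_s, sr=1000):
    """Temporal modulation envelope in [-1, 1] for a flickering/pulsing stimulus."""
    t = np.arange(int(dur_s * sr)) / sr
    phase = 2 * np.pi * freq_hz * t
    if kind == "sine":
        return np.sin(phase)           # gradual change, no transients
    if kind == "square":
        return np.sign(np.sin(phase))  # abrupt steps: transient onsets/offsets
    raise ValueError(f"unknown modulator kind: {kind}")
```

The square wave jumps instantaneously between its extremes, supplying the abrupt events the authors find necessary for synchrony-driven binding, whereas the sine's sample-to-sample change is tiny at these modulation rates.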

  17. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    Science.gov (United States)

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  18. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  19. The audiovisual editing narrative as a basis for the interactive documentary film: new studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual editing language for the interactive documentary film," which defends the thesis that features of the film's audio and video editing can themselves act as agents of interactivity. The search for interactive audiovisual formats is present in international research, though chiefly through a technological lens. The paper proposes possible formats for interactive audiovisual production in film, video, television, computers, and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  20. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools necessary for the translator to perform his/her task satisfactorily as well as pointing to future perspectives, we make a list of sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  1. Evaluation of an Audio-Visual Novela to Improve Beliefs, Attitudes and Knowledge toward Dementia: A Mixed-Methods Approach.

    Science.gov (United States)

    Grigsby, Timothy J; Unger, Jennifer B; Molina, Gregory B; Baron, Mel

    2017-01-01

    Dementia is a clinical syndrome characterized by progressive degeneration in cognitive ability that limits the capacity for independent living. Interventions are needed to target the medical, social, psychological, and knowledge needs of caregivers and patients. This study used a mixed-methods approach to evaluate the effectiveness of a dementia novela presented in an audio-visual format in improving dementia attitudes, beliefs and knowledge. Adults from Los Angeles (N = 42, 83% female, 90% Hispanic/Latino, mean age = 42.2 years, 41.5% with less than a high school education) viewed an audio-visual novela on dementia. Participants completed surveys immediately before and after viewing the material. The novela produced significant improvements in overall knowledge (t(41) = -9.79). The novela can be useful for improving attitudes and knowledge about dementia, but further work is needed to investigate the relation with health disparities in screening and treatment behaviors. Audio-visual novelas are an innovative format for health education and for changing attitudes and knowledge about dementia.

  2. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  3. Atypical audiovisual speech integration in infants at risk for autism.

    Science.gov (United States)

    Guiraud, Jeanne A; Tomalski, Przemyslaw; Kushnerenko, Elena; Ribeiro, Helena; Davies, Kim; Charman, Tony; Elsabbagh, Mayada; Johnson, Mark H

    2012-01-01

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  4. Reflections on autobiographical memory in the contemporary audiovisual medium

    Directory of Open Access Journals (Sweden)

    Marcio Markendorf

    2013-06-01

    Biographical discourse follows a traditional positivist model, focused on relations of cause and effect, and composes a chronological logic to present a life story. In recent decades, scholars have questioned this crystallized form and offered new possibilities for the writing of biographies. The audiovisual product, the object of interest of this work, despite certain advantages over the print model, still produces screenplays in which the slices of an individual's life are an ordered succession of events in time. The purpose of this article is therefore to reflect on how memory is present in biographical films and on which contemporary modifications have been introduced into the conventional model so as to strip away the mythical aspect surrounding the biographees.

  5. Gaze-direction-based MEG averaging during audiovisual speech perception

    Directory of Open Access Journals (Sweden)

    Lotta Hirvenkari

    2010-03-01

    To take a step towards real-life-like experimental setups, we simultaneously recorded magnetoencephalographic (MEG) signals and the subject's gaze direction during audiovisual speech perception. The stimuli were utterances of /apa/ dubbed onto two side-by-side female faces articulating /apa/ (congruent) and /aka/ (incongruent) in synchrony, repeated once every 3 s. Subjects (N = 10) were free to decide which face they viewed, and responses were averaged into two categories according to the gaze direction. The right-hemisphere 100-ms response to the onset of the second vowel (N100m') was a fifth smaller to incongruent than congruent stimuli. The results demonstrate the feasibility of realistic viewing conditions with gaze-based averaging of MEG signals.

  6. Handicrafts production: documentation and audiovisual dissemination as sociocultural appreciation technology

    Directory of Open Access Journals (Sweden)

    Luciana Alvarenga

    2016-01-01

    The paper presents the results of a scientific research, technology, and innovation project in the creative-economy sector, conducted from January 2014 to January 2015, which aimed to document and publicize the artisans and handicraft production of Vila de Itaúnas, ES, Brazil. The process was developed from initial conversations, followed by the planning and conduct of participatory workshops for documentation and audiovisual dissemination around the production of handicrafts and its relation to biodiversity and local culture. The initial objective was to promote spaces for the expression and diffusion of knowledge among and for the local population, also reaching a regional, state, and national public. Throughout the process, it was found that the participatory workshops and the collective production of a website for publicizing practices and products contributed to the development and sociocultural recognition of artisans and craft in the region.

  7. A simple and efficient method to enhance audiovisual binding tendencies

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2017-04-01

    Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain's tendency to bind in spatial perception is plastic, (2) it can change following brief exposure to simple audiovisual stimuli, and (3) exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.

  8. The Digital Turn in the French Audiovisual Model

    Directory of Open Access Journals (Sweden)

    Olivier Alexandre

    2016-07-01

    This article deals with the digital turn in the French audiovisual model. An organizational and legal system has evolved with changing technology and economic forces over the past thirty years. The high-income television industry served as the key element during the 1980s to compensate for a value economy shifting from movie theaters to domestic screens and personal devices. However, growing competition in the TV sector and the rise of tech companies have initiated a process of disruption. A challenged French conception of copyright, the weakened position of TV channels, and the scaling of the content market all now call into question the sustainability of the French model in the digital era.

  9. Audio-visual active speaker tracking in cluttered indoors environments.

    Science.gov (United States)

    Talantzis, Fotios; Pnevmatikakis, Aristodemos; Constantinides, Anthony G

    2009-02-01

    We propose a system for detecting the active speaker in cluttered and reverberant environments where more than one person speaks and moves. Rather than using only audio information, the system utilizes audiovisual information from multiple acoustic and video sensors that feed separate audio and video tracking modules. The audio module operates using a particle filter (PF) and an information-theoretic framework to provide accurate acoustic source location under reverberant conditions. The video subsystem combines in 3-D a number of 2-D trackers based on a variation of Stauffer's adaptive background algorithm with spatiotemporal adaptation of the learning parameters and a Kalman tracker in a feedback configuration. Extensive experiments show that gains are to be expected when fusion of the separate modalities is performed to detect the active speaker.
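The particle-filter idea behind the audio module can be sketched in toy form. The following is our own minimal bootstrap filter for a single 1-D source position under Gaussian noise; it illustrates the general PF recipe (predict, weight, resample), not the authors' information-theoretic, multi-microphone implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, motion_std=0.1, obs_std=0.5):
    """Bootstrap particle filter tracking a 1-D position from noisy measurements."""
    particles = rng.uniform(-5.0, 5.0, n_particles)  # initial belief over position
    estimates = []
    for z in observations:
        # Predict: random-walk motion model
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Weight: likelihood of the observation under Gaussian measurement noise
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # Estimate: posterior mean
        estimates.append(float(np.sum(w * particles)))
        # Resample (multinomial) to concentrate particles in likely regions
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates
```

Fed a stream of measurements near a fixed position, the posterior mean converges on the source; the real system replaces the scalar observation with acoustic features from multiple microphones and fuses the result with the video trackers.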

  10. The subjective duration of audiovisual looming and receding stimuli.

    Science.gov (United States)

    Grassi, Massimo; Pavan, Andrea

    2012-08-01

    Looming visual stimuli (log-increasing in proximal size over time) and auditory stimuli (of increasing sound intensity over time) have been shown to be perceived as longer than receding visual and auditory stimuli (i.e., looming stimuli reversed in time). Here, we investigated whether such asymmetry in subjective duration also occurs for audiovisual looming and receding stimuli, as well as for stationary stimuli (i.e., stimuli that do not change in size and/or intensity over time). Our results showed a great temporal asymmetry in audition but a null asymmetry in vision. In contrast, the asymmetry in audiovision was moderate, suggesting that multisensory percepts arise from the integration of unimodal percepts in a maximum-likelihood fashion.
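The maximum-likelihood account mentioned above has a standard closed form: each unimodal percept is weighted by its reliability (inverse variance). A small sketch (the variable names and example numbers are ours, purely illustrative):

```python
def mle_combine(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood (reliability-weighted) fusion of two unimodal estimates.

    The fused variance is always no larger than the smaller unimodal variance.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)  # auditory reliability weight
    mu = w_a * mu_a + (1.0 - w_a) * mu_v               # fused estimate
    var = (var_a * var_v) / (var_a + var_v)            # fused variance
    return mu, var
```

With a reliable auditory duration estimate and a noisier visual one, the fused estimate sits closer to the auditory value, which is one way to read the moderate, intermediate audiovisual asymmetry the authors report.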

  11. A LINGUAGEM AUDIOVISUAL COMO PRÁTICA ESCOLAR

    Directory of Open Access Journals (Sweden)

    Simone Berle

    2012-01-01

    Full Text Available O ensaio discute a relação entre o cinema e a escola para tematizar a linguagem audiovisual e suas implicações nas práticas escolares. Mesmo com o acesso a materiais e recursos audiovisuais, o cinema comparece no cotidiano escolar como apoio pedagógico diante da hierarquização e redução das linguagens à leitura e à escrita na educação das crianças. Para discutir a necessária pluralização de experiências com as linguagens, enquanto prática escolar, busca dialogar com a proposta de Jorge Larrosa, de substituir o par teoria/prática pelo par experiência/sentido para pensar a educação e com a concepção do humano como ser histórico e produtor de história em Paul Ricoeur. Nosso olhar de educadoras e pesquisadoras da infância interroga a naturalizada presença da linguagem audiovisual na educação das crianças, para destacar a desconsideração pela pluralidade de acessos midiáticos que as crianças podem interagir atualmente. Não reivindica a inclusão do cinema nos currículos, enquanto área de conhecimento a ser contemplada como “conteúdo”, mas aponta a importância do ampliar as aprendizagens, no cotidiano escolar, ao reivindicar a pluralização dos processos de aprender a complexificar repertórios linguageiros.

  12. Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.

    Science.gov (United States)

    Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J

    2015-03-01

    Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. Copyright © 2015 the American Physiological Society.
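The alpha (8-14 Hz) and beta (14-30 Hz) power measures used here can be approximated with a plain periodogram. A minimal sketch (the band edges come from the abstract; the window-free periodogram is our simplification, as EEG pipelines typically use Welch's method):

```python
import numpy as np

def band_power(signal, sr, f_lo, f_hi):
    """Mean spectral power within [f_lo, f_hi] Hz via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[band].mean())
```

For a post-AEP epoch `x` sampled at 250 Hz, `band_power(x, 250, 8, 14)` and `band_power(x, 250, 14, 30)` give the alpha and beta quantities whose in-sync vs. out-of-sync differences the study compares.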

  13. The audio-visual revolution: do we really need it?

    Science.gov (United States)

    Townsend, I

    1979-03-01

    In the United Kingdom, the audio-visual revolution has steadily gained converts in the nursing profession. Nurse tutor courses now contain information on the techniques of educational technology, and schools of nursing increasingly own (or wish to own) many of the sophisticated electronic aids to teaching that abound. This is taking place at a time of unprecedented crisis and change. Funds have been or are being made available to buy audio-visual equipment. But its purchase and use rely on satisfying personal whim, prejudice, or educational fashion, not on considerations of educational efficiency. In the rush of enthusiasm, the overwhelmed teacher (everywhere; the phenomenon is not confined to nursing) forgets to ask the searching, critical questions: 'Why should we use this aid?', 'How effective is it?', 'And, at what?'. Influential writers in this profession have repeatedly called for a more responsible attitude towards the published research of other fields. In an attempt to discover what is known about the answers to this group of questions, an eclectic look at media research is taken, and the widespread dissatisfaction existing among international educational technologists is noted. The paper isolates from the literature several causative factors responsible for the present state of affairs. Findings from the field of educational television are cited as representative of an aid that has had a considerable amount of time and research directed at it. The concluding part of the paper shows the decisions to be taken in using or not using educational media to be more complicated than might at first appear.

  14. Audiovisual emotional processing and neurocognitive functioning in patients with depression

    Directory of Open Access Journals (Sweden)

    Sophie eDoose-Grünefeld

    2015-01-01

Full Text Available Alterations in the processing of emotional stimuli (e.g. facial expressions, prosody, music) have repeatedly been reported in patients with major depression. Such impairments may result from the likewise prevalent executive deficits in these patients. However, studies investigating this relationship are rare. Moreover, most studies to date have only assessed impairments in unimodal emotional processing, whereas in real life, emotions are primarily conveyed through more than just one sensory channel. The current study therefore aimed to investigate multimodal emotional processing in patients with depression and to assess the relationship between emotional and neurocognitive impairments. 41 patients suffering from major depression and 41 never-depressed healthy controls participated in an audiovisual (faces-sounds) emotional integration paradigm as well as a neurocognitive test battery. Our results showed that depressed patients were specifically impaired in the processing of positive auditory stimuli, as they rated faces as significantly more fearful when presented with happy than with neutral sounds. Such an effect was absent in controls. Findings in emotional processing in patients did not correlate with BDI scores. Furthermore, neurocognitive findings revealed significant group differences for two of the tests. The effects found in audiovisual emotional processing, however, did not correlate with performance in the neurocognitive tests. In summary, our results underline the diversity of impairments accompanying depression and indicate that deficits found for unimodal emotional processing cannot trivially be generalized to deficits in a multimodal setting. The mechanisms of impairment might therefore be far more complex than previously thought. Our findings furthermore contradict the assumption that emotional processing deficits in major depression are associated with impaired attention or inhibitory functioning.

  15. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah eJessen

    2013-07-01

Full Text Available Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency by which others’ emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information. This visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction will prove a crucial factor in our understanding of multisensory emotion perception.

  16. Cortical Response Similarities Predict which Audiovisual Clips Individuals Viewed, but Are Unrelated to Clip Preference

    National Research Council Canada - National Science Library

    Bridwell, David A; Roth, Cullen; Gupta, Cota Navin; Calhoun, Vince D

    2015-01-01

.... These inter-subject correlations (ISCs) emerge from similarities in individuals' cortical responses to the shared audiovisual inputs, which may be related to their emergent cognitive and perceptual experience...

  17. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex

    National Research Council Canada - National Science Library

    Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory

    2013-01-01

    .... While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur...

  18. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez

    2012-07-01

Full Text Available Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today’s digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass media omnipresence. Neither the education system, nor civic associations, nor the media themselves have promoted the audiovisual skills needed to make people critically competent viewers of media. This study aims to provide an updated conceptualization of the “audiovisual skill” in this digital environment and to transpose it onto a specific interventional environment, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments, and devise training programmes for the various sectors involved.

  19. Voice over: Audio-visual congruency and content recall in the gallery setting

    National Research Council Canada - National Science Library

    Merle T Fairhurst; Minnie Scott; Ophelia Deroy

    2017-01-01

    ...? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain...

  20. Convergent Cultures: The Disappearance of Commissioned Audiovisual Productions in The Netherlands

    Directory of Open Access Journals (Sweden)

    Bas Agterberg

    2014-12-01

Full Text Available The article analyses the changes in production and consumption in the audiovisual industry and the way the so-called ‘ephemeral’ commissioned productions are scarcely preserved. New technologies, liberal economic policies and internationalisation changed the media landscape in the 1980s. Audiovisual companies created a broad range of products within the audiovisual industry. This also resulted in a democratisation of the use of media, as well as new formats of programmes and distribution for commissioned productions. By looking at a specific company that recently handed over a collection to the Netherlands Institute for Sound and Vision, the challenges and issues of preserving video and digital and interactive audiovisual productions are discussed.

  1. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira

    2008-06-01

Full Text Available This work shows the development of audiovisual webjournalism on the Brazilian Internet. This paper, based on the analysis of UOL News on UOL TV – a pioneer format on commercial web television – and of UERJ Online TV – the first online university television in Brazil – investigates the changes in the gathering, production and dissemination processes of audiovisual news when it starts to be transmitted through the web. Reflections of authors such as Herreros (2003), Manovich (2001) and Gosciola (2003) are used to discuss the construction of audiovisual narrative on the web. To comprehend the current changes in today’s webjournalism, we draw on the concepts developed by Fidler (1997); Bolter and Grusin (1998); Machado (2000); Mattos (2002); and Palacios (2003). We may conclude that the organization of narrative elements in cyberspace makes for the efficiency of journalistic messages, while establishing the basis of a particular language for audiovisual news on the Internet.

  2. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...
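The black-and-silent flags described in this abstract lend themselves to a simple run-detection sketch. The following is a hypothetical illustration, not the thesis's actual compressed-domain method: it assumes per-frame mean luminance and audio RMS values have already been extracted, and flags sustained runs where both fall below a threshold.

```python
def find_ad_flags(luma, rms, luma_thresh=16.0, rms_thresh=0.01, min_run=5):
    """Return (start, end) frame-index ranges where the video is both
    dark (mean luminance < luma_thresh) and silent (audio RMS < rms_thresh)
    for at least min_run consecutive frames. Thresholds are illustrative."""
    runs, start = [], None
    for i, (l, r) in enumerate(zip(luma, rms)):
        if l < luma_thresh and r < rms_thresh:
            if start is None:
                start = i          # a candidate black-and-silent run begins
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i))
            start = None           # run broken (or too short): reset
    if start is not None and len(luma) - start >= min_run:
        runs.append((start, len(luma)))
    return runs

# Synthetic example: 30 frames, frames 10-19 black and silent.
luma = [120.0] * 10 + [2.0] * 10 + [115.0] * 10
rms = [0.2] * 10 + [0.001] * 10 + [0.18] * 10
print(find_ad_flags(luma, rms))  # → [(10, 20)]
```

In a real detector the flagged runs would then be grouped: regularly spaced flags a few tens of seconds apart indicate an advertisement break rather than an isolated fade-to-black.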

  3. El documentalista audiovisual als serveis de documentació de les televisions locals

    OpenAIRE

    Martínez, Virginia

    2008-01-01

The irruption of digital systems into television has opened a new front for audiovisual documentalists working in television documentation centres. To the traditional tasks, such as cataloguing and storage, new tasks common to digital content management have been added, such as metadata generation and management, or control of the information flow between servers and archives. This poster focuses on the figure of the audiovisual documentalist in local television, and it shows the environment wher...

  4. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

The authors hypothesized that an audiovisual slide presentation providing treatment information about the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, and a self-reported anxiety questionnaire, completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ² tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group comprised 20 men and 5 women; the written informed group comprised 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and potential trismus, and reported lower anxiety scores than the control group 1 week after surgery. These results suggest that informing patients about the treatment with an audiovisual slide presentation could improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar.

  5. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    OpenAIRE

    Karpov, A.A.; M. Zelezny

    2014-01-01

We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulated 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulated 3D model of human hands and upper body; and a multimodal user interface integrating ...

  6. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    Directory of Open Access Journals (Sweden)

    Sevtap GÜNAY KÖPRÜLÜ

    2016-04-01

Full Text Available Audiovisual translation, dating back to the silent film era, is a special translation method developed for the translation of the movies and programmes shown on TV and in the cinema. Therefore, in the beginning, the term “film translation” was used for this type of translation. Due to the growing number of audiovisual texts, it has attracted the interest of scientists and has been assessed within translation studies. In our country, too, the concept of film translation was used for this area, but recently the concept of audiovisual translation has been adopted, since it encompasses not only films but all audio-visual communication tools, especially in the scientific field. In this study, the aspects that should be taken into consideration by the translator during the audio-visual translation process are analyzed within the framework of the source text, the translated text, the film, and technical knowledge. At the end of the study, it is shown that there are factors beyond linguistic and paralinguistic ones which must be considered carefully, as they can influence the quality of the translation, and that these factors require technical knowledge in translation. In this sense, audio-visual translation is approached from a different angle than in previous research.

  7. Audio-visual speech perception in noise: Implanted children and young adults versus normal hearing peers.

    Science.gov (United States)

    Taitelbaum-Swead, Riki; Fostick, Leah

    2017-01-01

The purpose of the current study was to evaluate auditory, visual and audiovisual speech perception abilities among two groups of cochlear implant (CI) users: prelingual children and long-term young adults, as compared to their normal hearing (NH) peers. This prospective cohort study included 50 participants, divided into two groups of CI users (10 children and 10 adults) and two groups of normal hearing peers (15 participants each). Speech stimuli included monosyllabic meaningful and nonsense words at a signal-to-noise ratio of 0 dB. Speech stimuli were presented via auditory, visual and audiovisual modalities. (1) CI children and adults show lower speech perception accuracy in background noise in the audiovisual and auditory modalities, as compared to NH peers, but significantly higher visual speech perception scores. (2) CI children are superior to CI adults in speech perception in noise via the auditory modality, but inferior in the visual one. Both CI children and CI adults had similar audiovisual integration. The findings of the current study show that despite the fact that the CI children were implanted bilaterally, at a very young age, and using advanced technology, they still have difficulties perceiving speech in adverse listening conditions even when the visual modality is added. This suggests that audiovisual training might be beneficial for this group by improving their audiovisual integration in difficult listening situations. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Linguistic experience and audio-visual perception of non-native fricatives.

    Science.gov (United States)

    Wang, Yue; Behne, Dawn M; Jiang, Haisheng

    2008-09-01

    This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.

  9. Smoking Education for Low-Educated Adolescents: Comparing Print and Audiovisual Messages.

    Science.gov (United States)

    de Graaf, Anneke; van den Putte, Bas; Zebregs, Simon; Lammers, Jeroen; Neijens, Peter

    2016-11-01

    This study aims to provide insight into which modality is most effective for educating low-educated adolescents about smoking. It compares the persuasive effects of print and audiovisual smoking education materials. We conducted a field experiment with two conditions (print vs. video) and three measurement times (Time 1, Time 2, and Time 3). A total of 221 high school students in the second year of the lowest levels of education in the Netherlands participated at all three time points of the study. Results showed that participants in both conditions had more negative beliefs about smoking after being exposed to the smoking education than before, but there were no differences between the print and video version in this effect. However, the video version did make the attitude toward smoking more negative at Time 3 compared to baseline, whereas the text version did not, which suggests that the video version was more effective for educating low-educated adolescents about smoking. © 2016 Society for Public Health Education.

  10. La traducción audiovisual en la enseñanza de una LE: la subtitulación como herramienta metodológica para la adquisición de léxico Audiovisual / Translation in the teaching of a FL: Subtitling as a Methodological Tool for Lexis acquisition

    Directory of Open Access Journals (Sweden)

    Betlem Soler Pardo

    2017-09-01

Full Text Available Translation and audiovisual materials have proven to be effective tools for foreign language acquisition. This article addresses audiovisual translation from a pedagogical perspective, focusing primarily on subtitling. The aim is to document its effectiveness as a teaching method for the acquisition of vocabulary in a foreign language. To achieve this goal, I have created a series of activities based on a subtitled video, designed to optimise the acquisition of vocabulary and to improve students’ reading comprehension, listening comprehension and writing skills.

  11. Desarrollo de una prueba de comprensión audiovisual

    Directory of Open Access Journals (Sweden)

    Casañ Núñez, Juan Carlos

    2016-06-01

Full Text Available This article is part of a doctoral research project studying the use of audiovisual comprehension questions embedded in the video image as subtitles and synchronised with the relevant video fragments. A theoretical framework describing this technique (Casañ Núñez, 2015b) and an example in a teaching sequence (Casañ Núñez, 2015a) have been published previously. The present work details the process of planning, designing and piloting an audiovisual comprehension test with two variants, which will be administered together with other instruments in quasi-experimental studies with control and treatment groups. The main aims are to find out whether subtitling the questions facilitates comprehension, whether it increases the time students spend looking towards the screen, and to learn the treatment group's opinion of this technique. Six studies were carried out during the piloting phase. Forty-one students of Spanish as a foreign language took part in the final pilot study (twenty-two in the control group and nineteen in the treatment group). Observation of the informants during test administration and the subsequent marking suggested that the instructions on the structure of the test, the presentations of the input texts, the explanation of how the subtitled questions work for the experimental group, and the wording of the items were comprehensible. The data from the two variants of the instrument were subjected to facility, discrimination, reliability and descriptive analyses. Correlations between the tests and two tasks from a listening comprehension exam were also calculated. The results showed that the two versions of the test were ready to be administered.
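The facility and discrimination analyses mentioned in this abstract are standard item statistics. As a minimal sketch with invented data (not the study's), facility is the proportion of test-takers answering an item correctly, and discrimination can be estimated as the point-biserial correlation between an item score and the rest-of-test score:

```python
def facility(item_scores):
    """Proportion of correct (1) responses for one dichotomous item."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Point-biserial correlation between a 0/1 item and a total score.
    Pass rest-of-test totals (item excluded) to avoid inflating the index."""
    n = len(item_scores)
    mx = sum(item_scores) / n
    my = sum(total_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(item_scores, total_scores)) / n
    sx = (sum((x - mx) ** 2 for x in item_scores) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in total_scores) / n) ** 0.5
    return cov / (sx * sy)

# Five hypothetical test-takers: one item's scores and their rest-of-test totals.
item = [1, 1, 0, 1, 0]
rest = [9, 8, 4, 7, 3]
print(facility(item))                         # → 0.6
print(round(point_biserial(item, rest), 2))   # → 0.95
```

A facility near 0.6 and a strongly positive discrimination index would mark the item as well-behaved; values near 0 or 1 facility, or low discrimination, flag items for revision.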

  12. Can personality traits predict pathological responses to audiovisual stimulation?

    Science.gov (United States)

    Yambe, Tomoyuki; Yoshizawa, Makoto; Fukudo, Shin; Fukuda, Hiroshi; Kawashima, Ryuta; Shizuka, Kazuhiko; Nanka, Shunsuke; Tanaka, Akira; Abe, Ken-ichi; Shouji, Tomonori; Hongo, Michio; Tabayashi, Kouichi; Nitta, Shin-ichi

    2003-10-01

pathophysiological reactions to audiovisual stimulation. As for photosensitive epilepsy, it was reported to account for only 5-10% of all patients. Therefore, the cause could not be determined in 90% or more of the patients who showed a morbid response. The results of this study suggest that autonomic function was connected to the mental tendencies of the subjects. By examining such directivity, it is expected that subjects who show a morbid reaction to audiovisual stimulation can be screened beforehand.

  13. Youth Solid Waste Educational Materials List, November 1991.

    Science.gov (United States)

    Cornell Univ., Ithaca, NY. Cooperative Extension Service.

    This guide provides a brief description and ordering information for approximately 300 educational materials for grades K-12 on the subject of solid waste. The materials cover a variety of environmental issues and actions related to solid waste management. Entries are divided into five sections including audiovisual programs, books, magazines,…

  14. Hearing flashes and seeing beeps: Timing audiovisual events.

    Directory of Open Access Journals (Sweden)

    Manuel Vidal

    Full Text Available Many events from daily life are audiovisual (AV. Handclaps produce both visual and acoustic signals that are transmitted in air and processed by our sensory systems at different speeds, reaching the brain multisensory integration areas at different moments. Signals must somehow be associated in time to correctly perceive synchrony. This project aims at quantifying the mutual temporal attraction between senses and characterizing the different interaction modes depending on the offset. In every trial participants saw four beep-flash pairs regularly spaced in time, followed after a variable delay by a fifth event in the test modality (auditory or visual. A large range of AV offsets was tested. The task was to judge whether the last event came before/after what was expected given the perceived rhythm, while attending only to the test modality. Flashes were perceptually shifted in time toward beeps, the attraction being stronger for lagging than leading beeps. Conversely, beeps were not shifted toward flashes, indicating a nearly total auditory capture. The subjective timing of the visual component resulting from the AV interaction could easily be forward but not backward in time, an intuitive constraint stemming from minimum visual processing delays. Finally, matching auditory and visual time-sensitivity with beeps embedded in pink noise produced very similar mutual attractions of beeps and flashes. Breaking the natural auditory preference for timing allowed vision to take over as well, showing that this preference is not hardwired.

  15. Audio-visual perception system for a humanoid robotic head.

    Science.gov (United States)

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-05-28

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
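The Bayesian fusion of audio and visual cues described here can be illustrated with a toy sketch. This is an assumption-laden illustration, not the authors' actual system: it assumes each modality yields a noisy bearing estimate with a Gaussian likelihood over a discrete azimuth grid, fused multiplicatively under a flat prior.

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalised Gaussian likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fuse_azimuth(angles, audio_obs, visual_obs, sigma_a=15.0, sigma_v=5.0):
    """MAP estimate of speaker azimuth (degrees) over a discrete grid,
    fusing a noisy audio bearing and a noisy visual bearing with a flat
    prior: posterior ∝ p(audio | θ) · p(visual | θ). Sigmas are illustrative
    (audio localisation is typically the less precise modality)."""
    posterior = [gaussian(t, audio_obs, sigma_a) * gaussian(t, visual_obs, sigma_v)
                 for t in angles]
    z = sum(posterior)
    posterior = [p / z for p in posterior]   # normalise to a distribution
    best = max(range(len(angles)), key=posterior.__getitem__)
    return angles[best], posterior

angles = list(range(-90, 91, 5))
est, post = fuse_azimuth(angles, audio_obs=20.0, visual_obs=10.0)
print(est)  # → 10: between the two cues, pulled toward the more precise visual one
```

The same posterior could carry over as the prior for the next time step, giving a simple recursive tracker; a robot would additionally gate the visual likelihood by whether a face is actually detected in the field of view.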

  16. Nuevas pantallas y política audiovisual

    Directory of Open Access Journals (Sweden)

    Francisco Sierra Caballero

    2016-11-01

Full Text Available The war of the screens is today the collapse of a television order in transition towards a complex post-Marconi-Galaxy ecology, based on new habits of consumption and of life. A political problem, without any doubt, if we understand that Communication is a Science of the Common. A simplistic interpretation of the future of the audiovisual tends to emphasise only technological transformations. Certainly, the changes in equipment and the digital revolution are a disruptive factor for the cultural system that must be taken into account because of their relevance. However, we insist, the act of viewing, the discretionality of the rear window, confronts us with the ethical and political universe of mediation as social reproduction. For technology is not neutral, nor is communication a simple instrument of transmission.

  17. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.

  18. Audiovisual speech perception development at varying levels of perceptual processing

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  19. TDT y servicio público. Retos del audiovisual iberoamericano

    Directory of Open Access Journals (Sweden)

    Francisco Sierra Caballero

    2011-03-01

Full Text Available How viable are public media in Latin America? Is public radio and television taking on the challenges of the Information Society with any guarantee of success? What sense does it make today to defend public service broadcasting in the face of the technological convergence led by the cultural industries and private operators? These are the questions this article tries to answer through an analysis of the situation of public media in the region. Three major challenges are identified if public radio and television is to remain a plausible avenue: cultural policies, the opening of the public sphere and national democracy, and minority access and cultural pluralism.

  20. Audiovisual education and breastfeeding practices: A preliminary report

    Directory of Open Access Journals (Sweden)

    V. C. Nikodem

    1993-05-01

    Full Text Available A randomized controlled trial was conducted at the Coronation Hospital to evaluate the effect of audiovisual breastfeeding education. Within 72 hours after delivery, 340 women who agreed to participate were allocated randomly to view one of two video programmes, one of which dealt with breastfeeding. To determine the effect of the programme on infant feeding, a structured questionnaire was administered to 108 women who attended the six-week postnatal check-up. Alternative methods, such as telephonic interviews (24) and home visits (30), were used to obtain information from subjects who did not attend the postnatal clinic. Comparisons of mother-infant relationships and postpartum depression showed no significant differences. Similar proportions of each group reported that their baby was easy to manage, and that they felt close to and could communicate well with it. While the overall number of mothers who breast-fed was not significantly different between the two groups, there was a trend towards fewer mothers in the study group supplementing with bottle feeding. It was concluded that the effectiveness of audiovisual education alone is limited, and attention should be directed towards personal follow-up and support for breastfeeding mothers.

  1. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music derives mainly from auditory signals has led to neglect of the contribution of the visual image. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers that were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added, and weaker emotions when an incongruent image was added. We therefore demonstrate that mode is important for perceiving emotional valence in music, and that treating musical art as a purely auditory event may forfeit the enhanced emotional strength perceived in music: going to a concert may lead to stronger perceived emotion than listening to a CD at home.

  2. Audio-visual assistance in co-creating transition knowledge

    Science.gov (United States)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen P.

    2013-04-01

    Earth system and climate impact research results point to the tremendous ecologic, economic and societal implications of climate change. Specifically, people will have to adopt lifestyles that are very different from those they currently strive for in order to mitigate severe changes of our known environment. It will most likely not suffice to transfer the scientific findings into international agreements and appropriate legislation. A transition relies rather on pioneers who define new role models, on change agents who mainstream the concept of sufficiency, and on narratives that make different futures appealing. In order for the research community to be able to provide sustainable transition pathways that are viable, an integration of the physical constraints and the societal dynamics is needed. Hence the necessary transition knowledge has to be co-created by social and natural science and society. To this end, the Climate Media Factory - in itself a massively transdisciplinary venture - strives to provide an audio-visual connection between the different scientific cultures and a bi-directional link to stakeholders and society. Since the methodologies, terminologies and knowledge levels of those involved differ, we develop new entertaining formats on the basis of a "complexity on demand" approach. These formats present scientific information in an integrated and entertaining way, with different levels of detail that provide entry points for users with different requirements. Two examples illustrate the advantages and restrictions of the approach.

  3. Hearing flashes and seeing beeps: Timing audiovisual events.

    Science.gov (United States)

    Vidal, Manuel

    2017-01-01

    Many events in daily life are audiovisual (AV). Handclaps produce both visual and acoustic signals that are transmitted in air and processed by our sensory systems at different speeds, reaching the brain's multisensory integration areas at different moments. Signals must somehow be associated in time to correctly perceive synchrony. This project aims at quantifying the mutual temporal attraction between senses and characterizing the different interaction modes depending on the offset. In every trial participants saw four beep-flash pairs regularly spaced in time, followed after a variable delay by a fifth event in the test modality (auditory or visual). A large range of AV offsets was tested. The task was to judge whether the last event came before or after what was expected given the perceived rhythm, while attending only to the test modality. Flashes were perceptually shifted in time toward beeps, the attraction being stronger for lagging than leading beeps. Conversely, beeps were not shifted toward flashes, indicating a nearly total auditory capture. The subjective timing of the visual component resulting from the AV interaction could easily be shifted forward but not backward in time, an intuitive constraint stemming from minimum visual processing delays. Finally, matching auditory and visual time-sensitivity by embedding beeps in pink noise produced very similar mutual attractions of beeps and flashes. Breaking the natural auditory preference for timing allowed vision to take over as well, showing that this preference is not hardwired.

  4. Material Docent Tecnologies de la Comunicació I

    OpenAIRE

    López-Olano, Carlos

    2015-01-01

    This document is part of the teaching materials produced with the support of the Language Policy Service of the Universitat de València. It is the teaching material for the subject Communication Technologies I, in the first year of the degree in Audiovisual Communication at the Universitat de València.

  5. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both the spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  6. Multidimensional Attributes of the Sense of Presence in Audio-Visual Content

    Directory of Open Access Journals (Sweden)

    Kazutomo Fukue

    2011-10-01

    Full Text Available The sense of presence is crucial for evaluating audio-visual equipment and content. To clarify the multidimensional attributes of this sense, we conducted three experiments on audio, visual, and audio-visual content items. Initially, 345 adjectives expressing the sense of presence were collected, and their number was reduced to 40 pairs based on the KJ method. Forty scenes were recorded with a high-definition video camera while their sounds were recorded using a dummy head. Each content item was reproduced with a 65-inch display and headphones under three conditions: audio-only, visual-only and audio-visual. Twenty-one subjects evaluated them on the 40 pairs of adjectives by the semantic differential method with seven-point scales. The sense of presence in each content item was also evaluated using a Likert scale. The experimental data were analyzed by factor analysis, and four, five and five factors were extracted for the audio, visual, and audio-visual conditions, respectively. Multiple regression analysis revealed that audio and audio-visual presence were explained by the extracted factors, although further consideration is required for visual presence. These results indicate that the factors of psychological loading and activity are relevant to the sense of presence.

  7. THE AUDIO-VISUAL DISTRACTION MINIMIZES THE CHILDREN’S LEVEL OF ANXIETY DURING CIRCUMCISION

    Directory of Open Access Journals (Sweden)

    Farida Juanita

    2017-07-01

    Full Text Available Introduction: Circumcision is a minor surgical procedure commonly performed on school-age children, and most children appear quite anxious during it. Audio-visual distraction is a method the researchers wished to apply to decrease children's anxiety level during circumcision. The objective of this study was to identify the effect of audio-visual distraction on children's anxiety level during circumcision. Method: A non-randomized pretest-posttest control group design was used. Twenty-one children were divided into two groups: the control group (n=13) received usual care, while the intervention group (n=8) received audio-visual distraction during circumcision. Using self-report (an anxiety scale) and a physiological measure of anxiety (pulse rate per minute), children were evaluated before and after the intervention. Result: Audio-visual distraction was effective in decreasing the anxiety level of school-age children during circumcision, with a significant difference in the decrease in anxiety level between the control and intervention groups (p=0.000) and a significant difference in pulse rate per minute between the groups (p=0.006). Discussion: Applying audio-visual distraction during circumcision can thus minimize children's anxiety. Audio-visual material helps children manage and reduce anxiety during invasive procedures through the mechanism of distraction.

  8. Neurofunctional underpinnings of audiovisual emotion processing in teens with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Krissy A.R. Doyle-Thomas

    2013-05-01

    Full Text Available Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n=18) and typically developing controls (n=16) during audiovisual and unimodal emotion processing. Teens with ASD had significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviours, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that during audiovisual emotion matching individuals with ASD may rely on a parietofrontal network to compensate for atypical brain activity elsewhere.

  9. Causation model of autism: Audiovisual brain specialization in infancy competes with social brain networks.

    Science.gov (United States)

    Heffler, Karen Frankel; Oestreicher, Leonard M

    2016-06-01

    Earliest identifiable findings in autism indicate that the autistic brain develops differently from the typical brain in the first year of life, after a period of typical development. Twin studies suggest that autism has an environmental component contributing to causation. Increased availability of audiovisual (AV) materials and viewing practices of infants parallel the time frame of the rise in prevalence of autism spectrum disorder (ASD). Studies have shown an association between ASD and increased TV/cable screen exposure in infancy, suggesting AV exposure in infancy as a possible contributing cause of ASD. Infants are attracted to the saliency of AV materials, yet do not have the experience to recognize these stimuli as socially relevant. The authors present a developmental model of autism in which exposure to screen-based AV input in genetically susceptible infants stimulates specialization of non-social sensory processing in the brain. Through a process of neuroplasticity, the autistic infant develops the skills that are driven by the AV viewing. The AV-developed neuronal pathways compete with preference for social processing, negatively affecting development of social brain pathways and causing global developmental delay. This model explains atypical face and speech processing, as well as preference for AV synchrony over biological motion in ASD. Neural hyper-connectivity, enlarged brain size and special abilities in visual, auditory and motion processing in ASD are also explained by the model. Positive effects of early intervention are predicted by the model. Researchers studying causation of autism have largely overlooked AV exposure in infancy as a potential contributing factor. The authors call for increased public awareness of the association between early screen viewing and ASD, and a concerted research effort to determine the extent of causal relationship. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults' audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged (50-60 years) adults, with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females' general AV perceptual strategy. Although young females' speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood the recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females' AV perceptual strategy towards more visually dominated responses.

  11. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  12. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Effect of Anti-Tobacco Audiovisual Messages on Knowledge and Attitude towards Tobacco Use in North India

    Directory of Open Access Journals (Sweden)

    Jagdish Kaur

    2012-01-01

    Full Text Available Context: Tobacco use is one of the leading preventable causes of death globally. Mass media play a significant role in the initiation as well as the control of tobacco use. Aims: To assess the effect of viewing anti-tobacco audiovisual messages on knowledge and attitudinal change towards tobacco use. Settings and Design: Interventional community-based study. Materials and Methods: A total of 1999 cinema attendees (age 10 years and above, irrespective of their smoking or tobacco-using status) were selected from four cinema halls (two urban, one semi-urban, and one rural site). In the pre-exposure phase 1000 subjects and in the post-exposure phase 999 subjects were interviewed using a pre-tested questionnaire. After collecting baseline information, other days were chosen for screening the audiovisual spots, which were shown twice per show. After the show, subjects were interviewed to assess the effect. Statistical Analysis Used: Proportions of two independent groups were compared; statistical significance was accepted at an error level below 0.05 using the chi-square test. Results: Overall, 784 (39.2%) subjects were tobacco users, 52.6% were non-tobacco users and 8.2% were former tobacco users. Important factors for initiation of tobacco use were peer pressure (62%), imitating elders (53.4%) and imitating celebrities (63.5%). Tobacco users were significantly less likely than non-tobacco users to recall watching the spots during the movie (72.1% vs. 79.1%). The anti-tobacco advertisements inspired 37% of subjects not to use tobacco, and the celebrity featured in the advertisements captured people's attention. There was significant improvement in knowledge of and attitudes towards anti-tobacco legal and public health measures in the post-exposure group. Conclusions: The anti-tobacco advertisements were effective in enhancing knowledge as well as in fostering a positive attitude among people about tobacco use.

  14. Correlation between audio-visual enhancement of speech in different noise environments and SNR: a combined behavioral and electrophysiological study.

    Science.gov (United States)

    Liu, B; Lin, Y; Gao, X; Dang, J

    2013-09-05

    In the present study, we investigated the multisensory gain as the difference in speech recognition accuracy between the audio-visual (AV) and auditory-only (A) conditions, and the multisensory gain as the difference between the event-related potentials (ERPs) evoked under the AV condition and the sum of the ERPs evoked under the A and visual-only (V) conditions, in different noise environments. Videos of a female speaker articulating Chinese monosyllabic words accompanied by different levels of pink noise were used as the stimulus materials. The selected signal-to-noise ratios (SNRs) were -16, -12, -8, -4 and 0 dB. The accuracy of speech recognition was measured under the A, V and AV conditions, and the ERPs evoked under the different conditions were analyzed. The behavioral results showed that the maximum gain, as the difference in speech recognition accuracy between the AV and A conditions, occurred at the -12 dB SNR. The ERP results showed that the multisensory gain, as the difference between the ERPs evoked under the AV condition and the sum of the ERPs evoked under the A and V conditions at the -12 dB SNR, was significantly higher than at the other SNRs in the 130-200 ms time window over the frontal-to-central region. The multisensory gains in audio-visual speech recognition at different SNRs were not completely consistent with the principle of inverse effectiveness, but conformed to cross-modal stochastic resonance. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
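    Both gain measures described above reduce to simple arithmetic over per-condition data. The following sketch uses purely illustrative accuracies and a synthetic ERP (not the study's data) to show how the behavioral gain (AV − A) per SNR and the additive-model ERP gain (AV − (A + V)) in the 130-200 ms window might be computed:

    ```python
    import numpy as np

    # Hypothetical per-SNR recognition accuracies (proportions correct);
    # the values are illustrative, not the paper's data.
    snrs = np.array([-16, -12, -8, -4, 0])             # dB
    acc_a = np.array([0.10, 0.25, 0.55, 0.75, 0.90])   # auditory-only
    acc_av = np.array([0.30, 0.60, 0.80, 0.88, 0.95])  # audio-visual

    behavioral_gain = acc_av - acc_a             # AV minus A, per SNR
    best_snr = snrs[np.argmax(behavioral_gain)]  # SNR with the largest gain

    # ERP gain: AV response minus the sum of the unimodal responses,
    # averaged over the 130-200 ms window (time in ms, amplitude in microvolts).
    rng = np.random.default_rng(1)
    t = np.arange(0, 500)                        # 1 ms sampling, illustrative
    erp_a = rng.standard_normal(t.size) * 0.1
    erp_v = rng.standard_normal(t.size) * 0.1
    # Synthetic AV response with a 0.5 uV superadditive bump in the window:
    erp_av = erp_a + erp_v + 0.5 * ((t >= 130) & (t <= 200))

    win = (t >= 130) & (t <= 200)
    erp_gain = float(np.mean(erp_av[win] - (erp_a[win] + erp_v[win])))
    print(best_snr, round(erp_gain, 2))
    ```

    With these made-up numbers, the behavioral gain peaks at -12 dB, mirroring the pattern reported in the abstract.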

  15. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on the compression artifacts. However, compression is only one of the numerous factors influencing the perception. In communications applications, transmission errors, including packet losses and bit errors, can be a significant source of quality degradation. Also the environmental factors, such as background noise, ambient light and display characteristics, pose an impact on perception. A third aspect that has not been widely addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications.

  16. [From oral history to the research film: the audiovisual as a tool of the historian].

    Science.gov (United States)

    Mattos, Hebe; Abreu, Martha; Castro, Isabel

    2017-01-01

    An analytical essay on the process of image production, audiovisual archive formation, analysis of sources, and creation of the filmic narrative of the four historiographic films that make up the DVD set Passados presentes (Present pasts), from the Oral History and Image Laboratory of Universidade Federal Fluminense (Labhoi/UFF). Drawing on excerpts from the audiovisual archive of Labhoi and the films made, the article analyzes: how the research problem (the memory of slavery and the legacy of the slave song in the agrarian region of Rio de Janeiro state) led us to the production of images in a research situation; the analytical shift in relation to the cinematographic documentary and the ethnographic film; and the specificities of revisiting the audiovisual collection in the light of new research problems.

  17. Robot Command Interface Using an Audio-Visual Speech Recognition System

    Science.gov (United States)

    Ceballos, Alexánder; Gómez, Juan; Prieto, Flavio; Redarce, Tanneguy

    In recent years audio-visual speech recognition has emerged as an active field of research thanks to advances in pattern recognition, signal processing and machine vision. Its ultimate goal is to allow human-computer communication using voice, taking into account the visual information contained in the audio-visual speech signal. This document presents an automatic command-recognition system using audio-visual information. The system is expected to control the da Vinci laparoscopic robot. The audio signal is parametrized using the Mel Frequency Cepstral Coefficients (MFCC) method. In addition, features based on the points that define the mouth's outer contour according to the MPEG-4 standard are used to extract the visual speech information.
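    The MFCC parametrization named above follows a standard pipeline: framing, windowing, power spectrum, mel filterbank, log, DCT. A minimal numpy-only sketch of that pipeline follows; the frame size, hop length and filterbank counts are common defaults, not the settings of the system described:

    ```python
    import numpy as np
    from scipy.fftpack import dct

    def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
        """Minimal MFCC sketch: frame -> window -> power spectrum -> mel filterbank -> log -> DCT."""
        # Frame the signal and apply a Hamming window
        frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
        frames = frames * np.hamming(n_fft)
        power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
        # Triangular mel filterbank between 0 Hz and the Nyquist frequency
        mel = lambda f: 2595 * np.log10(1 + f / 700.0)
        inv_mel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
        pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_mels + 2))
        bins = np.floor((n_fft + 1) * pts / sr).astype(int)
        fbank = np.zeros((n_mels, n_fft // 2 + 1))
        for i in range(1, n_mels + 1):
            l, c, r = bins[i - 1], bins[i], bins[i + 1]
            fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
            fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
        logmel = np.log(power @ fbank.T + 1e-10)
        return dct(logmel, type=2, axis=1, norm='ortho')[:, :n_ceps]

    # A 0.5 s synthetic two-harmonic tone stands in for the command audio.
    sr = 16000
    t = np.arange(int(0.5 * sr)) / sr
    audio = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    features = mfcc(audio, sr)
    print(features.shape)  # (frames, cepstral coefficients)
    ```

    In a system like the one described, these per-frame coefficients would then be combined with the MPEG-4 lip-contour features before classification.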

  18. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or may apply to all stimuli in general. To investigate this issue, Tuomainen et al. (2005) used sine-wave speech stimuli created from three time-varying sine waves tracking the formants of a natural speech signal. Naïve observers tend not to recognize sine-wave speech as speech but become able to decode its phonetic content when informed of its speech-like nature, which has been taken as evidence of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect: when observers were naïve, they had little motivation to look at the face; when informed, they knew that the face was relevant for the task, and this could increase…

  19. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available We propose a novel approach for video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that characterize its content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents, and the results show the efficiency of our proposals.
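    The abstract does not spell out the TRM construction, so the sketch below only illustrates the general idea under stated assumptions: events are (start, end) segments from a basic segmentation, pairwise temporal relations are tallied from a reduced Allen-style relation set, and document similarity is the cosine similarity of the resulting relation profiles. The function names and the relation set are hypothetical, not the paper's:

    ```python
    import numpy as np

    # Reduced Allen-style relations between ordered intervals a=(start, end), b=(start, end).
    RELATIONS = ["before", "meets", "overlaps", "during", "equals"]

    def relation(a, b):
        if a[1] < b[0]:  return "before"
        if a[1] == b[0]: return "meets"
        if a == b:       return "equals"
        if a[0] >= b[0] and a[1] <= b[1]: return "during"
        return "overlaps"

    def relation_profile(events):
        """Tally pairwise relations over ordered event pairs -> normalized vector."""
        counts = dict.fromkeys(RELATIONS, 0)
        for i, a in enumerate(events):
            for b in events[i + 1:]:
                counts[relation(a, b)] += 1
        v = np.array([counts[r] for r in RELATIONS], dtype=float)
        return v / max(v.sum(), 1)

    def similarity(doc_a, doc_b):
        """Cosine similarity between two documents' relation profiles."""
        va, vb = relation_profile(doc_a), relation_profile(doc_b)
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

    # Events as (start, end) segments, e.g. from speech/music/applause segmentation.
    doc_a = [(0, 5), (5, 9), (10, 14), (14, 20)]
    doc_b = [(0, 4), (4, 8), (9, 12), (12, 18)]
    print(round(similarity(doc_a, doc_b), 3))
    ```

    Two documents whose events follow the same temporal pattern score close to 1 even when the absolute segment boundaries differ, which is the property a classification-by-structure approach needs.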

  20. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    Science.gov (United States)

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a
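    The race-model test mentioned above compares the cumulative RT distribution for audiovisual trials against Miller's bound, the sum of the two unisensory CDFs; integration beyond a parallel unisensory race is inferred when F_AV(t) exceeds F_A(t) + F_V(t) at some t. A small sketch with simulated reaction times (the distributions are illustrative only):

    ```python
    import numpy as np

    def ecdf(rts, t):
        """Empirical CDF of reaction times evaluated at times t."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t, side="right") / rts.size

    def race_model_violation(rt_a, rt_v, rt_av, t=None):
        """Maximum positive deviation of F_AV(t) above the race-model bound
        min(F_A(t) + F_V(t), 1) -- Miller's inequality. A positive value indicates
        multisensory integration beyond what parallel unisensory races explain."""
        if t is None:
            t = np.linspace(100, 1000, 901)  # ms grid, illustrative
        bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
        return float(np.max(ecdf(rt_av, t) - bound))

    rng = np.random.default_rng(0)
    rt_a = rng.normal(450, 60, 200)   # simulated auditory-only RTs (ms)
    rt_v = rng.normal(480, 60, 200)   # simulated visual-only RTs
    rt_av = rng.normal(380, 50, 200)  # audio-visual RTs, faster than either
    violation = race_model_violation(rt_a, rt_v, rt_av)
    print(violation > 0)
    ```

    The geometric measure mentioned in the abstract extends this pointwise test by summarizing the violation over the whole distribution rather than at its maximum.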

  1. AUDIOVISUAL RESOURCES ON THE TEACHING PROCESS IN SURGICAL TECHNIQUE.

    Science.gov (United States)

    Pupulim, Guilherme Luiz Lenzi; Ioris, Rafael Augusto; Gama, Ricardo Ribeiro; Ribas, Carmen Australia Paredes Marcondes; Malafaia, Osvaldo; Gama, Mirnaluci

    2015-01-01

    The development of didactic means that permit complete and repeated viewing of surgical procedures is of great importance nowadays, given the increasing difficulty of in vivo training. Audiovisual resources thus maximize the living resources used in education and minimize the problems arising from purely verbal teaching. To evaluate the use of digital video as a pedagogical strategy in teaching surgical technique in medical education. Cross-sectional study with 48 third-year medical students enrolled in the surgical technique discipline. They were divided into two groups of 12 pairs, both subject to the conventional teaching method, one of them also exposed to an alternative method (video) showing the technical details. All students performed phlebotomy in the experimental laboratory, with evaluation and assistance from the teacher/monitor during the procedure. Finally, they answered a self-administered questionnaire on the teaching method after performing the operation. Most of those who did not watch the video took longer to execute the procedure, asked more questions and needed more faculty assistance. All of those exposed to the video followed the chronology of execution and approved the new method; 95.83% felt able to repeat the procedure by themselves, while 62.5% of the students who had only the conventional method reported a merely regular capacity to assimilate the technique. Both groups mentioned regular difficulty, but those who had not seen the video had more difficulty in performing the technique. The traditional teaching method associated with the video favored comprehension and conveyed confidence, particularly because the activity requires technical skill. The technique with video visualization motivated students and aroused interest, facilitated the understanding and memorization of the steps of the procedure, and benefited the students' performance.

  2. Audiovisual correspondence between musical timbre and visual shapes.

    Directory of Open Access Journals (Sweden)

    Mohammad eAdeli

    2014-05-01

    Full Text Available This article investigates the cross-modal correspondences between musical timbre and visual shapes. Previous studies of audio-visual correspondences have mostly used features such as pitch, loudness, light intensity, visual size, and color characteristics, and most have employed simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale), and vertical position. This design permitted studying the associations between normalized timbre and visual shapes, as well as re-examining some previous findings with more complex stimuli. 119 subjects (31 females and 88 males) participated in the online experiment, including 36 self-described professional musicians, 47 self-described amateur musicians, and 36 self-described non-musicians; 31 subjects also reported synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green, or light-gray rounded shapes; harsh timbres with red, yellow, or dark-gray sharp angular shapes; and timbres combining soft and harsh elements with a mixture of the two shape types. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale, or color. The significant correspondence between timbre and shape revealed by the present work could inform the design of substitution systems that help the blind perceive shapes through timbre.

  3. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.

  4. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-02-16

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
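    The signal detection analysis mentioned above separates perceptual sensitivity from response bias. As a minimal illustration (the response counts below are hypothetical, not the study's data), sensitivity d' and criterion c can be computed from hit and false-alarm rates under the standard equal-variance Gaussian model:

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (criterion c) from a
    2x2 detection table, using a log-linear correction so that rates of
    0 or 1 do not produce infinite z-scores."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for one observer in a flash-counting task
d, c = dprime_and_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45)
print(round(d, 2), round(c, 2))
```

    A lower d' with an unchanged criterion would point to a sensitivity change, while a shifted c with stable d' would point to a bias change, the two effects the authors distinguish.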

  5. Atypical rapid audio-visual temporal recalibration in autism spectrum disorders.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew A; Stevenson, Ryan; Alais, David; Wallace, Mark T

    2017-01-01

    Changes in sensory and multisensory function are increasingly recognized as a common phenotypic characteristic of Autism Spectrum Disorders (ASD). Furthermore, much recent evidence suggests that sensory disturbances likely play an important role in contributing to social communication weaknesses, one of the core diagnostic features of ASD. An established sensory disturbance observed in ASD is reduced audiovisual temporal acuity. In the current study, we substantially extend these explorations of multisensory temporal function within the framework that an inability to rapidly recalibrate to changes in audiovisual temporal relations may play an important and under-recognized role in ASD. In the paradigm, we present ASD and typically developing (TD) children and adolescents with asynchronous audiovisual stimuli of varying levels of complexity and ask them to perform a simultaneity judgment (SJ). In the critical analysis, we test audiovisual temporal processing on trial t conditioned on trial t - 1. The results demonstrate that individuals with ASD fail to rapidly recalibrate to audiovisual asynchronies in an equivalent manner to their TD counterparts for simple and non-linguistic stimuli (i.e., flashes and beeps, hand-held tools), but exhibit comparable rapid recalibration for speech stimuli. These results are discussed in terms of prior work showing a speech-specific deficit in audiovisual temporal function in ASD, and in light of current theories of autism focusing on sensory noise and stability of perceptual representations. Autism Res 2017, 10: 121-129. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
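    The trial-wise analysis described here (performance on trial t conditioned on trial t - 1) can be sketched as follows; the trial data and the simple split by the previous trial's lag direction are hypothetical, purely to illustrate the conditioning step:

```python
# Sketch of a rapid-recalibration (serial-dependence) analysis: condition
# each simultaneity judgment on the sign of the preceding trial's lag.
# Trial tuples are (soa_ms, judged_simultaneous); the data are made up.
trials = [
    (150, False), (-150, True), (0, True), (150, True),
    (-150, False), (0, True), (150, False), (-150, True),
]

def conditional_rates(trials):
    """Proportion of 'simultaneous' responses on trial t, split by
    whether trial t-1 had an auditory-leading (SOA < 0) or
    visual-leading (SOA > 0) asynchrony; SOA = 0 trials are skipped."""
    buckets = {"after_audio_lead": [], "after_visual_lead": []}
    for (prev_soa, _), (_, judged) in zip(trials, trials[1:]):
        if prev_soa < 0:
            buckets["after_audio_lead"].append(judged)
        elif prev_soa > 0:
            buckets["after_visual_lead"].append(judged)
    return {k: sum(v) / len(v) for k, v in buckets.items() if v}

print(conditional_rates(trials))
```

    A difference between the two conditional rates would indicate that the preceding trial's asynchrony shifted the current judgment, which is the recalibration signature the study tests for.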

  6. La narratología audiovisual como método de análisis

    OpenAIRE

    Cuevas, E.

    2009-01-01

    This lesson aims to develop a view of the methodology proposed by narratology in the audiovisual field that is both synthetic and useful for analysis. It takes as its starting point Gérard Genette's work on literature, simplifying his proposals where they offer little analytical value in the audiovisual medium, and extending them where this medium presents options that do not exist in literature. The work opens with a brief reflection on the scope an...

  7. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...... was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension...

  8. Introducción: Cine, política audiovisual y comunicación

    OpenAIRE

    Susana Sel

    2016-01-01

    Film, television, audiovisual media, and communication, ever more inseparably linked, are central to the transformation of contemporary capitalism. The industrial dynamic unfolding on a global scale calls into question the coherence, the autonomy, and the very existence of historically constituted entities such as cinema. In approaching the audiovisual field in this era of digital technologies, we face an object that also operates by modifying the regime of perception, underst...

  9. The effects of hearing protectors on auditory localization: evidence from audio-visual target acquisition.

    Science.gov (United States)

    Bolia, R S; McKinley, R L

    2000-01-01

    Response times (RT) in an audio-visual target acquisition task were collected from 3 participants while they wore circumaural earmuffs, foam earplugs, or no hearing protection. Analyses revealed that participants took significantly longer to locate and identify an audio-visual target in both hearing protector conditions than in the unoccluded condition, suggesting a disturbance of the cues used by listeners to localize sounds in space. RTs were significantly faster in both hearing protector conditions than in a non-audio control condition, indicating that auditory localization was not completely disrupted. Results are discussed in terms of safety issues involved in wearing hearing protectors in an occupational environment.

  10. A possible neurophysiological correlate of audiovisual binding and unbinding in speech perception.

    Science.gov (United States)

    Ganesh, Attigodu C; Berthommier, Frédéric; Vilain, Coriandre; Sato, Marc; Schwartz, Jean-Luc

    2014-01-01

    Audiovisual (AV) speech integration of auditory and visual streams generally ends up in a fusion into a single percept. One classical example is the McGurk effect in which incongruent auditory and visual speech signals may lead to a fused percept different from either visual or auditory inputs. In a previous set of experiments, we showed that if a McGurk stimulus is preceded by an incongruent AV context (composed of incongruent auditory and visual speech materials) the amount of McGurk fusion is largely decreased. We interpreted this result in the framework of a two-stage "binding and fusion" model of AV speech perception, with an early AV binding stage controlling the fusion/decision process and likely to produce "unbinding" with less fusion if the context is incoherent. In order to provide further electrophysiological evidence for this binding/unbinding stage, early auditory evoked N1/P2 responses were here compared during auditory, congruent and incongruent AV speech perception, according to either prior coherent or incoherent AV contexts. Following the coherent context, in line with previous electroencephalographic/magnetoencephalographic studies, visual information in the congruent AV condition was found to modify auditory evoked potentials, with a latency decrease of P2 responses compared to the auditory condition. Importantly, both P2 amplitude and latency in the congruent AV condition increased from the coherent to the incoherent context. Although potential contamination by visual responses from the visual cortex cannot be discarded, our results might provide a possible neurophysiological correlate of early binding/unbinding process applied on AV interactions.

  11. Audiovisual Biofeedback Improves Cine–Magnetic Resonance Imaging Measured Lung Tumor Motion Consistency

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Danny [Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Sydney, NSW (Australia); Greer, Peter B. [School of Mathematical and Physical Sciences, The University of Newcastle, Newcastle, NSW (Australia); Department of Radiation Oncology, Calvary Mater Newcastle, Newcastle, NSW (Australia); Ludbrook, Joanna; Arm, Jameen; Hunter, Perry [Department of Radiation Oncology, Calvary Mater Newcastle, Newcastle, NSW (Australia); Pollock, Sean; Makhija, Kuldeep; O'Brien, Ricky T. [Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Sydney, NSW (Australia); Kim, Taeho [Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Sydney, NSW (Australia); Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia (United States); Keall, Paul, E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Sydney, NSW (Australia)

    2016-03-01

    Purpose: To assess the impact of audiovisual (AV) biofeedback on intra- and interfraction tumor motion for lung cancer patients. Methods and Materials: Lung tumor motion was investigated in 9 lung cancer patients who underwent a breathing training session with AV biofeedback before two 3T magnetic resonance imaging (MRI) sessions. The breathing training session was performed to allow patients to become familiar with AV biofeedback, which uses a guiding wave customized for each patient according to a reference breathing pattern. In the first MRI session (pretreatment), 2-dimensional cine-MR images were obtained with (1) free breathing (FB) and (2) AV biofeedback, and the second MRI session was repeated within 3-6 weeks (mid-treatment). Lung tumors were directly measured from cine-MR images using an auto-segmentation technique; the centroid and outlier motions of the lung tumors were measured from the segmented tumors. Free breathing and AV biofeedback were compared using several metrics: intra- and interfraction tumor motion consistency in displacement and period, and the outlier motion ratio. Results: Compared with FB, AV biofeedback improved intrafraction tumor motion consistency by 34% in displacement (P=.019) and by 73% in period (P<.001). Compared with FB, AV biofeedback improved interfraction tumor motion consistency by 42% in displacement (P=.046) and by 74% in period (P=.005). Compared with FB, AV biofeedback reduced the outlier motion ratio by 21% (P<.001). Conclusions: These results demonstrate that AV biofeedback significantly improved intra- and interfraction lung tumor motion consistency for lung cancer patients, facilitating more consistent tumor motion and thus more accurate medical imaging and radiation therapy procedures.

  12. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    Directory of Open Access Journals (Sweden)

    Ingo eHertrich

    2013-08-01

    Full Text Available In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal resolution. For example, blind as compared to sighted subjects are more resistant against backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. An fMRI study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic (MEG) measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to a demand for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments considering cross-modal adjustments in space, time, and object recognition.

  13. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    Science.gov (United States)

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant against backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments in space, time, and object recognition.

  14. More Materiales Tocante Los Latinos. A Bibliography of Materials on the Spanish-American.

    Science.gov (United States)

    Harrigan, Joan, Comp.

    A bibliography of materials published between 1964 and 1969 on the Spanish American is presented to assist librarians and educators in locating Hispano instructional aids. Over 120 annotated entries list audio-visual aids and reading materials for students of all ages, professional materials for educators including librarians, ERIC materials…

  15. Audiovisual heritage preservation in Earth and Space Science Informatics: Videos from Free and Open Source Software for Geospatial (FOSS4G) conferences in the TIB|AV-Portal.

    Science.gov (United States)

    Löwe, Peter; Marín Arraiza, Paloma; Plank, Margret

    2016-04-01

    continues to grow - and so does the number of topics to be addressed at conferences. Until now, commercial Web 2.0 platforms such as YouTube and Vimeo were used. However, these platforms lack capabilities for long-term archiving and scientific citation, such as persistent identifiers that permit citation of specific intervals of the overall content. To address these issues, the scientific library community has started to implement improved multimedia archiving and retrieval services for scientific audiovisual content that fulfil these requirements. Using the reference case of the OSGeo conference video recordings, this paper gives an overview of the new and growing collection activities of the German National Library of Science and Technology (TIB) for audiovisual content in Geoinformatics/ESSI in the TIB|AV-Portal. Following a successful start in 2014 and positive response from the OSGeo community, the TIB acquisition strategy for OSGeo video material was extended to include German, European, North American, and global conference content. The collection grows steadily through new conference content and through harvesting of past conference videos from commercial Web 2.0 platforms such as YouTube and Vimeo. This positions the TIB|AV-Portal as a reliable and concise long-term resource for innovation mining, education, and scholarly research within the ESSI context, both in academia and industry.

  16. Teletoxicology: Patient Assessment Using Wearable Audiovisual Streaming Technology.

    Science.gov (United States)

    Skolnik, Aaron B; Chai, Peter R; Dameff, Christian; Gerkin, Richard; Monas, Jessica; Padilla-Jones, Angela; Curry, Steven

    2016-12-01

    Audiovisual streaming technologies allow detailed remote patient assessment and have been suggested to change management and enhance triage. The advent of wearable, head-mounted devices (HMDs) permits advanced teletoxicology at a relatively low cost. A previously published pilot study supports the feasibility of using the HMD Google Glass® (Google Inc.; Mountain View, CA) for teletoxicology consultation. This study examines the reliability, accuracy, and precision of the poisoned patient assessment when performed remotely via Google Glass®. A prospective observational cohort study was performed on 50 patients admitted to a tertiary care center inpatient toxicology service. Toxicology fellows wore Google Glass® and transmitted secure, real-time video and audio of the initial physical examination to a remote investigator not involved in the subject's care. High-resolution still photos of electrocardiograms (ECGs) were transmitted to the remote investigator. On-site and remote investigators recorded physical examination findings and ECG interpretation. Both investigators completed a brief survey about the acceptability and reliability of the streaming technology for each encounter. Kappa scores and simple agreement were calculated for each examination finding and electrocardiogram parameter. Reliability scores and reliability difference were calculated and compared for each encounter. Data were available for analysis of 17 categories of examination and ECG findings. Simple agreement between on-site and remote investigators ranged from 68 to 100 % (median = 94 %, IQR = 10.5). Kappa scores could be calculated for 11/17 parameters and demonstrated slight to fair agreement for two parameters and moderate to almost perfect agreement for nine parameters (median = 0.653; substantial agreement). The lowest Kappa scores were for pupil size and response to light. On a 100-mm visual analog scale (VAS), mean comfort level was 93 and mean reliability rating was 89 for
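    The agreement statistics reported above can be illustrated with a small sketch. Cohen's kappa corrects raw (simple) agreement for the agreement expected by chance; the on-site and remote ratings below are hypothetical, not drawn from the study:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    chance = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - chance) / (1 - chance)

# Hypothetical ratings ("p" = finding present, "a" = absent) for one
# exam parameter, scored by the on-site and the remote investigator
onsite = ["p", "p", "a", "a", "p", "a", "p", "a", "a", "a"]
remote = ["p", "p", "a", "a", "a", "a", "p", "a", "a", "p"]
print(round(cohen_kappa(onsite, remote), 3))
```

    With 8/10 matching ratings, simple agreement is 80%, but kappa is lower because two raters who mostly say "absent" would often agree by chance alone, which is why the study reports both measures.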

  17. Vocabulary Teaching in Foreign Language via Audiovisual Method Technique of Listening and Following Writing Scripts

    Science.gov (United States)

    Bozavli, Ebubekir

    2017-01-01

    The objective of this study is to compare the effects of conventional and audiovisual methods on learning efficiency and retention success in foreign-language vocabulary teaching. The research sample consists of 21 undergraduate and 7 graduate students studying at the Department of French Language Teaching, Kazim Karabekir Faculty of…

  18. Acceptance of online audio-visual cultural heritage archive services: a study of the general public

    NARCIS (Netherlands)

    Ongena, G.; van de Wijngaert, Lidwien; Huizer, E.

    2013-01-01

    Introduction. This study examines the antecedents of user acceptance of an audio-visual heritage archive for a wider audience (i.e., the general public) by extending the technology acceptance model with the concepts of perceived enjoyment, nostalgia proneness and personal innovativeness. Method. A

  19. The Netherlands: The representativeness of trade unions and employer associations in the audiovisual sector

    NARCIS (Netherlands)

    Grunell, M.

    2013-01-01

    The relevance of the Dutch audiovisual sector in terms of the number of employees is negligible. However, in qualitative terms, the sector is influential in Dutch society. The characteristics of collective bargaining are defined by the division into public and commercial broadcasting. In public

  20. Hearing Impairment and Audiovisual Speech Integration Ability: A Case Study Report

    Directory of Open Access Journals (Sweden)

    Nicholas eAltieri

    2014-07-01

    Full Text Available Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener's ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition study and a closed-set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing-time benefit.

  1. Audiovisual Asynchrony Detection and Speech Intelligibility in Noise With Moderate to Severe Sensorineural Hearing Impairment

    NARCIS (Netherlands)

    Baskent, Deniz; Bazo, Danny

    2011-01-01

    Objective: The objective of this study is to explore the sensitivity to intermodal asynchrony in audiovisual speech with moderate to severe sensorineural hearing loss. Based on previous studies, two opposing expectations were an increase in sensitivity, as hearing-impaired listeners heavily rely on

  2. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    Science.gov (United States)

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., the range over which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians were more susceptible to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows were narrower than nonmusicians', consistent with tighter multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
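    A crude way to turn illusion rates measured at each onset asynchrony into a "temporal window" estimate, in the spirit of this abstract (the study's actual fitting procedure is not specified here), is to interpolate where the rate crosses a 50% threshold; the SOA grid and rates below are invented for illustration:

```python
def integration_window(soas_ms, illusion_rates, threshold=0.5):
    """Width (ms) of the SOA range over which the illusion rate stays
    at or above threshold, linearly interpolating at the crossings."""
    pts = sorted(zip(soas_ms, illusion_rates))
    crossings = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if (y0 - threshold) * (y1 - threshold) < 0:  # sign change
            crossings.append(x0 + (threshold - y0) * (x1 - x0) / (y1 - y0))
    if len(crossings) >= 2:
        return max(crossings) - min(crossings)
    above = [x for x, y in pts if y >= threshold]
    return (max(above) - min(above)) if above else 0.0

# Hypothetical illusion rates peaking at synchrony (SOA = 0 ms)
soas = [-300, -150, 0, 150, 300]
rates = [0.1, 0.6, 0.9, 0.6, 0.1]
print(integration_window(soas, rates))
```

    A narrower window under this estimate corresponds to the tighter audiovisual binding the study reports for musicians.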

  3. Prediction-based Audiovisual Fusion for Classification of Non-Linguistic Vocalisations

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Prediction plays a key role in recent computational models of the brain and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train

  4. Audio-visual Classification and Fusion of Spontaneous Affect Data in Likelihood Space

    NARCIS (Netherlands)

    Nicolaou, Mihalis A.; Gunes, Hatice; Pantic, Maja

    2010-01-01

    This paper focuses on audio-visual (using facial expression, shoulder and audio cues) classification of spontaneous affect, utilising generative models for classification (i) in terms of Maximum Likelihood Classification with the assumption that the generative model structure in the classifier is

  5. KAMAN PELAYANAN MEDIA AUDIOVISUAL: STUDI KASUS DI THE BRITISH COUNCIL JAKARTA

    Directory of Open Access Journals (Sweden)

    Hindar Purnomo

    2015-12-01

    Full Text Available The aim of this study was to determine how audiovisual (AV) media services are provided, how effective the services are, and how satisfied users are with various aspects of the services. The research was conducted at The British Council Jakarta as an evaluation study, since this approach reveals the various phenomena that occur. The British Council library provides three types of media: video cassettes, audio cassettes, and BBC television broadcasts. The research subjects were users of the audiovisual media services who were registered as members, grouped by age and by purpose of AV media use. Completed questionnaires were collected from 157 respondents (75.48%) and analyzed statistically with the Kruskal-Wallis one-way analysis of variance. The results show that all three media attracted many users, especially in the younger age groups. Most users preferred fiction to non-fiction and used the audiovisual media to seek information and knowledge. The audiovisual media service proved highly effective, as shown by collection usage figures and by user satisfaction levels. Hypothesis testing showed no significant differences between age groups or purposes of use in their assessment of the various aspects of the audiovisual media services. Keywords: audiovisual media, library services
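    The Kruskal-Wallis test named in this abstract is a one-way analysis of variance on ranks. A self-contained sketch of the H statistic follows (the satisfaction ratings are invented; the study's actual data are not reproduced here):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (one-way ANOVA on ranks).
    Ties receive averaged ranks; no tie correction is applied."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Assign each distinct value its averaged 1-based rank
    ranks = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    # H = 12/(N(N+1)) * sum(n_i * mean_rank_i^2) - 3(N+1)
    s = sum(len(g) * (sum(ranks[x] for x in g) / len(g)) ** 2 for g in groups)
    return 12 * s / (n * (n + 1)) - 3 * (n + 1)

# Hypothetical 1-5 satisfaction ratings for three age groups
young = [5, 4, 5, 3, 4]
middle = [4, 3, 4, 4, 5]
older = [3, 4, 5, 4, 3]
print(round(kruskal_wallis_h(young, middle, older), 2))
```

    A small H (relative to the chi-squared critical value with k-1 degrees of freedom) is consistent with the abstract's finding of no significant differences between groups.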

  6. Media literacy: no longer the shrinking violet of European audiovisual media regulation?

    NARCIS (Netherlands)

    McGonagle, T.; Nikoltchev, S.

    2011-01-01

    The lead article in this IRIS plus provides a critical analysis of how the European audiovisual regulatory and policy framework seeks to promote media literacy. It examines pertinent definitional issues and explores the main rationales for the promotion of media literacy as a regulatory and policy

  7. Attention to affective audio-visual information: Comparison between musicians and non-musicians

    NARCIS (Netherlands)

    Weijkamp, J.; Sadakata, M.

    2017-01-01

    Individuals with more musical training repeatedly demonstrate enhanced auditory perception abilities. The current study examined how these enhanced auditory skills interact with attention to affective audio-visual stimuli. A total of 16 participants with more than 5 years of musical training

  8. Effect of an audiovisual message for tetanus booster vaccination broadcast in the waiting room.

    Science.gov (United States)

    Eubelen, Caroline; Brendel, Fannette; Belche, Jean-Luc; Freyens, Anne; Vanbelle, Sophie; Giet, Didier

    2011-09-28

    General practitioners (GPs) often lack the time and resources to invest in health education; audiovisual messages broadcast in the waiting room may be a useful educational tool. This work was designed to assess the effect of a message inviting patients to ask for a tetanus booster vaccination. A quasi-experimental study was conducted in a Belgian medical practice consisting of 6 GPs and 4 waiting rooms (total: 20,000 contacts/year). A tetanus booster vaccination audiovisual message was continuously broadcast for 6 months in 2 randomly selected waiting rooms (intervention group, 3 GPs) while the other 2 waiting rooms remained unequipped (control group, 3 GPs). At the end of the 6-month period, the number of vaccine adult doses delivered by local pharmacies in response to GPs' prescriptions was recorded. As a reference, the same data were also collected retrospectively for the practice during the same 6-month period of the previous year. During the 6-month reference period, when no audiovisual message was broadcast in any of the 4 waiting rooms, the number of prescriptions presented for tetanus vaccines was 52 (0.44%) in the intervention group and 33 (0.38%) in the control group (p = 0.50). By contrast, during the 6-month study period, the number of prescriptions differed significantly between the two groups. Broadcasting an audiovisual health education message in the GPs' waiting room was associated with a significant increase in the number of adult tetanus booster vaccination prescriptions delivered by local pharmacies.
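The reference-period comparison (52 prescriptions at 0.44% vs. 33 at 0.38%, p = 0.50) can be reproduced approximately with a chi-square test on a 2x2 table. The contact totals below are back-calculated from the reported percentages, so they are approximations, not figures from the paper.

```python
# Two-group comparison of prescription proportions via chi-square test.
# Denominators are inferred from the reported rates: 52/0.0044 ≈ 11818
# contacts (intervention), 33/0.0038 ≈ 8684 contacts (control).
from scipy.stats import chi2_contingency

intervention = (52, 11818 - 52)   # (prescription, no prescription)
control = (33, 8684 - 33)

chi2, p, dof, expected = chi2_contingency([intervention, control],
                                          correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p ≈ 0.51, consistent with p = 0.50
```

Yates' continuity correction is disabled here so that the statistic matches the uncorrected two-proportion z-test; with the correction the p-value comes out slightly larger.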

  9. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation

    Directory of Open Access Journals (Sweden)

    Briony Banks

    2015-08-01

    Perceptual adaptation allows humans to understand a variety of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, they do not improve perceptual adaptation.

  10. Multimodal indexing of digital audio-visual documents: A case study for cultural heritage data

    NARCIS (Netherlands)

    Carmichael, J.; Larson, M.; Marlow, J.; Newman, E.; Clough, P.; Oomen, J.; Sav, S.

    2008-01-01

    This paper describes a multimedia multimodal information access sub-system (MIAS) for digital audio-visual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their

  11. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    Science.gov (United States)

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  12. Audiovisual discrimination between speech and laughter: Why and when visual information might help

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter classification/detection has focused mainly on audio-based approaches. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating the information from audio and video channels may lead to improved performance over

  13. Audiovisual biofeedback guided breath-hold improves lung tumor position reproducibility and volume consistency

    Directory of Open Access Journals (Sweden)

    Danny Lee, PhD

    2017-07-01

    Conclusions: This study demonstrated that audiovisual biofeedback can be used to improve the reproducibility and consistency of breath-hold lung tumor position and volume, respectively. These results may provide a pathway to achieve more accurate lung cancer radiation treatment in addition to improving various medical imaging and treatments by using breath-hold procedures.

  14. Joint evaluation of communication quality and user experience in an audio-visual virtual reality meeting

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Carrozzino, Marcello

    2013-01-01

    State-of-the-art speech intelligibility tests were created to evaluate acoustic communication devices, not audio-visual virtual reality systems. This paper presents a novel method for evaluating a communication situation based on both the speech intelligibility...

  15. Learning from Audio-Visual Media: The Open University Experience. IET Papers on Broadcasting No. 183.

    Science.gov (United States)

    Bates, A. W.

    This paper describes how audiovisual media have influenced the way students have learned--or failed to learn--at the Open University at Walton Hall. The paper is based in part on results from a large body of research that has repeatedly demonstrated the interrelatedness of a wide range of factors in determining how or what students learn from…

  16. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  17. The Social Conditions for a Selection of Instructional Audio-Visual Media.

    Science.gov (United States)

    Mariet, Francois

    1980-01-01

    Economic and social factors such as budgets, availability, and current fashion can cause both manufacturers and users of instructional technology to make irrational choices about use of audiovisual media. Common circumstances surrounding both rational and irrational selections are outlined and discussed. (MSE)

  18. Psychophysics of the McGurk and Other Audiovisual Speech Integration Effects

    Science.gov (United States)

    Jiang, Jintao; Bernstein, Lynne E.

    2011-01-01

    When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…

  19. Audiovisual interventions to reduce the use of general anaesthesia with paediatric patients during radiation therapy.

    Science.gov (United States)

    Willis, D; Barry, P

    2010-06-01

    Clinical audiovisual interventions were implemented to avoid the use of general anaesthesia with children undergoing radiation therapy treatment. A retrospective audit and case study review was conducted to evaluate the utility of distraction interventions aimed at improving immobilisation and reducing separation anxiety for children aged between 2 and 6 years old who received radiation therapy. A simple, inexpensive audiovisual system was established using commercially available equipment. Paediatric patients could elect to (i) use a closed-circuit TV system that allowed them to see their carer(s); (ii) watch a DVD movie; or (iii) listen to carer(s) on a microphone during their treatment. Over a 2-year period (March 2007-May 2009), 37 paediatric patients aged 2-6 years received radiation therapy at the centre. Twenty-four children participated in audiovisual interventions, and 92% (n = 22) of these children did not require the use of general anaesthesia for some or all of their treatment. Case study review illustrates the utility and limitations of the system. The audit and case studies suggest that the audiovisual interventions provided supportive care and reduced the need to anaesthetise children undergoing radiation therapy treatment.

  20. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
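A temporal integration window like the one measured here across 13 SOAs (±360 ms to 0 ms) is commonly quantified by fitting a bell-shaped curve to the proportion of "synchronous" responses and taking its width. The response proportions below are invented for illustration; the Gaussian shape is one standard modelling choice, not necessarily the authors'.

```python
# Sketch: estimate an audiovisual temporal integration window by fitting a
# Gaussian to synchrony-judgement proportions over SOAs (invented data).
import numpy as np
from scipy.optimize import curve_fit

soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360], dtype=float)
p_sync = np.array([0.05, 0.10, 0.20, 0.40, 0.70, 0.90, 0.95,
                   0.90, 0.75, 0.45, 0.25, 0.10, 0.05])

def gauss(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gauss, soas, p_sync, p0=[1.0, 0.0, 120.0])
print(f"window centre = {mu:.0f} ms, width (sigma) = {abs(sigma):.0f} ms")
# A narrower fitted sigma corresponds to the narrower integration
# windows the study reports for musicians.
```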

  1. Audiovisual semantic interference and attention : Evidence from the attentional blink paradigm

    NARCIS (Netherlands)

    Van der Burg, Erik; Brederoo, Sanne G.; Nieuwenstein, Mark R.; Theeuwes, Jan; Olivers, Christian N. L.

    In the present study we investigate the role of attention in audiovisual semantic interference, by using an attentional blink paradigm. Participants were asked to make an unspeeded response to the identity of a visual target letter. This target letter was preceded at various SOAs by a synchronized

  2. Online Dissection Audio-Visual Resources for Human Anatomy: Undergraduate Medical Students' Usage and Learning Outcomes

    Science.gov (United States)

    Choi-Lundberg, Derek L.; Cuellar, William A.; Williams, Anne-Marie M.

    2016-01-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection…

  3. Technical Considerations in the Delivery of Audio-Visual Course Content.

    Science.gov (United States)

    Lightfoot, Jay M.

    2002-01-01

    In an attempt to provide students with the benefit of the latest technology, some instructors include multimedia content on their class Web sites. This article introduces the basic terms and concepts needed to understand the multimedia domain. Provides a brief tutorial designed to help instructors create good, consistent audio-visual content. (AEF)

  4. Development of an Estimation Model for Instantaneous Presence in Audio-Visual Content

    National Research Council Canada - National Science Library

    OZAWA, Kenji; TSUKAHARA, Shota; KINOSHITA, Yuichiro; MORISE, Masanori

    2016-01-01

    ...: system presence and content presence. In this study we focused on content presence. To estimate the overall presence of a content item, we have developed estimation models for the sense of presence in audio-only and audio-visual content...

  5. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    Science.gov (United States)

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  6. A Guide to Audio-Visual References: Selection and Ordering Sources.

    Science.gov (United States)

    Bonn, Thomas L., Comp.

    Audio-visual reference sources and finding guides to identify media for classroom utilization are compiled in this list of sources at State University of New York College at Cortland libraries. Citations with annotations and library locations are included under the subject headings: (1) general sources for all media formats; (2) reviews, guides,…

  7. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    Science.gov (United States)

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response on the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  8. Strategies for Media Literacy: Audiovisual Skills and the Citizenship in Andalusia

    Science.gov (United States)

    Aguaded-Gomez, Ignacio; Perez-Rodriguez, M. Amor

    2012-01-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…

  9. Effects of congenital hearing loss and cochlear implantation on audiovisual speech perception in infants and children.

    Science.gov (United States)

    Bergeson, Tonya R; Houston, Derek M; Miyamoto, Richard T

    2010-01-01

    Cochlear implantation has recently become available as an intervention strategy for young children with profound hearing impairment. In fact, infants as young as 6 months are now receiving cochlear implants (CIs), and even younger infants are being fitted with hearing aids (HAs). Because early audiovisual experience may be important for normal development of speech perception, it is important to investigate the effects of a period of auditory deprivation and amplification type on multimodal perceptual processes of infants and children. The purpose of this study was to investigate audiovisual perception skills in normal-hearing (NH) infants and children and deaf infants and children with CIs and HAs of similar chronological ages. We used an Intermodal Preferential Looking Paradigm to present the same woman's face articulating two words ("judge" and "back") in temporal synchrony on two sides of a TV monitor, along with an auditory presentation of one of the words. The results showed that NH infants and children spontaneously matched auditory and visual information in spoken words; deaf infants and children with HAs did not integrate the audiovisual information; and deaf infants and children with CIs did not initially integrate the audiovisual information but gradually matched the auditory and visual information in spoken words. These results suggest that a period of auditory deprivation affects multimodal perceptual processes that may begin to develop normally after several months of auditory experience.

  10. Audiovisual Translation and Assistive Technology: Towards a Universal Design Approach for Online Education

    Science.gov (United States)

    Patiniotaki, Emmanouela

    2016-01-01

    Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…

  11. Audiovisual translation in Cameroon: An Analysis of Voice-over in ...

    African Journals Online (AJOL)

    Other techniques of information dissemination, such as voice-over, which is a mode of audiovisual translation, are lagging behind due to a lack of understanding of the process. In academic circles, unfortunately, very little has been published on voice-over in the country; thus, very few people really understand what it is all about.

  12. An Experimental Evaluation of Audio-Visual Methods--Changing Attitudes toward Education.

    Science.gov (United States)

    Lowell, Edgar L.; And Others

    Audiovisual programs for parents of deaf children were developed and evaluated. Eighteen sound films and accompanying records presented information on hearing, lipreading, and speech, and attempted to change parental attitudes toward children and spouses. Two versions of the films and records were narrated by (1) "stars" who were…

  13. Age-Related Differences in Audiovisual Interactions of Semantically Different Stimuli

    Science.gov (United States)

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of…

  14. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  15. Automatic audio-visual fusion for aggression detection using meta-information

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2012-01-01

    We propose a new method for audio-visual sensor fusion and apply it to automatic aggression detection. While a variety of definitions of aggression exist, in this paper we see it as any kind of behavior that has a disturbing effect on others. We have collected multi- and unimodal assessments by

  16. A comparative study on automatic audio-visual fusion for aggression detection using meta-information

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.J.M.; Burghouts, G.J.

    2013-01-01

    Multimodal fusion is a complex topic. For surveillance applications audio-visual fusion is very promising given the complementary nature of the two streams. However, drawing the correct conclusion from multi-sensor data is not straightforward. In previous work we have analysed a database with audio-

  17. Effects and limitations of an AED with audiovisual feedback for cardiopulmonary resuscitation: a randomized manikin study.

    Science.gov (United States)

    Fischer, Henrik; Gruber, Julia; Neuhold, Stephanie; Frantal, Sophie; Hochbrugger, Eva; Herkner, Harald; Schöchl, Herbert; Steinlechner, Barbara; Greif, Robert

    2011-07-01

    Correctly performed basic life support (BLS) and early defibrillation are the most effective measures to treat sudden cardiac arrest. Audiovisual feedback improves BLS. Automated external defibrillators (AEDs) with feedback technology may play an important role in improving CPR quality. The aim of this simulation study was to investigate whether an AED with audiovisual feedback improves CPR parameters during standard BLS performed by trained laypersons. With ethics committee approval and informed consent, 68 teams (2 flight attendants each) performed 12 min of standard CPR with the AED's audiovisual feedback mechanism enabled or disabled. We recorded CPR quality parameters during resuscitation on a manikin in this open, prospective, randomized controlled trial. Between the feedback and control groups we measured differences in compression depth and rate as main outcome parameters, and effective compressions, correct hand position, and incomplete decompression as secondary outcome parameters. An effective compression was defined as a compression with correct depth, hand position, and decompression. The feedback group delivered compression rates closest to the recommended guidelines (101 ± 9 vs. 109 ± 15/min, p = 0.009) and more effective compressions (20 ± 18 vs. 5 ± 6%). The AED's audiovisual feedback system improved some CPR quality parameters, thus confirming the findings of earlier studies, with the notable exception of decreased compression depth, which is a key parameter that might be linked to reduced cardiac output.
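The reported compression-rate comparison (101 ± 9 vs. 109 ± 15/min, p = 0.009) can be checked approximately from the summary statistics alone with a Welch t-test. Equal group sizes of 34 teams (68 teams split evenly between feedback and control) are an assumption of this sketch, not a figure stated in the abstract.

```python
# Welch t-test reconstructed from means and SDs
# (scipy.stats.ttest_ind_from_stats with equal_var=False).
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=101, std1=9, nobs1=34,   # feedback group
                            mean2=109, std2=15, nobs2=34,  # control group
                            equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # p ≈ 0.01, close to the reported 0.009
```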

  18. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    Science.gov (United States)

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  19. A linguagem audiovisual da lousa digital interativa no contexto educacional/Audiovisual language of the digital interactive whiteboard in the educational environment

    Directory of Open Access Journals (Sweden)

    Rosária Helena Ruiz Nakashima

    2006-01-01

    In this paper we present information about the digital interactive whiteboard as a tool that brings audiovisual language into the school environment. The digital interactive whiteboard is connected to a computer and a multimedia projector, and through Digital Vision Touch (DViT) technology its surface is touch-sensitive, so teachers and pupils can use a finger to carry out functions that increase interactivity with the activities proposed on the board. We present two possible pedagogical activities, in the subject areas of Science and Portuguese, that can be applied in early childhood education with five- to six-year-old pupils. This technology reflects the evolution of a type of language that is no longer based solely on orality and writing but is also audiovisual and dynamic, allowing the subject to be a producer of information as well as a receiver. Schools should therefore take advantage of these technological resources, which facilitate work with audiovisual language in the classroom and allow the preparation of more meaningful and innovative lessons.

  20. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    Science.gov (United States)

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC with reversible cortical cooling and examined performance when faces, vocalizations, or both had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  1. "I am, I am nothing, I am a story ever told": Performing personas - erotic expression in audiovisual performances Ney Matogrosso the authorization context of a dictatorship

    Directory of Open Access Journals (Sweden)

    Robson Pereira da Silva

    2016-01-01

    This article investigates the performative transgressions of Ney Matogrosso in the context of the Brazilian civil-military dictatorship. We examine the marginal personas (types/archetypes) procedurally displayed in the artist's performative practices (phonograms, album covers, booklets, stage shows), which gave meaning to his work across the 1970s and 1980s. In his works, the artist inverts concepts governed by the dominant culture, a practice, embodied in audiovisual materials disseminated through the cultural industry, that historically secures the performer's place in the production of the materiality of Brazilian Popular Music (MPB) recording performance. This study highlights the historicity of Ney Matogrosso's aesthetic subversion as a political stance confronting the authoritarian regime's prohibitions of the erotic.

  2. Gestión documental de la información audiovisual deportiva en las televisiones generalistas (Documentary management of sports audiovisual information on generalist television)

    Directory of Open Access Journals (Sweden)

    Jorge Caldera Serrano

    2005-01-01

    We analyse the management of sports audiovisual information within the framework of the documentary information systems of national, regional, and local television networks. To this end, we follow the documentary chain through which sports audiovisual information passes, analysing each of its parameters and offering a series of recommendations and standards for preparing the sports audiovisual record. Sports audiovisual documentation does not differ greatly from the analysis of other types of television documents, so its management and dissemination are examined here in greater depth and breadth, showing the informational flow within the system.

  3. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without a frame-independence assumption. The experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  4. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex: e68959

    National Research Council Canada - National Science Library

    Kayoko Okada; Jonathan H Venezia; William Matchin; Kourosh Saberi; Gregory Hickok

    2013-01-01

    .... While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur...

  5. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  6. Appraisal of Educational Materials for AVLINE: A Project of the Association of American Medical Colleges and National Library of Medicine.

    Science.gov (United States)

    Johnson, Jenny K.

    In response to a growing need for a comprehensive evaluation and cataloging of nontextbook materials in the field of health sciences, the Educational Materials Project, sponsored by the National Library of Medicine, developed a clearinghouse system called AVLINE (Audiovisual on Line) for the dissemination of information about these materials. To…

  7. Legal and ethical issues surrounding the online dissemination of audiovisual archives: needs, practices and solutions developed in France

    OpenAIRE

    Fellous-Sigrist, Myriam; Ginouvès, Véronique

    2014-01-01

Providing citizens with easy access to the results of research via online dissemination can revitalize the relationship between science and society. If research in the humanities and social sciences, and in particular their audiovisual sources, are to be included in this movement, an effort has to be made to understand and adapt to the requirements of online dissemination. Until recently, furthermore, audiovisual archives were rarely consulted by the public. Embracing ...

  8. Engineering - A World of Possibilities. A List of Audio Visual Aids and Written Materials.

    Science.gov (United States)

    National Academy of Sciences-National Research Council, Washington, DC. Committee on Minorities in Engineering

Presented is a list of audiovisual aids and written materials related to careers in science and engineering, with special emphasis on the involvement of minority students in technical careers. Each item is briefly described (for films, the length and whether the film is in color or black and white are noted), along with the cost (if any) of the material…

  9. Collecting National and International Data on the Production of Audio, Visual, and Microform Materials.

    Science.gov (United States)

    Frase, Robert W.

    This paper reviews UNESCO activities for the collection of national production data of audiovisual materials and microforms and presents possible approaches to the task. UNESCO has for some years collected data on the production of printed materials, but while recognizing the need for collecting similar statistics on nonprint media, it has not yet…

  10. Environnement et elaboration de materiel pedagogique (The Environment and the Elaboration of Instructional Materials).

    Science.gov (United States)

    Capelle, Marie-Jose; Archard-Bayle, Guy

    1982-01-01

    Describes the method and the instructional materials entitled "Contacts," which were developed specifically for Nigeria. The discussion covers the use of audiovisual supplementary material, the essentially African sociocultural reference of the text, the methodology peculiar to the Nigeria plurilingual situation, and the goals for both…

  11. Fear of Falling and Older Adult Peer Production of Audio-Visual Discussion Material

    Science.gov (United States)

    Bailey, Cathy; King, Karen; Dromey, Ben; Wynne, Ciaran

    2010-01-01

    A growing body of work suggests that negative stereotypes of, and associations between, falling, fear of falling, and ageing, may mean that older adults reject falls information and advice. Against a widely accepted backdrop of demographic ageing in Europe and that alleviating the impacts of falls and fear of falling are pressing health care…

  12. International Directory of Audio-Visual and Programmed Foreign Language Courses and Materials. Preprints, Part 3.

    Science.gov (United States)

    Institut fuer Kommunikationsforschung, Berlin (West Germany). Documentation Div.

    A directory of over 20 foreign language courses lists classes alphabetically by student language and target language. The course information refers directly to the questionnaire, duplicated at the beginning of the directory, which covers general information, course availability, area of use, principles and goals, organization, tests, auditory…

  13. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  14. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    Science.gov (United States)

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.

  16. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

Full Text Available The usability of the audiovisual, graphic, and digital resources currently being introduced into the education system is spreading across several countries of the region, such as Chile, Colombia, Mexico, Cuba, El Salvador, Uruguay, and Venezuela. Subtopics related to media education are analyzed and justified, starting from the initiatives of Spain and Portugal, countries that became international protagonists of some educational models in the university context. Owing to the expansion of and focus on computing and on information and communication networks on the Internet, the audiovisual medium as a technological instrument is gaining ground as a dynamic, integrative resource with special characteristics that distinguish it from the rest of the media making up the audiovisual ecosystem. As a result of this research, two lines of application are proposed: (A) a proposal for iconic and audiovisual language as a learning objective and/or curricular subject in university study plans, with workshops on the audiovisual document, digital photography, and audiovisual production; and (B) the use of audiovisual resources as an educational medium, which would require a prior process of training the teaching community through activities recommended for faculty and students respectively. Finally, suggestions are presented for implementing both lines of academic action. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  17. Perception of audiovisual speech synchrony for native and non-native language.

    Science.gov (United States)

    Navarra, Jordi; Alsius, Agnès; Velasco, Ignacio; Soto-Faraco, Salvador; Spence, Charles

    2010-04-06

    To what extent does our prior experience with the correspondence between audiovisual stimuli influence how we subsequently bind them? We addressed this question by testing English and Spanish speakers (having little prior experience of Spanish and English, respectively) on a crossmodal simultaneity judgment (SJ) task with English or Spanish spoken sentences. The results revealed that the visual speech stream had to lead the auditory speech stream by a significantly larger interval in the participants' native language than in the non-native language for simultaneity to be perceived. Critically, the difference in temporal processing between perceiving native vs. non-native language tends to disappear as the amount of experience with the non-native language increases. We propose that this modulation of multisensory temporal processing as a function of prior experience is a consequence of the constraining role that visual information plays in the temporal alignment of audiovisual speech signals. Copyright 2010 Elsevier B.V. All rights reserved.

  18. Audiovisual presentation of video-recorded stimuli at a high frame rate.

    Science.gov (United States)

    Lidestam, Björn

    2014-06-01

A method for creating and presenting video-recorded synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli with a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, & Rönnberg, Frontiers in Psychology 4:359, 2013) are presented as an example of the implementation of playback at 120 fps.

  19. [Effects of real-time audiovisual feedback on secondary-school students' performance of chest compressions].

    Science.gov (United States)

    Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto

    2015-06-01

To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved, and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years achieve quality chest compressions on a mannequin.

  20. Audio-visual relaxation training for anxiety, sleep, and relaxation among Chinese adults with cardiac disease.

    Science.gov (United States)

    Tsai, Sing-Ling

    2004-12-01

The long-term effect of an audio-visual relaxation training (RT) treatment involving deep breathing, exercise, muscle relaxation, guided imagery, and meditation was compared with routine nursing care for reducing anxiety, improving sleep, and promoting relaxation in Chinese adults with cardiac disease. This research was a quasi-experimental, two-group, pretest-posttest study. A convenience sample of 100 cardiology patients (41 treatment, 59 control) admitted to one large medical center hospital in the Republic of China (ROC) was studied for 1 year. The hypothesized relationships were supported: RT significantly improved anxiety, sleep, and relaxation in the treatment group as compared to the control group. It appears audio-visual RT might be a beneficial adjunctive therapy for adult cardiac patients. However, considerable further work using stronger research designs is needed to determine the most appropriate instructional methods and the factors that contribute to long-term consistent practice of RT with Chinese populations.

  1. Video digital na educação : aplicação da narrativa audiovisual

    OpenAIRE

    Karla Isabel de Souza

    2009-01-01

Abstract: This investigation seeks, through audiovisual narrative, to bring education closer to new technologies. The technological tool in focus is digital video. The pedagogical approach followed is Paulo Freire's construction of knowledge, together with a didactic-methodological adaptation drawn from educommunication. The methodological discussion starts from studies of the concepts of audiovisual narrative taken from Jesús García Jiménes and Francisco García García. Each of the elements of the audiovisual narrative...

  2. EXPLICITATION AND ADDITION TECHNIQUES IN AUDIOVISUAL TRANSLATION: A MULTIMODAL APPROACH OF ENGLISH-INDONESIAN SUBTITLES

    Directory of Open Access Journals (Sweden)

    Ichwan Suyudi

    2017-12-01

Full Text Available In audiovisual translation, the multimodality of the audiovisual text is both a challenge and a resource for subtitlers. This paper illustrates how multiple modes provide information that helps subtitlers gain a better understanding of meaning-making practices, which in turn informs their decisions when translating a given verbal text. Subtitlers may explicitate, add, and condense the texts based on the modes seen in the visual frames. Subtitlers have to consider the distribution and integration of the meanings of the multiple modes in order to create comprehensive equivalence between the source and target texts. Excerpts of visual frames in this paper are taken from the English films Forrest Gump (drama, 1996) and James Bond (thriller, 2010).

  3. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group performed a balance training exercise without any technological input, (2) a visual biofeedback group performed via visual input, and (3) an audio-visual biofeedback group performed via audio and visual input. Results retrieved from comparisons between the data sets (2) and (3) suggested superior postural stability between test sessions for (2). Regarding data set (1), the testers were less motivated to perform training exercises, although their performance was superior to (2) and (3). The conclusions are that the audio component motivated patients to train, although physical performance decreased.

  4. The Effectiveness Of Arabic Cartoon As Audiovisual Media On The Mastery Of Insya

    Directory of Open Access Journals (Sweden)

    Elsa Silvia Nur Aulia

    2015-08-01

Full Text Available This study was motivated by the fact that students' Arabic writing ability is still relatively weak. The research provides innovation in Arabic learning and, in particular, examines the effectiveness of using audiovisual media in learning Insya. To achieve these objectives, a quasi-experimental method with a non-equivalent control group design was utilized. To test the hypothesis, the normalized gain of both classes was also employed. Tested with the Mann-Whitney U-test at a significance level of 0.05, the result indicated a two-tailed asymptotic significance of 0.00. According to the testing criteria, the alternative hypothesis (Ha) is accepted if this value is lower than 0.05. Therefore, it can be concluded that Ha is accepted and the null hypothesis (H0) is rejected. This signifies that audiovisual media, i.e., Arabic cartoons, have a significant influence on students' Insya ability.
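The analysis in this abstract pairs a normalized-gain score with a Mann-Whitney U-test. As a rough illustration of those two computations, here is a stdlib-only Python sketch; the function names and the sample scores are invented for the example and are not taken from the study.

```python
def normalized_gain(pre, post, maximum=100.0):
    """Hake's normalized gain: fraction of the possible improvement actually achieved."""
    return (post - pre) / (maximum - pre)

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples, using average ranks for ties."""
    combined = sorted((value, group) for group, sample in ((0, a), (1, b)) for value in sample)
    rank_sum_a = 0.0
    i = 0
    while i < len(combined):
        # Find the run of tied values starting at position i.
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            if combined[k][1] == 0:  # value belongs to sample a
                rank_sum_a += avg_rank
        i = j
    n_a, n_b = len(a), len(b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2.0
    u_b = n_a * n_b - u_a
    return min(u_a, u_b)
```

On fully separated samples the smaller U statistic is 0; in practice that U value would then be converted to a p-value, such as the asymptotic two-tailed significance reported above.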

  5. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical...

  6. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. Second-language (L2) American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but has also been criticized for being too flexible, post hoc, and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration in speech perception, along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross...
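As background for the MLE terminology, the core of maximum-likelihood cue integration is inverse-variance weighting of the unimodal estimates. The sketch below is a generic Python illustration with made-up numbers, not the early MLE model of the paper, which operates on internal representations before categorization.

```python
def mle_integrate(est_aud, var_aud, est_vis, var_vis):
    """Combine auditory and visual estimates, weighting each by its reliability (1/variance)."""
    w_aud = (1 / var_aud) / (1 / var_aud + 1 / var_vis)
    w_vis = 1 - w_aud
    combined = w_aud * est_aud + w_vis * est_vis
    # The integrated estimate is at least as reliable as the better single cue.
    combined_var = 1 / (1 / var_aud + 1 / var_vis)
    return combined, combined_var
```

With a noisy auditory cue (variance 4) and a sharp visual cue (variance 1), the visual estimate receives 80% of the weight and the combined variance drops to 0.8, below that of either cue alone.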

  8. Cómo medir la violencia audiovisual: principales métodos y estudios realizados

    Directory of Open Access Journals (Sweden)

    María Marcos Ramos

    2012-04-01

Full Text Available This descriptive article analyzes the principal scientific methods used in the social sciences to study audiovisual violence. Laboratory experiments (experimental research), field studies, correlational studies, longitudinal panel studies, natural experiments, intervention studies, and meta-analytic reviews have been the main methods employed by the most prominent researchers in the social sciences. The article presents the techniques used as well as the principal investigations carried out, together with the most important conclusions about audiovisual violence.

  9. Theoretical and methodological notes on visual and audiovisual sources in researches on Life Stories and Self-referential Memorials

    Directory of Open Access Journals (Sweden)

    Maria Helena Menna Barreto Abrahão

    2014-01-01

Full Text Available The text makes explicit the reflection that grounds the use of photographs, films, and video films as sources in research on Life Stories and Self-referential Memorials in teacher education. After referring to the research in which we have used this support since 1988, we work with two complementary pairs of theoretical dimensions of narratives in visual and audiovisual sources and their use in such empirical research: subjectivity/truth and space/time. These dimensions are grounded in Barthes (1984) to propose an interpretive effort toward understanding the essence of photography according to the photographer and according to the photographed person, combined with the essence of photography according to the researcher. The Barthesian constructs studium and punctum are applied to reading the narratives of the filmic and photographic material, reaching the most radical expression of the Barthesian punctum: the real or representational death of the referent that underlies the photos and films. The discussion of these dimensions for the analysis of (audio)visual sources is complemented with the support of several other authors.

  10. Development and testing of an audio-visual aid for improving infant oral health through primary caregiver education.

    Science.gov (United States)

    Alsada, Lisa H; Sigal, Michael J; Limeback, Hardy; Fiege, James; Kulkarni, Gajanan V

    2005-04-01

To create and test an audio-visual (AV) aid for providing anticipatory guidance on infant oral health to caregivers. A DVD-video containing evidence-based information about infant oral health care and prevention in accordance with the American Academy of Pediatric Dentistry guidelines has been developed (www.utoronto.ca/dentistry/newsresources/kids/). It contains comprehensive anticipatory guidance in the areas of pregnancy, oral development, teething, diet and nutrition, oral hygiene, fluoride use, acquisition of oral bacteria, feeding and oral habits, causes and sequelae of early childhood caries, trauma prevention, early dental visits and regular dental visits. A questionnaire was developed to test the knowledge of expectant and young mothers (n = 11) and early childhood educators (n = 16) before and after viewing the video. A significant lack of knowledge about infant oral health was indicated by the proportion of "I don't know" (22%) and incorrect (19%) responses to the questionnaire before the viewing. A significant improvement in knowledge (32%; range -3% to 57%) was observed after viewing the AV aid. This AV aid promises to be an effective tool in providing anticipatory guidance regarding infant oral health in high-risk populations. Unlike existing educational materials, this aid provides a comprehensive, self-directed, evidence-based approach to the promotion of infant oral health. Widespread application of this prevention protocol has the potential to result in greater awareness, increased use of dental services and reduced incidence of preventable oral disease in the target populations.

  11. Student's preference of various audiovisual aids used in teaching pre- and para-clinical areas of medicine

    Directory of Open Access Journals (Sweden)

    Navatha Vangala

    2015-01-01

Full Text Available Introduction: The formal lecture is among the oldest teaching methods that have been widely used in medical education. Delivering a lecture is made easier and better by the use of audiovisual aids (AV aids) such as a blackboard or whiteboard, an overhead projector, and PowerPoint presentations (PPT). Objective: To determine students' preference among various AV aids and their use in medical education, with the aim of improving their use in didactic lectures. Materials and Methods: The study was carried out among 230 undergraduate medical students of first and second M.B.B.S studying at Malla Reddy Medical College for Women, Hyderabad, Telangana, India, during the month of November 2014. Students were asked to answer a questionnaire on the use of AV aids for various aspects of learning. Results: This study indicates that students preferred PPT the most for a didactic lecture, for better perception of diagrams and flowcharts. Ninety-five percent of the students (first and second M.B.B.S) were stimulated to further reading if they attended a lecture augmented by the use of visual aids. A teacher with good teaching skills and AV aids (58%) was preferred over a teacher with only good teaching skills (42%). Conclusion: Our study demonstrates that lectures delivered using PPT were more appreciated and preferred by the students. Furthermore, teachers with a proper lesson plan and good interactive and communication skills are needed for an effective lecture presentation.

  12. Neural mechanisms for the effect of prior knowledge on audiovisual integration.

    Science.gov (United States)

    Liu, Qiang; Zhang, Ye; Campos, Jennifer L; Zhang, Qinglin; Sun, Hong-Jin

    2011-05-01

    Converging evidence indicates that prior knowledge plays an important role in multisensory integration. However, the neural mechanisms underlying the processes by which prior knowledge is integrated with current sensory information remain unknown. In this study, we measured event-related potentials (ERPs) while manipulating prior knowledge using a novel visual letter recognition task in which auditory information was always presented simultaneously. The color of the letters was assigned a particular probability of being associated with audiovisual congruency (e.g., green = high probability (HP) and blue = low probability (LP)). Results demonstrate that this prior began affecting reaction times to the congruent audiovisual stimuli at about the 900th trial. Consequently, the ERP data were analyzed in two phases: the "early phase" (trials before the 900th) and the "late phase" (trials after the 900th). The effects of prior knowledge were revealed through difference waveforms generated by subtracting the ERPs for the congruent audiovisual stimuli in the LP condition from those in the HP condition. A frontal-central probability effect (90-120 ms) was observed in the early phase. A right parietal-occipital probability effect (40-96 ms) and a frontal-central probability effect (170-200 ms) were observed in the late phase. The results suggest that during the initial acquisition of the knowledge about the probability of congruency, the brain assigned more attention to audiovisual stimuli in the LP condition. Once acquired, this prior knowledge was then used during early stages of visual processing and modulated the activity of multisensory cortical areas. Copyright © 2011 Elsevier B.V. All rights reserved.
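
    The HP-minus-LP difference waveform described above is a standard subtraction of condition-averaged ERPs. The sketch below illustrates the computation on synthetic single-trial epochs (the trial, channel, and sample counts are invented for demonstration and are not the study's recordings):

```python
import numpy as np

def erp_difference_wave(epochs_hp, epochs_lp):
    """HP-minus-LP ERP difference wave.

    epochs_hp, epochs_lp: (n_trials, n_channels, n_samples) arrays of
    single-trial EEG epochs for the high-probability (HP) and
    low-probability (LP) congruent audiovisual conditions.
    """
    erp_hp = epochs_hp.mean(axis=0)  # trial-average -> per-condition ERP
    erp_lp = epochs_lp.mean(axis=0)
    return erp_hp - erp_lp           # (n_channels, n_samples)

# Synthetic demo: 100 trials, 32 channels, 500 samples per epoch,
# with a constant 0.5 offset injected into the HP condition.
rng = np.random.default_rng(0)
hp = rng.normal(0.0, 1.0, (100, 32, 500)) + 0.5
lp = rng.normal(0.0, 1.0, (100, 32, 500))
diff = erp_difference_wave(hp, lp)
print(diff.shape)  # (32, 500)
```

    Probability effects like those reported (e.g., the 90-120 ms frontal-central effect) would then be assessed on time windows and channel groups of `diff`.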

  13. Catalog of Audiovisual Productions. Volume 2. Navy and Marine Corps Productions

    Science.gov (United States)

    1984-06-01

    [OCR-garbled catalog text; legible fragments mention a general audiovisual series titled "Code of Conduct" narrated by David Maran, available at Navy general audiovisual libraries and Marine Corps libraries, and a talk on effective communication by Dr. David McClelland, with remarks noting limited internal distribution.]

  14. Auditory Perceptual Learning for Speech Perception Can Be Enhanced by Audiovisual Training

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2013-03-01

    Full Text Available Speech perception under audiovisual conditions is well known to confer benefits to perception, such as increased speed and accuracy. Here, we investigated how audiovisual training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures in a protocol with a fixed number of trials. In Experiment 1, paired-associates (PA) audiovisual (AV) training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early audiovisual speech integration can potentially impede auditory perceptual learning, but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  15. Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration.

    Science.gov (United States)

    Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong

    2018-02-01

    Recent research has shown that the magnitudes of responses to multisensory information are highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant for cross-modal integration. Here, we investigated the influence that temporal asynchrony has on audiovisual integration in both younger and older adults using event-related potentials (ERPs). Our results showed that in the simultaneous audiovisual condition, except for the earliest integration (80-110 ms), which occurred in the occipital region for older adults but was absent for younger adults, early integration was similar for the younger and older groups. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration only occurred in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggest that the audiovisual temporal integration pattern differs between the audition-leading and audition-lagging vision conditions and further reveal the varying effect of temporal asynchrony on audiovisual integration in younger and older adults. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. "You're Talking Like the Computer in the Movie". Allusions in Audiovisual Translation

    Directory of Open Access Journals (Sweden)

    Irene Ranzato

    2014-06-01

    Full Text Available This article explores allusions as specific cultural references, contained either explicitly or implicitly in the dramatic dialogues and images of television shows. Moreover, it looks particularly at the problem of their translation, as it has been discussed in Translation Studies and in Audiovisual Translation. A quantitative and qualitative analysis of these elements has been carried out on the basis of a substantial corpus of television programmes selected and categorized by means of a Descriptive Translation Studies paradigm.

  17. Audio-visual interaction in visual motion detection: Synchrony versus Asynchrony.

    Science.gov (United States)

    Rosemann, Stephanie; Wefel, Inga-Maria; Elis, Volkan; Fahle, Manfred

    Detection and identification of moving targets is of paramount importance in everyday life, even if it is not widely tested in optometric practice, mostly for technical reasons. There are clear indications in the literature that in perception of moving targets, vision and hearing interact, for example in noisy surrounds and in understanding speech. The main aim of visual perception, the ability that optometry aims to optimize, is the identification of objects, from everyday objects to letters, but also the spatial orientation of subjects in natural surrounds. To subserve this aim, corresponding visual and acoustic features from the rich spectrum of signals supplied by natural environments have to be combined. Here, we investigated the influence of an auditory motion stimulus on visual motion detection, both with a concrete (left/right movement) and an abstract auditory motion (increase/decrease of pitch). We found that incongruent audiovisual stimuli led to significantly inferior detection compared to the visual only condition. Additionally, detection was significantly better in abstract congruent than incongruent trials. For the concrete stimuli the detection threshold was significantly better in asynchronous audiovisual conditions than in the unimodal visual condition. We find a clear but complex pattern of partly synergistic and partly inhibitory audio-visual interactions. It seems that asynchrony plays only a positive role in audiovisual motion while incongruence mostly disturbs in simultaneous abstract configurations but not in concrete configurations. As in speech perception in hearing-impaired patients, patients suffering from visual deficits should be able to benefit from acoustic information. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  18. The audiovisual editing narrative as a basis for the interactive documentary film: new studies

    OpenAIRE

    Mgs. Denis Porto Renó

    2008-01-01

    This paper presents a literature review and pilot results from the doctoral research "audiovisual editing language for the interactive documentary film," which defends the thesis that interactive features exist in the audio and video editing of film, even as agents of interactivity. The search for interactive audiovisual formats is present in international investigations, but mainly from a technological perspective. The paper proposes possible formats for interact...

  19. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    Science.gov (United States)

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final-year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent-group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t = 2.38) were found in relation to the transfer of skills learned in simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge, and this is vital to quality educational outcomes. Copyright © 2017.

  20. Using multiple visual tandem streams in audio-visual speech recognition

    OpenAIRE

    Topkaya, İbrahim Saygın; Erdoğan, Hakan

    2011-01-01

    The method which is called the "tandem approach" in speech recognition has been shown to increase performance by using classifier posterior probabilities as observations in a hidden Markov model. We study the effect of using visual tandem features in audio-visual speech recognition using a novel setup which uses multiple classifiers to obtain multiple visual tandem features. We adopt the approach of multi-stream hidden Markov models where visual tandem features from two different classifiers ...
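
    The "tandem approach" mentioned above is commonly implemented by taking logarithms of frame-level classifier posterior probabilities and decorrelating them (e.g., with PCA) before using them as HMM observations; with multiple classifiers, one such stream is derived per classifier. Below is a minimal sketch of that feature pipeline only (class counts, dimensionalities, and the random posteriors are illustrative assumptions; the authors' actual classifiers and multi-stream HMM are not reproduced here):

```python
import numpy as np

def log_posterior_features(posteriors, n_components):
    """Tandem features: log of class posteriors (to Gaussianize them),
    then PCA decorrelation/reduction, suitable as HMM observations."""
    logp = np.log(posteriors + 1e-10)      # guard against log(0)
    centered = logp - logp.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    return centered @ vt[:n_components].T  # (n_frames, n_components)

# Hypothetical frame-level posteriors from two visual classifiers.
rng = np.random.default_rng(1)
post_a = rng.dirichlet(np.ones(40), size=200)  # 40-class posteriors, 200 frames
post_b = rng.dirichlet(np.ones(25), size=200)  # 25-class posteriors, 200 frames

# One tandem stream per classifier; the streams could then be modelled as
# separate observation streams of a multi-stream HMM.
stream_a = log_posterior_features(post_a, 12)
stream_b = log_posterior_features(post_b, 12)
combined = np.hstack([stream_a, stream_b])
print(combined.shape)  # (200, 24)
```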

  1. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi Kafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between the directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations, and their subsequent influences on visual motion perception, depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing has some potential role.

  2. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Science.gov (United States)

    Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory

    2013-01-01

    Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  3. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.

  4. Inverse effectiveness and multisensory interactions in visual event-related potentials with audiovisual speech.

    Science.gov (United States)

    Stevenson, Ryan A; Bushmakin, Maxim; Kim, Sunah; Wallace, Mark T; Puce, Aina; James, Thomas W

    2012-07-01

    In recent years, it has become evident that neural responses previously considered to be unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited to viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited to visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal to noise, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, we measured a monotonic increase of the amplitude of the visual P1-N1-P2 ERP complex during a spoken-word recognition task with increases in stimulus salience. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing.
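
    In the additive-factors logic used here, multisensory interaction is indexed by comparing the audiovisual ERP with the sum of the unisensory ERPs, and inverse effectiveness predicts that relative multisensory gain grows as salience drops. A toy illustration with invented peak amplitudes (the numbers below are for demonstration only and are not the study's data):

```python
import numpy as np

# Hypothetical ERP peak amplitudes (uV) at three salience (signal-to-noise) levels.
salience = ["high", "mid", "low"]
amp_a = np.array([4.0, 3.0, 2.0])    # auditory-only
amp_v = np.array([3.0, 2.2, 1.5])    # visual-only
amp_av = np.array([7.5, 6.0, 4.5])   # audiovisual

# Additive model: audiovisual vs. summed unisensory responses.
gain = amp_av - (amp_a + amp_v)      # superadditive if > 0
rel_gain = gain / (amp_a + amp_v)    # gain relative to summed unisensory

# Inverse effectiveness: relative gain increases as salience decreases.
for level, g in zip(salience, rel_gain):
    print(level, round(float(g), 3))
```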

  5. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    Science.gov (United States)

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

  6. Cross-modal matching of audio-visual German and French fluent speech in infancy.

    Directory of Open Access Journals (Sweden)

    Claudia Kubicek

    Full Text Available The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life.

  7. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Kayoko Okada

    Full Text Available Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  8. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  9. Commissioning and quality assurance for a respiratory training system based on audiovisual biofeedback

    Science.gov (United States)

    Cui, Guoqiang; Gopalan, Siddharth; Yamamoto, Tokihiro; Berger, Jonathan; Maxim, Peter G.; Keall, Paul J.

    2010-01-01

    A respiratory training system based on audiovisual biofeedback has been implemented at our institution. It is intended to improve patients’ respiratory regularity during four-dimensional (4D) computed tomography (CT) image acquisition. The purpose is to help eliminate the artifacts in 4D-CT images caused by irregular breathing, as well as improve delivery efficiency during treatment, where respiratory irregularity is a concern. This article describes the commissioning and quality assurance (QA) procedures developed for this peripheral respiratory training system, the Stanford Respiratory Training (START) system. Using the Varian real-time position management system for the respiratory signal input, the START software was commissioned and able to acquire sample respiratory traces, create a patient-specific guiding waveform, and generate audiovisual signals for improving respiratory regularity. Routine QA tests that include hardware maintenance, visual guiding-waveform creation, auditory sounds synchronization, and feedback assessment, have been developed for the START system. The QA procedures developed here for the START system could be easily adapted to other respiratory training systems based on audiovisual biofeedback. PMID:21081883
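
    A patient-specific guiding waveform of the kind the START system creates can be sketched by averaging the breathing cycles of a sample respiratory trace. The segmentation below (upward zero crossings after light smoothing) and the synthetic trace are assumptions for illustration; the START system's actual algorithm is not documented in this abstract:

```python
import numpy as np

def guiding_waveform(trace, fs, n_points=100):
    """Average breathing cycle from a sampled respiratory trace.

    trace: 1-D respiratory displacement signal; fs: sampling rate (Hz).
    Cycles are delimited by upward zero crossings of the smoothed,
    mean-subtracted trace, resampled to n_points, and averaged.
    Returns (waveform, mean cycle period in seconds).
    """
    x = trace - np.mean(trace)
    w = max(1, int(0.5 * fs))                       # ~0.5 s smoothing window
    x = np.convolve(x, np.ones(w) / w, mode="same")
    starts = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0))
    cycles = []
    for a, b in zip(starts[:-1], starts[1:]):
        seg = trace[a:b]
        # resample each cycle to a common length before averaging
        xs = np.linspace(0, len(seg) - 1, n_points)
        cycles.append(np.interp(xs, np.arange(len(seg)), seg))
    period = float(np.mean(np.diff(starts))) / fs
    return np.mean(cycles, axis=0), period

# Synthetic quasi-regular breathing: 0.25 Hz cosine with mild noise, 60 s at 25 Hz.
fs = 25
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
trace = np.cos(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)
wave, period = guiding_waveform(trace, fs)
print(wave.shape, round(period, 2))  # 100-point waveform; period near 4 s
```

    The averaged waveform would then drive the audiovisual display that the patient follows during 4D-CT acquisition or treatment.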

  10. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    Science.gov (United States)

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-01-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ±250, ±400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.

  11. Dissociating Attention and Audiovisual Integration in the Sound-Facilitatory Effect on Metacontrast Masking

    Directory of Open Access Journals (Sweden)

    Yi-Chia Chen

    2011-10-01

    Full Text Available In metacontrast masking, target visibility is impaired by a subsequent non-overlapping contour-matched mask, a phenomenon attributed to low-level processing. Previously we found that sound could reduce metacontrast masking (Yeh & Chen, 2010), and yet how it exerts its effect, and whether the sound-triggered attention system plays a major role, remains unresolved. Here we examine whether the sound-facilitatory effect is caused by alertness, attentional cueing, or audiovisual integration. Two sounds were either presented simultaneously with the target and the mask respectively, or one preceded the target by 100 ms and the other followed the mask 100 ms afterwards. No-sound and one-sound conditions were used for comparison. Participants discriminated the truncated part (up or down) of the target, with four target-to-mask SOAs (14 ms, 43 ms, 114 ms, and 157 ms) mixedly presented. Results showed that the attentional cueing effect was evident when compared to the one-leading-sound condition. Additionally, selective (rather than overall) improvement in accuracy and RT as a function of SOA was found with a synchronized sound than without, suggesting audiovisual integration but not alertness. The audio-visual integration effect is attributed to enhanced temporal resolution but not temporal ventriloquism.

  12. The Problems and Challenges of Managing Crowd Sourced Audio-Visual Evidence

    Directory of Open Access Journals (Sweden)

    Harjinder Singh Lallie

    2014-04-01

    Full Text Available A number of recent incidents, such as the Stanley Cup Riots, the uprisings in the Middle East and the London riots, have demonstrated the value of crowd sourced audio-visual evidence wherein citizens submit audio-visual footage captured on mobile phones and other devices to aid governmental institutions, responder agencies and law enforcement authorities to confirm the authenticity of incidents and, in the case of criminal activity, to identify perpetrators. The use of such evidence can present a significant logistical challenge to investigators, particularly because of the potential size of data gathered through such mechanisms and the added problems of time-lining disparate sources of evidence and, subsequently, investigating the incident(s). In this paper we explore this problem and, in particular, outline the pressure points for an investigator. We identify and explore a number of particular problems related to the secure receipt of the evidence, imaging, tagging and then time-lining the evidence, and the problem of identifying duplicate and near-duplicate items of audio-visual evidence.

  13. Collective audiovisual creation and online collaborative culture: projects and strategies

    Directory of Open Access Journals (Sweden)

    Jordi Alberich Pascual

    2012-04-01

    Full Text Available This article analyses the growing development of collectively created audiovisual projects on and through the Internet. To that end, it first explores the implications that interactive multimedia systems have for the redefinition of the traditional author function, as well as their connection with strategies for networked collaborative work. We then focus on the use and development of free audiovisual software resources as a paradigmatic example of the vitality of a growing collaborative culture in the contemporary audiovisual field. Finally, the article concludes by establishing the basic identifying features of three distinct approaches to the tasks and work strategies involved in the collective audiovisual creation projects analysed in the course of our research.

  14. On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

    Directory of Open Access Journals (Sweden)

    Wesley Mattheyses

    2009-01-01

    Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, the perception of mismatches between these two information streams requires experimental exploration since it could degrade the quality of the output. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived visual signal's quality.

  15. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    Science.gov (United States)

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
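
The linear forward-mapping this abstract describes (regressing the EEG response on time-lagged copies of the speech envelope, often called a temporal response function) can be sketched as ridge regression. The lag window, regularisation value, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lag_matrix(env, min_lag, max_lag):
    """Build a design matrix of time-lagged copies of the stimulus envelope.

    env: 1-D speech envelope (samples,); lags are in samples, with positive
    lags meaning the neural response follows the stimulus.
    """
    n = len(env)
    lags = range(min_lag, max_lag + 1)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = env[:n - lag]
        else:
            X[:n + lag, j] = env[-lag:]
    return X

def fit_trf(env, eeg, min_lag=0, max_lag=25, lam=1.0):
    """Ridge-regularised forward model from envelope to EEG.

    eeg: (samples, channels). Returns weights of shape (lags, channels);
    the peak lag of the weights indexes the cortical latency of interest.
    """
    X = lag_matrix(env, min_lag, max_lag)
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)
```

With synthetic data whose true response kernel is known, `fit_trf` recovers the kernel up to noise, which is the sense in which the estimated weights can be read as a latency profile.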

  16. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    Directory of Open Access Journals (Sweden)

    Tobias Søren Andersen

    2015-04-01

    Lesions to Broca’s area cause aphasia characterised by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca’s area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca’s area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca’s aphasia did not experience the McGurk illusion, suggesting that an intact Broca’s area is necessary for audiovisual integration of speech. Here we describe a patient with Broca’s aphasia who experienced the McGurk illusion. This indicates that an intact Broca’s area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca’s area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke’s aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca’s aphasia.

  17. Child's dental fear: cause related factors and the influence of audiovisual modeling.

    Science.gov (United States)

    Mungara, Jayanthi; Injeti, Madhulika; Joseph, Elizabeth; Elangovan, Arun; Sakthivel, Rajendran; Selvaraju, Girija

    2013-01-01

    Delivery of effective dental treatment to a child patient requires thorough knowledge to recognize dental fear and to manage it through behavioral management techniques. The Children's Fear Survey Schedule - Dental Subscale (CFSS-DS) helps identify the specific stimuli that provoke fear in children in the dental situation. Audiovisual modeling can be successfully used in pediatric dental practice. The aims were to assess the degree of fear provoked by various stimuli in the dental office and to evaluate the effect of audiovisual modeling on children's dental fear using the CFSS-DS. Ninety children were divided equally into experimental (group I) and control (group II) groups and were assessed over two visits for their degree of fear and the effect of audiovisual modeling, with the help of the CFSS-DS. The most fear-provoking stimulus for children was injection; the least fear-provoking were opening the mouth and being looked at. There was no statistically significant difference in the overall mean CFSS-DS scores between the two groups during the initial session (P > 0.05). In the final session, however, a statistically significant difference was observed in the overall mean fear scores between the groups. Audiovisual modeling resulted in a significant reduction of overall fear as well as of specific fear on most items: a significant reduction of fear toward dentists, doctors in general, injections, being looked at, the sight, sounds, and act of the dentist drilling, and having the nurse clean their teeth was observed.

  18. Maladaptive connectivity of Broca's area in schizophrenia during audiovisual speech perception: an fMRI study.

    Science.gov (United States)

    Szycik, G R; Ye, Z; Mohammadi, B; Dillo, W; Te Wildt, B T; Samii, A; Frieling, H; Bleich, S; Münte, T F

    2013-12-03

    Speech comprehension relies on auditory as well as visual information, and is enhanced in healthy subjects when audiovisual (AV) information is present. Patients with schizophrenia have been reported to have problems regarding this AV integration process, but little is known about which underlying neural processes are altered. Functional magnetic resonance imaging was performed in 15 schizophrenia patients (SP) and 15 healthy controls (HC) to study functional connectivity of Broca's area by means of a beta series correlation method during perception of audiovisually presented bisyllabic German nouns, in which audio and video either matched or did not match. Broca's area of SP showed stronger connectivity with supplementary motor cortex for incongruent trials, whereas HC connectivity was stronger for congruent trials. The right posterior superior temporal sulcus (RpSTS) area showed differences in connectivity for congruent and incongruent trials in HC, in contrast to SP, where the connectivity was similar for both conditions. These smaller differences in connectivity in SP suggest less adaptive processing of audiovisually congruent and incongruent speech. The findings imply that AV integration problems in schizophrenia are associated with maladaptive connectivity of Broca's area and the RpSTS area, particularly when patients are confronted with incongruent stimuli. Results are discussed in light of recent AV speech perception models. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Infant perception of audio-visual speech synchrony in familiar and unfamiliar fluent speech.

    Science.gov (United States)

    Pons, Ferran; Lewkowicz, David J

    2014-06-01

    We investigated the effects of linguistic experience and language familiarity on the perception of audio-visual (A-V) synchrony in fluent speech. In Experiment 1, we exposed a group of monolingual Spanish- and Catalan-learning 8-month-old infants to a video clip of a person speaking Spanish. Following habituation to the audiovisually synchronous video, infants saw and heard desynchronized clips of the same video where the audio stream now preceded the video stream by 366, 500, or 666 ms. In Experiment 2, monolingual Catalan and Spanish infants were tested with a video clip of a person speaking English. In both experiments, infants detected the 666 and 500 ms asynchronies; that is, their responsiveness to A-V synchrony was the same regardless of their specific linguistic experience or familiarity with the tested language. Compared to previous results from infant studies with isolated audiovisual syllables, these results show that infants are more sensitive to A-V temporal relations inherent in fluent speech. Furthermore, the absence of a language familiarity effect on the detection of A-V speech asynchrony at eight months of age is consistent with the broad perceptual tuning usually observed in infant response to linguistic input at this age. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Congruent and Incongruent Cues in Highly Familiar Audiovisual Action Sequences: An ERP Study

    Directory of Open Access Journals (Sweden)

    SM Wuerger

    2012-07-01

    In a previous fMRI study we found significant differences in BOLD responses to congruent and incongruent semantic audio-visual action sequences (whole-body actions and speech actions) in bilateral pSTS, left SMA, left IFG, and IPL (Meyer, Greenlee, & Wuerger, JOCN, 2011). Here, we present results from a 128-channel ERP study that examined the time-course of these interactions using a one-back task. ERPs in response to congruent and incongruent audio-visual actions were compared to identify regions and latencies of differences. Responses to congruent and incongruent stimuli differed between 240–280 ms, 340–420 ms, and 460–660 ms after stimulus onset. A dipole analysis revealed that the difference around 250 ms can be partly explained by a modulation of sources in the vicinity of the superior temporal area, while the responses after 400 ms are consistent with sources in inferior frontal areas. Our results are in line with a model that postulates early recognition of congruent audiovisual actions in the pSTS, perhaps as a sensory memory buffer, and a later role of the IFG, perhaps in a generative capacity, in reconciling incongruent signals.

  1. Educar em comunicação audiovisual: um desafio para a Cuba “atualizada”

    Directory of Open Access Journals (Sweden)

    Liudmila Morales Alfonso

    2017-09-01

    The article analyzes the relevance of audiovisual media education in Cuba at a time when updating the country's economic and social model has become a government priority. The "selective isolation" that for decades favored an exclusive audiovisual offering concentrated in state media has been shaken since 2008 by the rise of the "paquete" (package), an informal alternative for distributing content. Audiences now consume the foreign audiovisual products of their choice, at the times they choose. Yet, faced with this shift in audiovisual consumption patterns, acknowledged in official and press discourse, the government's strategy favors protectionist alternatives to the "banal" rather than assuming formal responsibility for empowering citizens.

  2. The Picmonic® Learning System: enhancing memory retention of medical sciences, using an audiovisual mnemonic Web-based learning platform

    Directory of Open Access Journals (Sweden)

    Yang A

    2014-05-01

    Adeel Yang,1,* Hersh Goel,1,* Matthew Bryan,2 Ron Robertson,1 Jane Lim,1 Shehran Islam,1 Mark R Speicher2 1College of Medicine, The University of Arizona, Tucson, AZ, USA; 2Arizona College of Osteopathic Medicine, Midwestern University, Glendale, AZ, USA *These authors contributed equally to this work Background: Medical students are required to retain vast amounts of medical knowledge on the path to becoming physicians. To address this challenge, multimedia Web-based learning resources have been developed to supplement traditional text-based materials. The Picmonic® Learning System (PLS; Picmonic, Phoenix, AZ, USA) is a novel multimedia Web-based learning platform that delivers audiovisual mnemonics designed to improve memory retention of medical sciences. Methods: A single-center, randomized, subject-blinded, controlled study was conducted to compare the PLS with traditional text-based material for retention of medical science topics. Subjects were randomly assigned to use two different types of study materials covering several diseases. Subjects randomly assigned to the PLS group were given audiovisual mnemonics along with text-based materials, whereas subjects in the control group were given the same text-based materials with key terms highlighted. The primary endpoints were the differences in performance on immediate, 1 week, and 1 month delayed free-recall and paired-matching tests. The secondary endpoints were the difference in performance on a 1 week delayed multiple-choice test and self-reported satisfaction with the study materials. Differences were calculated using unpaired two-tailed t-tests. Results: PLS group subjects demonstrated improvements of 65%, 161%, and 208% compared with control group subjects on free-recall tests conducted immediately, 1 week, and 1 month after study of materials, respectively. The results of performance on paired-matching tests showed an improvement of up to 331% for PLS group subjects. PLS group

  3. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

    Directory of Open Access Journals (Sweden)

    Blomert Leo

    2010-02-01

    Background: Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional magnetic resonance imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. Results: The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. Conclusions: These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for

  4. Interlibrary Loan of Audiovisuals May Bring a Lawsuit

    Science.gov (United States)

    Simpson, Carol

    2008-01-01

    Because so much information exists only in video format--from documentaries to instructional videos to entertainment films that create a significant amount of the popular culture--libraries have increased their collections of these materials. Nevertheless, no library could possibly afford to collect every item its patrons might one day request.…

  5. Assessment of Audiovisual Resource in two Selected Medical ...

    African Journals Online (AJOL)

    BACKGROUND: Access and adequate utilization of information materials is the prime objective of Information Centers. The information ... METHODS: The data instrument used was structured questionnaire administered to 200 of the 917 registered Students' population of the two Colleges and the response was 75%.

  6. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

    The information society poses new challenges to researchers. As audiovisual communication has consolidated as a discipline, cultural studies has emerged as an advantageous analytical perspective for approaching new creative and consumption practices in audiovisual media. This article argues for the study of the audiovisual cultural products that this digital society produces, since they bear witness to the social changes taking place within it. Specifically, it proposes approaching social advertising and collectively created objects on the Internet as a means of understanding the circumstances of our society.

  7. Audiovisual biofeedback breathing guidance for lung cancer patients receiving radiotherapy: a multi-institutional phase II randomised clinical trial.

    Science.gov (United States)

    Pollock, Sean; O'Brien, Ricky; Makhija, Kuldeep; Hegi-Johnson, Fiona; Ludbrook, Jane; Rezo, Angela; Tse, Regina; Eade, Thomas; Yeghiaian-Alvandi, Roland; Gebski, Val; Keall, Paul J

    2015-07-18

    There is a clear link between irregular breathing and errors in medical imaging and radiation treatment. The audiovisual biofeedback system is an advanced form of respiratory guidance that has previously been demonstrated to facilitate regular patient breathing. The clinical benefits of audiovisual biofeedback will be investigated in an upcoming multi-institutional, randomised, and stratified clinical trial recruiting a total of 75 lung cancer patients undergoing radiation therapy. To comprehensively perform a clinical evaluation of the audiovisual biofeedback system, a multi-institutional study will be performed. Our methodological framework will be based on the widely used Technology Acceptance Model, which gives qualitative scales for two specific variables, perceived usefulness and perceived ease of use, which are fundamental determinants of user acceptance. A total of 75 lung cancer patients will be recruited across seven radiation oncology departments in Australia. Patients will be randomised in a 2:1 ratio, with 2/3 of the patients allocated to the intervention arm and 1/3 to the control arm. 2:1 randomisation is appropriate because within the intervention arm there is a screening procedure in which only patients whose breathing is more regular with audiovisual biofeedback will continue to use this system for their imaging and treatment procedures. Patients within the intervention arm whose free breathing is more regular than with audiovisual biofeedback in the screening procedure will remain in the intervention arm of the study, but their imaging and treatment procedures will be performed without audiovisual biofeedback. Patients will also be stratified by treating institution and by treatment intent (palliative vs. radical) to ensure similar balance in the arms across the sites. Patients and hospital staff operating the audiovisual biofeedback system will complete questionnaires to assess their experience with audiovisual biofeedback. The objectives of this
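
The 2:1 allocation stratified by institution and treatment intent described above can be sketched with permuted blocks, keeping an independent block sequence per (institution, intent) stratum. The block size of 3 and all names here are assumptions for illustration, not the trial's actual randomisation procedure.

```python
import random

def block_stream(rng):
    """Yield arms in shuffled permuted blocks of 3 (2 intervention : 1 control)."""
    while True:
        block = ["intervention", "intervention", "control"]
        rng.shuffle(block)
        yield from block

def stratified_allocation(patients, seed=42):
    """Assign each patient to an arm, stratified by institution and intent.

    patients: iterable of (patient_id, institution, intent) tuples.
    Each stratum gets its own deterministic permuted-block sequence, so the
    2:1 ratio is exactly balanced within a stratum after each complete block.
    """
    streams = {}
    allocation = {}
    for pid, institution, intent in patients:
        key = (institution, intent)
        if key not in streams:
            # One seeded generator per stratum (string seeds are accepted).
            streams[key] = block_stream(random.Random(f"{seed}-{institution}-{intent}"))
        allocation[pid] = next(streams[key])
    return allocation
```

Permuted blocks guarantee the 2:1 ratio never drifts far within any stratum, which is the point of stratifying by site and treatment intent.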

  8. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators.

    Science.gov (United States)

    Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P

    2017-04-01

    A large portion of road traffic crashes occur at intersections because drivers lack the necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with the audio-visual display off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.
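
The display logic described above (a blink/beep rate that rises with the approaching car's speed, lateralised to the approach side) can be sketched as below. The paper only states that the rates were a function of speed, so the linear mapping and the specific speed and rate bounds are assumptions.

```python
def display_state(approach_speed_kmh, approach_side,
                  v_min=10.0, v_max=80.0, rate_min=1.0, rate_max=8.0):
    """Map an approaching car's speed and direction to display parameters.

    approach_side: "left" or "right" -- selects which speedometer light
    blinks and which headphone ear receives the beeps.
    Returns (side, rate_hz). Speeds are clamped to [v_min, v_max] and
    mapped linearly onto [rate_min, rate_max]; these bounds and the
    linear form are illustrative assumptions.
    """
    v = min(max(approach_speed_kmh, v_min), v_max)
    frac = (v - v_min) / (v_max - v_min)
    rate_hz = rate_min + frac * (rate_max - rate_min)
    return approach_side, rate_hz
```

A faster approach thus produces faster blinking and beeping on the corresponding side, giving the driver a pre-attentive cue about both direction and urgency.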

  9. The effect of the lecture discussion teaching method with and without audio-visual augmentation on immediate and retention learning.

    Science.gov (United States)

    Andrusyszyn, M A

    1990-06-01

    This study determined whether students taught using the lecture-discussion method augmented with audio-visuals would achieve a higher mean score on an immediate post-test and a delayed retention test than students presented with a lecture-discussion without audio-visuals. A convenience sample of 52 students divided into two groups voluntarily participated in the quasi-experiment. Two teaching sessions averaging 90 minutes in length were taught by the researcher. Learning and retention were measured by a 10-item multiple-choice test with content validity. Immediate learning was measured with a post-test administered immediately following each of the teaching sessions. Delayed learning was measured with a retention test administered 25.5 days after the teaching sessions. Group data were analysed using an independent one-tailed t-test on mean scores. Students attending the lecture-discussion with audio-visual augmentation did not achieve significantly higher mean scores on the two tests than the non-augmented group (p ≤ 0.05). Analysis using a paired t-test revealed that the difference in scores between the post-test and retention test for the group without audio-visual augmentation was significant (t = 2.31; p < 0.05). Delayed retention appears to have been influenced by the use of audio-visuals. Nurse educators need to consider ways in which the lecture-discussion may be enhanced to maximise student learning and retention.
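
The paired t-test used above to compare post-test and retention scores within a group can be sketched in a few lines of pure Python; the scores in the usage note are made up for illustration.

```python
import math

def paired_t(x, y):
    """Paired t-statistic and degrees of freedom for two matched score lists.

    Computes the mean of the pairwise differences divided by the standard
    error of that mean; degrees of freedom are n - 1.
    """
    assert len(x) == len(y), "paired samples must have equal length"
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)  # sample variance
    se = math.sqrt(var / n)
    return mean / se, n - 1
```

For example, post-test scores [5, 6, 7, 8] against retention scores [4, 5, 6, 6] give t = 5.0 with 3 degrees of freedom; the statistic is then compared against the t-distribution with n - 1 degrees of freedom.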

  10. Éticas y estéticas de la posmemoria en el audiovisual contemporáneo

    Directory of Open Access Journals (Sweden)

    Laia Quílez Esteve

    2015-10-01

    Although the concept of postmemory was forged from reflections on the representation and transmission of the Holocaust, in recent years the term has also been used to describe a set of productions engendered in other geographical contexts (Spain, Argentina, Chile…) that consequently appeal to equally diverse traumatic pasts. This paper aims to trace the conceptual bases and knots of what we might consider the "aesthetics (and ethics) of postmemory". To this end, we try to unravel the approaches that, at a formal and ideological level, underlie much of the contemporary audiovisual production that recovers, across a marked generational distance, the complex and elusive material of memory. Keywords: Postmemory, generational memory, Spanish Civil War, documentary film, photography, trauma, Holocaust.

  11. Narrativa audiovisual, ontología y terrorismo: paradojas comunicativas en los videos del Estado Islámico

    Directory of Open Access Journals (Sweden)

    Aarón Rodríguez-Serrano

    2017-01-01

    This article analyzes the processes of signification and the global impact of the audiovisual material produced by the self-proclaimed Islamic State. It is undoubtedly a contemporary object of study of the first order, insofar as much of the geopolitical crisis in the Middle East, as well as the migratory flows toward Europe, currently hinges on the military actions of the terrorist forces that emerged from the so-called Arab Springs. Our starting hypothesis is that the Islamic State's communicative technique is not only a savage evolution of the communicative processes of postmodern societies themselves, but is also deeply paradoxical and inconsistent in its processes of signification. To analyze this material, a hybrid methodology is used, composed, first, of a political ontology that traces the relations between spectacle, atrocity, terrorism and political control, including references to the existence of the obscene law as an ideological foundation for generating messages. Second, some of the audiovisual narrative resources that generate signification in these pieces are briefly reviewed.

  12. Assessing the effect of culturally specific audiovisual educational interventions on attaining self-management skills for chronic obstructive pulmonary disease in Mandarin- and Cantonese-speaking patients: a randomized controlled trial.

    Science.gov (United States)

    Poureslami, Iraj; Kwan, Susan; Lam, Stephen; Khan, Nadia A; FitzGerald, John Mark

    2016-01-01

    Patient education is a key component in the management of chronic obstructive pulmonary disease (COPD). Delivering effective education to ethnic groups with COPD is a challenge. The objective of this study was to develop and assess the effectiveness of culturally and linguistically specific audiovisual educational materials in supporting self-management practices in Mandarin- and Cantonese-speaking patients. Educational materials were developed using a participatory approach (patients were involved in the development and pilot testing of the materials), followed by a randomized controlled trial that assigned 91 patients to three intervention groups receiving audiovisual educational interventions and one control group (pamphlet). The patients were recruited from outpatient clinics. The primary outcomes were improved inhaler technique and perceived self-efficacy to manage COPD. The secondary outcome was improved patient understanding of pulmonary rehabilitation procedures. Subjects in all three intervention groups, compared with control subjects, demonstrated postintervention improvements in inhaler technique, in perceived self-efficacy to manage a COPD exacerbation, and in self-management practices. Self-management education led to improved proper use of medications, ability to manage COPD exacerbations, and ability to achieve goals in managing COPD. A relatively simple, culturally appropriate disease-management education intervention improved inhaler techniques and self-management practices. Further research is needed to assess the effectiveness of self-management education on behavioral change and patient empowerment strategies.

  13. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos Women and the audiovisual (industry today: regression, experiment and new narrative models

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

    This article analyses audiovisual art practices in the contemporary context. It first describes the regression of audiovisual practices by women artists: women are present neither as producers, nor as directors, nor as executives in the audiovisual industry, so traditional gender stereotypes are inevitably reconstructed and reinforced. The article then turns to feminist audiovisual art practice in the 1970s and 1980s, when taking up the camera became absolutely necessary, not only to give voice to many women but also to reinscribe absent discourses and articulate a critical discourse on cultural representation. It also analyses how, from the 1990s onward, these practices have explored new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an "expanded field" of exhibition. Finally, the article points to the relationship between feminist audiovisual practices and the complex terrain of globalization and the information society: the narration of local experience has found in the audiovisual medium a privileged means of addressing the problems of difference, identity, race and ethnicity.

  14. Processing of Audiovisually Congruent and Incongruent Speech in School-Age Children with a History of Specific Language Impairment: A Behavioral and Event-Related Potentials Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-01-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…

  15. Plantilla 2: Particularidades del documento audiovisual. Orígenes de los servicios de documentación televisivos. El reto de la digitalización

    OpenAIRE

    Alemany, Dolores

    2011-01-01

Particularities of the physical carrier and of the audiovisual message. Origins of audiovisual documentation. Origins of television documentation services. The challenge of digitizing television archives.

  16. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  17. Multimedia content production inside the classroom. A teaching proposal for journalism and audiovisual communication students

    Directory of Open Access Journals (Sweden)

    Eva Herrero Curiel

    2014-03-01

Full Text Available The main objective of this article is to present and describe two multimedia experiences carried out with two practice groups in the Journalism and Audiovisual Communications program. Thirty students participated in Experience A during 14 teaching sessions; the experience required each student to record a 3-minute interview with someone newsworthy within academia and then create a short documentary piece of up to 5 minutes. Experience B focused on content curation using Storify, and the ultimate goal of the practice exercise was to produce a story from different multimedia contents found within the platform. A SWOT analysis after integrating both experiences revealed that, although students were willing and motivated to use new technologies and produce audiovisual content, they also showed low motivation to work in groups, scant prior knowledge of the medium, and a lack of adaptation to complex situations. As such, the researchers conclude that this type of experience can be valuable, as the convergence of content and skills in audiovisual and journalistic settings responds to the courses’ demands and facilitates their adaptation to the EHEA requirements. Producción de contenido multimedia en el aula. Una propuesta docente para alumnos de periodismo y comunicación audiovisual. Resumen: El principal objetivo de este estudio de caso es presentar y describir dos experiencias multimedia llevadas a cabo en dos grupos prácticos de los grados de periodismo y comunicación audiovisual. En la Experiencia A participaron 30 estudiantes durante 14 sesiones docentes, y consistió en la grabación de una entrevista de 3 minutos a un personaje noticioso en el ámbito académico; después, debían construir en grupo una pequeña pieza documental de, como máximo, 5 minutos de duración. La Experiencia B se centró en curación de contenidos utilizando Storify, en la que los alumnos construyeron una noticia a partir de diferentes contenidos multimedia…

  18. Elementos diferenciales en la forma audiovisual de los videojuegos. Vinculación, presencia e inmersión. Differential elements in the audiovisual form of the video games. Bonding, presence and immersion.

    Directory of Open Access Journals (Sweden)

    María Gabino Campos

    2012-01-01

Full Text Available In just over two decades, video games have reached the top positions in the audiovisual sector. Various technical, economic and social factors have made video games the main entertainment reference for a growing audience of millions. This phenomenon is also due to their creators developing stories with elements of interaction in order to achieve a high investment of time by users. We investigate the concepts of bonding, presence and immersion for their implications in the sensory universe of video games, and we survey the state of audiovisual research in this field in the first decade of the century.

  19. Mirando la realidad observando las pantallas. Activación diferencial en la percepción visual del movimiento real y aparente audiovisual con diferente montaje cinematográfico. Un estudio con profesionales y no profesionales del audiovisual

    OpenAIRE

    Martín-Pascual, Miguel Ángel

    2016-01-01

The first objective of this research is to determine whether there is any difference in the brain's perceptual processes when looking at reality versus viewing images on screens. The second is to establish whether, on screens, there are differences between observing events without interruption and observing different types of audiovisual editing. The third specific objective is to investigate whether experts and professionals of the audiovisual image perceive these sources differently…

  20. Sharing killed the AVMSD star: the impossibility of European audiovisual media regulation in the era of the sharing economy

    Directory of Open Access Journals (Sweden)

    Indrek Ibrus

    2016-06-01

Full Text Available The paper focuses on the challenges that the ‘sharing economy’ presents to the updating of the European Union’s (EU) Audiovisual Media Services Directive (AVMSD), part of the broader Digital Single Market (DSM) strategy of the EU. It suggests that the convergence of media markets and the emergence of video-sharing platforms may make the existing regulatory tradition obsolete. It demonstrates an emergent need for regulatory convergence – for the AVMSD to create equal terms for all technical forms of content distribution. It then shows how the operational logic of video-sharing platforms undermines the AVMSD's aim of creating demand for professionally produced European content, potentially leading to the liberalisation of the EU audiovisual services market. Lastly, it argues that the DSM strategy, combined with sharing-related network effects, may facilitate the evolution of an oligopolistic structure in the EU audiovisual market that is potentially harmful to cultural diversity.

  1. The Impact of Politics 2.0 in the Spanish Social Media: Tracking the Conversations around the Audiovisual Political Wars

    Science.gov (United States)

    Noguera, José M.; Correyero, Beatriz

After the consolidation of weblogs as interactive narratives and producers, audiovisual formats are gaining ground on the Web. Videos are spreading all over the Internet and establishing themselves as a new medium for political propaganda inside social media, with tools as powerful as YouTube. This investigation proceeds in two stages: on the one hand, we examine how these audiovisual formats enjoyed an enormous amount of attention in blogs during the Spanish pre-electoral campaign for the elections of March 2008. On the other hand, the article investigates the social impact of this phenomenon using data from a content analysis of the blog discussion related to these videos, centered on the most popular Spanish political blogs. We also study when audiovisual political messages (made by politicians or by users) are "born" and "die" on the Web, and by what kinds of rules they do so.

  2. NO-DO y las celadas del documento audiovisual

    Directory of Open Access Journals (Sweden)

    Vicente Sánchez-Biosca

    2010-01-01

Full Text Available On 4 January 1943, the leading cinemas of Spain opened their program with the screening of an unexpected piece of material: a newsreel of some ten minutes' duration reviewing what the Franco regime, at that time caught in the grip of the world war, considered «national news». The newsreel was, moreover, preceded – the occasion merited it – by a prologue of equivalent length that acted as a declaration of intent. Far from presenting itself as circumstantial, it promised…

  3. Big Data between audiovisual displays, artifacts, and aesthetic experience

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2016-01-01

This article discusses artistic practices and artifacts that are occupied with exploring data through visualization and sonification strategies as well as with translating data into materially solid formats and embodied processes. By means of these examples the overall aim of the article is to critically question how and whether such artistic practices can eventually lead to the experience and production of knowledge that could not otherwise be obtained via more traditional ways of data representation. The article thus addresses both the problems and possibilities entailed in extending the use of large data sets – or Big Data – into the sphere of art and the aesthetic. Central to the discussion here is the analysis of how different structuring principles of data and the discourses that surround these principles shape our perception of data. This discussion involves considerations on various…

  4. Presentation of political Alliances in the Romanian audiovisual media

    Directory of Open Access Journals (Sweden)

    Flaviu Calin RUS

    2011-01-01

Full Text Available This article highlights the way in which the main political alliances have been formed in Romania over the last 20 years, as well as the way they have been reflected in the media. Moreover, we have tried to analyze the involvement of journalists and political analysts in explaining these political events. The study focuses on four political alliances, namely: CDR (the Romanian Democratic Convention), D.A. (Y.E.S. – Justice and Truth, between PNL – the National Liberal Party – and PD – the Democratic Party), ACD (the Centre-Right Alliance, between PNL and PC – the Conservative Party) and USL (the Social-Liberal Union, between PSD – the Social Democrat Party, PNL and PC).

  5. A produção audiovisual na virtualização do ensino superior: subsídios para a formação docente/Audiovisual production in the virtualization of higher education: a contribution for teacher education

    Directory of Open Access Journals (Sweden)

    Dulce Márcia da Cruz

    2007-01-01

Full Text Available O Brasil vive nos últimos dez anos uma crescente expansão da educação a distância (EAD) e da virtualização da sala de aula no ensino superior. Se antes de 1995 a produção da EAD era uma tarefa dos profissionais de rádio e TV, com as mídias digitais esse processo também passa pelas mãos de docentes que podem produzir, transmitir e gerenciar cursos e disciplinas na internet, tornando-se autores da produção audiovisual e hipertextual de suas aulas. Visando contribuir para que os docentes tenham noções básicas sobre como produzir para a EAD e para disciplinas semi-presenciais usando meios audiovisuais e hipertextuais, este artigo descreve os elementos básicos que compõem a linguagem cinematográfica e as narrativas digitais que incorporam a interatividade. Finalmente, apresenta alguns fundamentos da produção para as mídias mais comuns na EAD brasileira: material impresso, teleconferência, videoconferência, multimídia/hipermídia e ambientes virtuais de aprendizagem. The past ten years have seen a significant expansion of distance and hybrid education in Higher Education (HE) in Brazil. Before 1995 the production of distance education (DE) was a task for radio and TV professionals; with the adoption of digital media this process has also passed into the hands of teachers, who can now produce, transmit and manage courses and disciplines on the Internet, becoming authors of the audiovisual and hypertextual production of their lessons. The objective of this article is to offer teachers basic notions about how to create DE and hybrid courses incorporating audiovisual and hypertextual media, describing the main elements that compose cinematographic language and the digital narratives that incorporate interactivity. Finally, it presents some principles of production for the media most commonly used for DE in Brazil: printed material, teleconference, videoconference, hypermedia/multimedia and Virtual Learning Environments.

  6. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users have been observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mildly to moderately hearing-impaired individuals (n = 18) and normal-hearing controls (n = 17). Cross-modal activation of the auditory cortex, by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal-hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing-impaired individuals showed behavioral and neurophysiological results that were numerically between those of the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the…

  7. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
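The classification procedure sketched in this abstract amounts to correlating, across trials, the random per-frame visibility of the mouth region with the listener's response. A minimal illustration of that logic follows; the function name, data layout, and toy data are our own invention for illustration, not the authors' code or stimuli:

```python
import numpy as np

def classification_image(masks, responses):
    """Illustrative classification-image sketch (hypothetical, not the study's code).

    masks:     trials x frames array of mouth visibility (1 = visible in that frame).
    responses: per-trial flag, 1 when the listener reported /apa/ (McGurk fusion failed).

    Frames whose visibility differs most between the two response classes are
    the perceptually relevant ones: the returned vector is the mean visibility
    on /apa/ trials minus the mean visibility on fused-percept trials.
    """
    masks = np.asarray(masks, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    return masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)

# Toy data: frame 0 tends to be hidden when fusion succeeds, frame 1 the reverse.
masks = [[1, 0, 1],
         [0, 1, 0],
         [1, 0, 0],
         [0, 1, 1]]
responses = [1, 0, 1, 0]  # 1 = heard /apa/
print(classification_image(masks, responses))
```

Averaging over many trials with randomized masks, this difference-of-means traces out the spatiotemporal map of visual features that drive the auditory percept.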

  8. Development of Sensitivity to Audiovisual Temporal Asynchrony during Mid-Childhood

    Science.gov (United States)

    Kaganovich, Natalya

    2015-01-01

Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7-8-year-olds, 10-11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether non-verbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2 kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), while in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of RT at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10-11-year-olds outperforming 7-8-year-olds at the 300-500 ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function, such as autism, specific language impairment, and dyslexia, may be compared. PMID:26569563
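Analytically, the simultaneity judgment task described above reduces to tabulating the proportion of "synchronous" responses in each cell of the design (modality order × SOA). A minimal sketch of that tabulation, with hypothetical field names and toy trial data rather than the study's data:

```python
from collections import defaultdict

def proportion_synchronous(trials):
    """Group simultaneity-judgment trials by (order, soa_ms) and return the
    proportion of 'synchronous' responses per cell. Each trial is a tuple
    (order, soa_ms, judged_synchronous); the layout is illustrative only."""
    counts = defaultdict(lambda: [0, 0])  # cell -> [n_synchronous, n_total]
    for order, soa, judged in trials:
        cell = counts[(order, soa)]
        cell[1] += 1
        if judged:
            cell[0] += 1
    return {cell: syn / total for cell, (syn, total) in counts.items()}

trials = [
    ("AV", 0, True), ("AV", 0, True),        # physically simultaneous
    ("AV", 300, True), ("AV", 300, False),   # audio leads by 300 ms
    ("VA", 300, False), ("VA", 300, False),  # visual leads by 300 ms
]
print(proportion_synchronous(trials))
# {('AV', 0): 1.0, ('AV', 300): 0.5, ('VA', 300): 0.0}
```

Plotting these proportions against SOA separately for the AV and VA orders yields the asymmetric simultaneity windows whose developmental trajectories the study compares.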

  9. Audiovisual perception in adults with amblyopia: a study using the McGurk effect.

    Science.gov (United States)

    Narinesingh, Cindy; Wan, Michael; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2014-04-24

The effects of amblyopia on multisensory integration have rarely been examined. The McGurk effect is a well-established audiovisual illusion that is manifested when an auditory phoneme is presented concurrently with an incongruent visual phoneme. Visually healthy viewers will hear a phoneme that does not match the actual auditory stimulus, having been perceptually influenced by the visual phoneme. This study examines audiovisual integration in adults with amblyopia. Twenty-two subjects with amblyopia and 25 visually healthy controls participated. Participants viewed videos of combinations of visual and auditory phonemes and were asked to report what they heard. Some videos had congruent video and audio (control), whereas others had incongruent video and audio (McGurk). The McGurk effect is strongest when the visual phoneme dominates over the audio phoneme, resulting in low auditory accuracy on the task. Adults with amblyopia demonstrated a weaker McGurk effect than visually healthy controls (P = 0.01). The difference was greatest when viewing monocularly with the amblyopic eye, and it was also evident when viewing binocularly or monocularly with the fellow eye. No correlations were found between the strength of the McGurk effect and either visual acuity or stereoacuity in subjects with amblyopia. Subjects with amblyopia and controls showed a similar response pattern across different speakers and syllables, with subjects with amblyopia consistently demonstrating a weaker effect than controls. Abnormal visual experience early in life can thus have negative consequences for audiovisual integration that persist into adulthood in people with amblyopia. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  10. Enhancing clinical communication assessments using an audiovisual BCI for patients with disorders of consciousness

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Qu, Jun; Xie, Qiuyou; Lin, Qing; Ni, Xiaoxiao; Chen, Yan; Pan, Jiahui; Laureys, Steven; Yu, Ronghao; Li, Yuanqing

    2017-08-01

Objective. The JFK coma recovery scale-revised (JFK CRS-R), a behavioral observation scale, is widely used in the clinical diagnosis/assessment of patients with disorders of consciousness (DOC). However, the JFK CRS-R is associated with a high rate of misdiagnosis (approximately 40%) because DOC patients cannot provide sufficient behavioral responses. A brain-computer interface (BCI) that detects command/intention-specific changes in electroencephalography (EEG) signals without the need for behavioral expression may provide an alternative method. Approach. In this paper, we proposed an audiovisual BCI communication system based on audiovisual ‘yes’ and ‘no’ stimuli to supplement the JFK CRS-R for assessing the communication ability of DOC patients. Specifically, patients were given situation-orientation questions as in the JFK CRS-R and instructed to select the answers using the BCI. Main results. Thirteen patients (eight in a vegetative state (VS) and five in a minimally conscious state (MCS)) participated in our experiments, which involved both the BCI- and JFK CRS-R-based assessments. One MCS patient who received a score of 1 in the JFK CRS-R achieved an accuracy of 86.5% in the BCI-based assessment. Seven patients (four VS and three MCS) obtained unresponsive results in the JFK CRS-R-based assessment but responsive results in the BCI-based assessment, and four of these later showed improved scores in the JFK CRS-R-based assessment. Five patients (four VS and one MCS) obtained unresponsive results in both assessments. Significance. The experimental results indicated that the audiovisual BCI could provide more sensitive results than the JFK CRS-R and could therefore supplement the JFK CRS-R.

  11. SU-E-J-29: Audiovisual Biofeedback Improves Tumor Motion Consistency for Lung Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D; Pollock, S; Makhija, K; Keall, P [The University of Sydney, Camperdown, NSW (Australia); Greer, P [The University of Newcastle, Newcastle, NSW (Australia); Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Arm, J; Hunter, P [Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Kim, T [The University of Sydney, Camperdown, NSW (Australia); University of Virginia Health System, Charlottesville, VA (United States)

    2014-06-01

Purpose: To investigate whether the breathing-guidance system audiovisual (AV) biofeedback improves tumor motion consistency for lung cancer patients. This would minimize respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated in five lung cancer patients (ages 55 to 64), who underwent a training session to familiarize them with AV biofeedback, followed by two MRI sessions on different dates (pre- and mid-treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from image pixel values after normalizing the 2D images per dataset and applying a Gaussian filter per image. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion, towards achieving more accurate medical imaging and radiation therapy procedures.
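The motion-extraction step in the Methods (normalize the 2D images per dataset, Gaussian-filter each image, then track the tumor position in the superior-inferior direction) can be sketched as below. This is an illustrative reconstruction on synthetic data, not the authors' implementation; the centroid-of-darkness tracker and all names are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_si_motion(frames, sigma=2.0):
    """Illustrative sketch: normalize each 2D frame, smooth it with a Gaussian
    filter, then track the superior-inferior (row) centroid of the darkest
    region (a stand-in for the tumor) and return its motion range in pixels."""
    positions = []
    for frame in frames:
        f = (frame - frame.min()) / (np.ptp(frame) + 1e-9)  # normalize to [0, 1]
        f = gaussian_filter(f, sigma=sigma)                 # per-image smoothing
        w = 1.0 - f                                         # darkness as weight
        rows = np.arange(f.shape[0])
        positions.append((w.sum(axis=1) * rows).sum() / w.sum())  # SI centroid
    positions = np.asarray(positions)
    return positions.max() - positions.min()  # SI motion range

# Synthetic example: a dark blob oscillating vertically over 10 frames.
frames = []
for t in range(10):
    img = np.ones((64, 64))
    r = 30 + int(5 * np.sin(2 * np.pi * t / 10))
    img[r - 3:r + 3, 28:36] = 0.0
    frames.append(img)
print(extract_si_motion(frames))  # SI range ~ 8 pixels for this synthetic blob
```

Repeating this per scan (FB vs. AV biofeedback) and comparing the spread of the per-scan ranges and periods gives the consistency comparison reported in the Results.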

  12. Timing of audiovisual inputs to the prefrontal cortex and multisensory integration.

    Science.gov (United States)

    Romanski, L M; Hwang, J

    2012-07-12

    A number of studies have demonstrated that the relative timing of audiovisual stimuli is especially important for multisensory integration of speech signals although the neuronal mechanisms underlying this complex behavior are unknown. Temporal coincidence and congruency are thought to underlie the successful merging of two intermodal stimuli into a coherent perceptual representation. It has been previously shown that single neurons in the non-human primate prefrontal cortex integrate face and vocalization information. However, these multisensory responses and the degree to which they depend on temporal coincidence have yet to be determined. In this study we analyzed the response latency of ventrolateral prefrontal (VLPFC) neurons to face, vocalization and combined face-vocalization stimuli and an offset (asynchronous) version of the face-vocalization stimulus. Our results indicate that for most prefrontal multisensory neurons, the response latency for the vocalization was the shortest, followed by the combined face-vocalization stimuli. The face stimulus had the longest onset response latency. When tested with a dynamic face-vocalization stimulus that had been temporally offset (asynchronous) one-third of multisensory cells in VLPFC demonstrated a change in response compared to the response to the natural, synchronous face-vocalization movie. Our results indicate that prefrontal neurons are sensitive to the temporal properties of audiovisual stimuli. A disruption in the temporal synchrony of an audiovisual signal which results in a change in the firing of communication related prefrontal neurons could underlie the loss in intelligibility which occurs with asynchronous speech stimuli. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

Full Text Available BACKGROUND: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. METHODOLOGY/PRINCIPAL FINDINGS: Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing on other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. CONCLUSIONS/SIGNIFICANCE: The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns…

  14. Individualized Study Guide on Apiculture: Instructor's Guide. Curriculum Materials for Agricultural Education.

    Science.gov (United States)

    Housmam, John L.; And Others

    The instructor's guide is coordinated for use with the student guide. The guide includes suggestions for teacher preparation, equipment and supply needs, suggested references, available audiovisual materials, open-ended questions for classroom discussion, educational opportunities for students, and a form for student evaluation of the study guide.…

  15. Language Practice with Multimedia Supported Web-Based Grammar Revision Material

    Science.gov (United States)

    Baturay, Meltem Huri; Daloglu, Aysegul; Yildirim, Soner

    2010-01-01

    The aim of this study was to investigate the perceptions of elementary-level English language learners towards web-based, multimedia-annotated grammar learning. WEBGRAM, a system designed to provide supplementary web-based grammar revision material, uses audio-visual aids to enrich the contextual presentation of grammar and allows learners to…

  16. Target Group Characteristics: Are Perceptional Modality Preferences Relevant for Instructional Material Design?

    Science.gov (United States)

    Jaspers, Fons

    1992-01-01

    Discussion of instructional materials design highlights perceptional modality preferences. Research on perception is reviewed; preferences for audio versus video, verbal versus pictorial, and listening versus reading are described; learning styles are considered; and theoretical and practical implications for audiovisual designers are suggested.…

  17. Bibliography of Russian Teaching Materials. Reports and Occasional Papers No. 16. Preliminary Edition.

    Science.gov (United States)

    Pockney, B. P., Comp.; Sollohub, N. S., Comp.

    This annotated bibliography provides teachers of Russian with a comprehensive guide to instructional materials. Entries are classified under: (1) audio-visual courses, (2) audio-lingual courses, (3) course books, (4) visual teaching aids, (5) audio teaching aids, (6) language laboratory drills, (7) reference grammars, (8) translations, essays,…

  18. Tiempo de crisis. El patrimonio audiovisual valenciano frente al cambio tecnológico

    Directory of Open Access Journals (Sweden)

    Lahoz Rodrigo, Juan Ignacio

    2014-07-01

Full Text Available After three decades of self-government, the Generalitat Valenciana has created, fostered, collected and restored an audiovisual heritage of incalculable cultural interest, whose two main conservation centres are the Filmoteca of CulturArts-IVAC and the RTVV archive. This heritage now stands at a critical point, facing a technological transformation at a moment of great economic and political difficulty. The closure of RTVV and the uncertainty over the future of its archive pit its heritage status against the temptation to privatize its management, and recall the recommendations of the EU and UNESCO that the safeguarding of moving images be entrusted to public, non-profit archives. If the fragility of film, video and digital image-file carriers is the key issue for their long-term preservation, even more decisive today is the dominance of digital technology in every area of the generation, access and preservation of audiovisual production, since it carries a pattern of obsolescence that could leave Valencian audiovisual heritage inaccessible if the Generalitat does not confront it immediately and decisively. Equipping the Filmoteca of CulturArts-IVAC with the technology needed to digitize its holdings, continuing the digitization plans for the RTVV archive and encouraging those of all the audiovisual archives of the Comunitat Valenciana, reinforcing – in line with EU recommendations – the preservationist emphasis of instruments such as public production subsidies and legal deposit, and stimulating the development of the Catalogue of Valencian Audiovisual Heritage are measures that should contribute to the long-term preservation of this heritage.

  19. Desafíos y oportunidades para la diversidad del audiovisual en internet

    OpenAIRE

    García Leiva, María Trinidad

    2017-01-01

On the threshold of the first quarter of the twenty-first century, no one disputes that the value chain of the audiovisual industry has been transformed. The digital era offers possibilities for cultural enrichment while also posing new challenges. After offering a general portrait of the audiovisual industries in the digital era, in terms of agents and logics in tension, and starting from the Spanish case, an analysis is presented of the advantages and disadvantages that exist for diversity…

  20. Audio-Visual Feedback for Self-monitoring Posture in Ballet Training

    DEFF Research Database (Denmark)

    Knudsen, Esben Winther; Hølledig, Malte Lindholm; Bach-Nielsen, Sebastian Siem

    2017-01-01

    An application for ballet training is presented that monitors the posture position (straightness of the spine and rotation of the pelvis) deviation from the ideal position in real-time. The human skeletal data is acquired through a Microsoft Kinect v2. The movement of the student is mirrored......-coded. In an experiment with 9-12 year-old dance students from a ballet school, comparing the audio-visual feedback modality with no feedback leads to an increase in posture accuracy (p card feedback and expert interviews indicate that the feedback is considered fun and useful...... for training independently from the teacher....