WorldWideScience

Sample records for audiovisual aids

  1. Audio-Visual Aids: Historians in Blunderland.

    Science.gov (United States)

    Decarie, Graeme

    1988-01-01

    A history professor relates his experiences producing and using audio-visual material and warns teachers not to rely on audio-visual aids for classroom presentations. Includes examples of popular audio-visual aids on Canada that communicate unintended, inaccurate, or unclear ideas. Urges teachers to exercise caution in the selection and use of…

  2. Audio-Visual Aids in Universities

    Science.gov (United States)

    Douglas, Jackie

    1970-01-01

    A report on the proceedings and ideas expressed at a one day seminar on "Audio-Visual Equipment--Its Uses and Applications for Teaching and Research in Universities." The seminar was organized by England's National Committee for Audio-Visual Aids in Education in conjunction with the British Universities Film Council. (LS)

  3. [Audio-visual aids and tropical medicine].

    Science.gov (United States)

    Morand, J J

    1989-01-01

    The author presents a list of audio-visual productions on tropical medicine, along with their main characteristics. He notes that audio-visual educational productions are often dissociated from their promotion; he therefore invites future creators to forward their work to the Audio-Visual Health Committee.

  4. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  5. The Use of Audio-Visual Aids in Teaching: A Study in the Saudi Girls Colleges.

    Science.gov (United States)

    Al-Sharhan, Jamal A.

    1993-01-01

    A survey of faculty in girls colleges in Riyadh, Saudi Arabia, investigated teaching experience, academic rank, importance of audiovisual aids, teacher training, availability of audiovisual centers, and reasons for not using audiovisual aids. Proposes changes to increase use of audiovisual aids: more training courses, more teacher release time,…

  6. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  7. Your Most Essential Audiovisual Aid--Yourself!

    Science.gov (United States)

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  8. Utilization of audio-visual aids by family welfare workers.

    Science.gov (United States)

    Naik, V R; Jain, P K; Sharma, B B

    1977-01-01

    Communication efforts have been an important component of the Indian Family Planning Welfare Program since its inception. However, its chief interests in its early years were clinical, until the adoption of the extension approach in 1963. Educational materials were developed, especially in the period 1965-8, to fit mass, group meeting and home visit approaches. Audiovisual aids were developed for use by extension workers, who had previously relied entirely on verbal approaches. This paper examines their use. A questionnaire was designed for workers in motivational programs at 3 levels: Village Level (Family Planning Health Assistant, Auxiliary Nurse-Midwife, Dais), Block Level (Public Health Nurse, Lady Health Visitor, Block Extension Educator), and District (District Extension Educator, District Mass Education and Information Officer). 3 Districts were selected from each State on the basis of overall family planning performance during 1970-2 (good, average, or poor). Units of other agencies were also included on the same basis. Findings: 1) Workers in all 3 categories preferred individual contacts over group meetings or mass approach. 2) 56-64% said they used audiovisual aids "sometimes" (when available). 25% said they used them "many times" and only 15.9% said "rarely." 3) More than 1/2 of workers in each category said they were not properly oriented toward the use of audiovisual aids. Nonavailability of the aids in the market was also cited. About 1/3 of village level and 1/2 of other workers said that the materials were heavy and liable to be damaged. Complexity, inaccuracy and confusion in use were not widely cited (less than 30%).

  9. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    Science.gov (United States)

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  10. The Efficacy of an Audiovisual Aid in Teaching the Neo-Classical Screenplay Paradigm

    Science.gov (United States)

    Uys, P. G.

    2009-01-01

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, to justify the design through…

  11. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  12. An Audio-Visual Resource Notebook for Adult Consumer Education. An Annotated Bibliography of Selected Audio-Visual Aids for Adult Consumer Education, with Special Emphasis on Materials for Elderly, Low-Income and Handicapped Consumers.

    Science.gov (United States)

    Virginia State Dept. of Agriculture and Consumer Services, Richmond, VA.

    This document is an annotated bibliography of audio-visual aids in the field of consumer education, intended especially for use among low-income, elderly, and handicapped consumers. It was developed to aid consumer education program planners in finding audio-visual resources to enhance their presentations. Materials listed include 293 resources…

  13. Audio-Visual Aid in Teaching "Fatty Liver"

    Science.gov (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-01-01

    Use of audio-visual tools to aid in medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  14. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Directory of Open Access Journals (Sweden)

    Shahram Moradi

    2016-06-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.

  15. Using Play Activities and Audio-Visual Aids to Develop Speaking Skills

    Directory of Open Access Journals (Sweden)

    Casallas Mutis Nidia

    2000-08-01

    A project was conducted in order to improve oral proficiency in English through the use of play activities and audio-visual aids, with first-grade students in a bilingual school in La Calera. They were between 6 and 7 years old. The five students who had the lowest oral language proficiency were selected as the sample for this study. According to the results, it is clear that the sample improved their English oral proficiency a great deal. However, the process has to be continued, because this skill needs constant practice in order to develop.

  16. Development and utilization of low-cost audio-visual aids in population communication.

    Science.gov (United States)

    1980-07-01

    One of the reasons why population information has to a certain degree failed to create demand for family planning services is that the majority of information and communication materials being used have been developed in an urban setting, resulting in their inappropriateness to the target rural audiences. Furthermore, their having been evolved in urban centers has hampered their subsequent replication, distribution, and use in rural areas due to lack of funds, production and distribution resources. For this reason, many developing countries in Asia have begun to demand population materials which are low-cost and simple, more appropriate to rural audiences and within local production resources and capabilities. In the light of this identified need, the Population Communication Unit, with the assistance of the Population Education Mobile Team and Clearing House, Unesco, has collaborated with the Population Center Foundation of the Philippines to undertake a Regional Training Workshop on the Design, Development, and Utilization of Low-Cost Audiovisual Aids in the Philippines from 21-26 July 1980. The Workshop, which will be attended by communications personnel and materials developers from Bangladesh, Indonesia, Nepal, the Philippines, Sri Lanka and Thailand, will focus on developing the capabilities of midlevel population program personnel in conceptualizing, designing, developing, testing and utilizing simple and low-cost audiovisual materials. It is hoped that with the skills acquired from the Workshop, participants will be able to increase their capability in training their own personnel in the development of low-cost materials.

  17. Neuromodulatory Effects of Auditory Training and Hearing Aid Use on Audiovisual Speech Perception in Elderly Individuals

    Science.gov (United States)

    Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey

    2017-01-01

    Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuipsTM (RMQ) targeting speechreading during the second half of the study period for 4 weeks. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI) including auditory cortex and visual cortex for uni-sensory processing, and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving ROIs of auditory cortex, STS and functional connectivity between uni-sensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for the malleable experience-driven cortical functionality for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger subject sample and systematic control to fill in the knowledge gap to understand brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763

  18. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.

  19. Development of a novel remote-controlled and self-contained audiovisual-aided interactive system for immobilizing claustrophobic patients.

    Science.gov (United States)

    Ju, Harang; Kim, Siyong; Read, Paul; Trifiletti, Daniel; Harrell, Andrew; Libby, Bruce; Kim, Taeho

    2015-05-08

    In radiotherapy, only a few immobilization systems, such as open-face mask and head mold with a bite plate, are available for claustrophobic patients with a certain degree of discomfort. The purpose of this study was to develop a remote-controlled and self-contained audiovisual (AV)-aided interactive system with the iPad mini with Retina display for intrafractional motion management in brain/H&N (head and neck) radiotherapy for claustrophobic patients. The self-contained, AV-aided interactive system utilized two tablet computers: one for AV-aided interactive guidance for the subject and the other for remote control by an operator. The tablet for audiovisual guidance traced the motion of a colored marker using the built-in front-facing camera, and the remote control tablet at the control room used infrastructure Wi-Fi networks for real-time communication with the other tablet. In the evaluation, a programmed QUASAR motion phantom was used to test the temporal and positional accuracy and resolution. Position data were also obtained from ten healthy volunteers with and without guidance to evaluate the reduction of intrafractional head motion in simulations of a claustrophobic brain or H&N case. In the phantom study, the temporal and positional resolution was 24 Hz and 0.2 mm. In the volunteer study, the average superior-inferior and right-left displacement was reduced from 1.9 mm to 0.3 mm and from 2.2 mm to 0.2 mm with AV-aided interactive guidance, respectively. The superior-inferior and right-left positional drift was reduced from 0.5 mm/min to 0.1 mm/min and from 0.4 mm/min to 0.04 mm/min with audiovisual-aided interactive guidance. This study demonstrated a reduction in intrafractional head motion using a remote-controlled and self-contained AV-aided interactive system of iPad minis with Retina display, easily obtainable and cost-effective tablet computers. This approach can potentially streamline clinical flow for claustrophobic patients without a head mask and…

  20. Students' preference of various audiovisual aids used in teaching pre- and para-clinical areas of medicine

    Directory of Open Access Journals (Sweden)

    Navatha Vangala

    2015-01-01

    Introduction: The formal lecture is among the oldest teaching methods that have been widely used in medical education. Delivering a lecture is made easier and better by the use of audiovisual aids (AV aids) such as the blackboard or whiteboard, the overhead projector, and PowerPoint presentations (PPT). Objective: To know the students' preference of various AV aids and their use in medical education, with an aim to improve their use in didactic lectures. Materials and Methods: The study was carried out among 230 undergraduate medical students of first and second M.B.B.S studying at Malla Reddy Medical College for Women, Hyderabad, Telangana, India during the month of November 2014. Students were asked to answer a questionnaire on the use of AV aids for various aspects of learning. Results: This study indicates that students preferred PPT the most for a didactic lecture, for better perception of diagrams and flowcharts. Ninety-five percent of the students (first and second M.B.B.S) were stimulated for further reading if they attended a lecture augmented by the use of visual aids. A teacher with good teaching skills and AV aids (58%) was preferred more than a teacher with only good teaching skills (42%). Conclusion: Our study demonstrates that lectures delivered using PPT were more appreciated and preferred by the students. Furthermore, teachers with a proper lesson plan and good interactive and communication skills are needed for an effective presentation of a lecture.

  1. Challenges of Using Audio-Visual Aids as Warm-Up Activity in Teaching Aviation English

    Science.gov (United States)

    Sahin, Mehmet; Sule, St.; Seçer, Y. E.

    2016-01-01

    This study aims to find out the challenges encountered in the use of video as audio-visual material as a warm-up activity in an aviation English course at the high school level. It is based on a qualitative design in which a focus group interview was used as the data collection procedure. The participants of the focus group are four instructors teaching…

  2. Twenty-Fifth Annual Audio-Visual Aids Conference, Wednesday 9th to Friday 11th July 1975, Whitelands College, Putney SW15. Conference Preprints.

    Science.gov (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    Preprints of papers to be presented at the 25th annual Audio-Visual Aids Conference are collected along with the conference program. Papers include official messages, a review of the conference's history, and presentations on photography in education, using school broadcasts, flexibility in the use of television, the "communications generation,"…

  3. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    Science.gov (United States)

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, neither promptly after the training nor at the one-month follow-up. However, neither a significant between-groups difference nor an interaction between group and session was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  4. THE EFFECT OF USING AUDIO-VISUAL AIDS VERSUS PICTURES ON FOREIGN LANGUAGE VOCABULARY LEARNING OF INDIVIDUALS WITH MILD INTELLECTUAL DISABILITY

    Directory of Open Access Journals (Sweden)

    Zahra Sadat NOORI

    2016-04-01

    This study aimed to examine the effect of using audio-visual aids and pictures on foreign language vocabulary learning of individuals with mild intellectual disability. Method: To this end, a comparison group quasi-experimental study was conducted along with a pre-test and a post-test. The participants were 16 individuals with mild intellectual disability living in a center for mentally disabled individuals in Dezfoul, Iran. They were all male individuals with the age range of 20 to 30. Their mother tongue was Persian, and they did not have any English background. In order to ensure that all participants were within the same IQ level, a standard IQ test, i.e., the Colored Progressive Matrices test, was administered. Afterwards, the participants were randomly assigned to two experimental groups; one group received the instruction through audio-visual aids, while the other group was taught through pictures. The treatment lasted for four weeks, 20 sessions in aggregate. A total number of 60 English words selected from the English package named 'The Smart Child' were taught. After the treatment, the participants took the post-test, in which the researchers randomly selected 40 words from among the 60 target words. Results: The results of the Mann-Whitney U-test indicated that using audio-visual aids was more effective than pictures in foreign language vocabulary learning of individuals with mild intellectual disability. Conclusions: It can be concluded that the use of audio-visual aids can be more effective than pictures in foreign language vocabulary learning of individuals with mild intellectual disability.

  5. Audio-visual speechreading in a group of hearing aid users. The effects of onset age, handicap age, and degree of hearing loss.

    Science.gov (United States)

    Tillberg, I; Rönnberg, J; Svärd, I; Ahlner, B

    1996-01-01

    Speechreading ability was investigated among hearing aid users with different times of onset and different degrees of hearing loss. Audio-visual and visual-only performance were assessed. One group of subjects had been hearing-impaired for a large part of their lives, and the impairments appeared early in life. The other group of subjects had been impaired for fewer years, and the impairments appeared later in life. Differences between the groups were obtained. There was no significant difference on the audio-visual test between the groups, in spite of the fact that the early-onset group scored very poorly auditorily. However, the early-onset group performed significantly better on the visual test. It was concluded that visual information constituted the dominant coding strategy for the early-onset group. An interpretation chiefly in terms of early onset may be the most appropriate, since dB loss variations as such are not related to speechreading skill.

  6. Evolving with modern technology: Impact of incorporating audiovisual aids in preanesthetic checkup clinics on patient education and anxiety

    Science.gov (United States)

    Kaur, Haramritpal; Singh, Gurpreet; Singh, Amandeep; Sharda, Gagandeep; Aggarwal, Shobha

    2016-01-01

    Background and Aims: Perioperative stress is an often ignored commonly occurring phenomenon. Little or no prior knowledge of anesthesia techniques can increase this significantly. Patients awaiting surgery may experience high level of anxiety. Preoperative visit is an ideal time to educate patients about anesthesia and address these fears. The present study evaluates two different approaches, i.e., standard interview versus informative audiovisual presentation with standard interview on information gain (IG) and its impact on patient anxiety during preoperative visit. Settings and Design: This prospective, double-blind, randomized study was conducted in a Tertiary Care Teaching Hospital in rural India over 2 months. Materials and Methods: This prospective, double-blind, randomized study was carried out among 200 American Society of Anesthesiologist Grade I and II patients in the age group 18–65 years scheduled to undergo elective surgery under general anesthesia. Patients were allocated to either one of the two equal-sized groups, Group A and Group B. Baseline anxiety and information desire component was assessed using Amsterdam Preoperative Anxiety and Information Scale for both the groups. Group A patients received preanesthetic interview with the anesthesiologist and were reassessed. Group B patients were shown a short audiovisual presentation about operation theater and anesthesia procedure followed by preanesthetic interview and were also reassessed. In addition, patient satisfaction score (PSS) and IG was assessed at the end of preanesthetic visit using standard questionnaire. Statistical Analysis Used: Data were expressed as mean and standard deviation. Nonparametric tests such as Kruskal–Wallis, Mann–Whitney, and Wilcoxon signed rank tests, and Student's t-test and Chi-square test were used for statistical analysis. Results: Patient's IG was significantly more in Group B (5.43 ± 0.55) as compared to Group A (4.41 ± 0.922) (P < 0.001). There was…

  7. Improvement of Sports Aerobics Teaching by Electrical Audio-visual Aids

    Institute of Scientific and Technical Information of China (English)

    赵静

    2011-01-01

    This paper discusses how teaching with electrical audio-visual aids improves teaching methods, the teaching process, course content, teaching aims, and teaching results in Sports Aerobics instruction. It provides a scientific basis for the rational use of electrical audio-visual aids in Sports Aerobics teaching.

  8. Audiovisual Interaction

    Science.gov (United States)

    Möttönen, Riikka; Sams, Mikko

    Information about the objects and events in the external world is received via multiple sense organs, especially via eyes and ears. For example, a singing bird can be heard and seen. Typically, audiovisual objects are detected, localized and identified more rapidly and accurately than objects which are perceived via only one sensory system (see, e.g. Welch and Warren, 1986; Stein and Meredith, 1993; de Gelder and Bertelson, 2003; Calvert et al., 2004). The ability of the central nervous system to utilize sensory inputs mediated by different sense organs is called multisensory processing.

  9. Student performance and their perception of a patient-oriented problem-solving approach with audiovisual aids in teaching pathology: a comparison with traditional lectures

    Directory of Open Access Journals (Sweden)

    Arjun Singh

    2010-12-01

    Arjun Singh, Department of Pathology, Sri Venkateshwara Medical College Hospital and Research Centre, Pondicherry, India. Purpose: We use different methods to train our undergraduates. The patient-oriented problem-solving (POPS) system is an innovative teaching–learning method that imparts knowledge, enhances intrinsic motivation, promotes self-learning, encourages clinical reasoning, and develops long-lasting memory. The aim of this study was to develop POPS in teaching pathology, assess its effectiveness, and assess students’ preference for POPS over didactic lectures. Method: One hundred fifty second-year MBBS students were divided into two groups: A and B. Group A was taught by POPS while group B was taught by traditional lectures. Pre- and post-test numerical scores of both groups were evaluated and compared. Students then completed a self-structured feedback questionnaire for analysis. Results: The mean (SD) difference in pre- and post-test scores of groups A and B was 15.98 (3.18) and 7.79 (2.52), respectively. The z-value for the difference between the group A and group B teaching methods was 16.62 (P < 0.0001). Improvement in post-test performance of group A was significantly greater than that of group B, demonstrating the effectiveness of POPS. Students responded that POPS facilitates self-learning, helps in understanding topics, creates interest, and is a scientific approach to teaching. Feedback on POPS was strong in 57.52% of students, moderate in 35.67%, and negative in only 6.81%, showing that 93.19% of students favored POPS over simple lectures. Conclusion: It is not feasible to enforce the PBL method of teaching throughout the entire curriculum; however, POPS can be incorporated along with audiovisual aids to break the monotony of didactic lectures and as an alternative to PBL. Keywords: medical education, problem-solving exercise, problem-based learning

  10. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    Full Text Available This article analyzes the possibilities of presenting audiovisual history in a society in which audiovisual media have progressively gained greater prominence. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language, and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  11. The Schema Features and Aesthetic Functions of Foreign Language Teaching with Electric Audio-visual Aids

    Institute of Scientific and Technical Information of China (English)

    齐欣

    2015-01-01

    While foreign language teaching with electric audio-visual aids challenges the traditional language teaching model, it also faces many challenges of its own, and more studies of its theoretical basis and functions are needed. On the basis of Schema Theory and aesthetic education, this paper makes an innovative examination of the schema features of foreign language teaching with electric audio-visual aids and its implicit, emotional, and personalized aesthetic functions; it further enriches the theoretical basis of such teaching and emphasizes the necessity of realizing its aesthetic functions.

  12. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how it can be used in specific (social, pedagogical, etc.) contexts and what its potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical standpoint.

  13. Using a three-dimension head mounted displayer in audio-visual sexual stimulation aids in differential diagnosis of psychogenic from organic erectile dysfunction.

    Science.gov (United States)

    Moon, K-H; Song, P-H; Park, T-C

    2005-01-01

    We designed this study to compare the efficacy of a three-dimension head mounted displayer (3-D HMD) and a conventional monitor for audio-visual sexual stimulation (AVSS) in the differential diagnosis of psychogenic versus organic erectile dysfunction (ED). Three groups of subjects (psychogenic ED, organic ED, and healthy controls) were evaluated. The change in penile tumescence during AVSS was monitored with Nocturnal Electrobioimpedance Volumetric Assessment, and sexual arousal after AVSS was assessed with a simple question rated as good, fair, or poor. Both the healthy control and psychogenic ED groups demonstrated a significantly higher rate of normal penile tumescence response (P<0.05) and a significantly higher level of sexual arousal (P<0.05) when stimulated with the 3-D HMD rather than with the conventional monitor. In the organic ED group, even the 3-D HMD did not produce a better response on either assessment. We therefore conclude that a 3-D HMD is more helpful than a conventional monitor for differentiating psychogenic from organic ED during AVSS.

  14. AIDS

    Science.gov (United States)

    Human immunodeficiency virus (HIV) is the virus that causes AIDS. When a person becomes infected with HIV, the ...

  15. Audiovisual integration of stimulus transients

    DEFF Research Database (Denmark)

    Andersen, Tobias; Mamassian, Pascal

    2008-01-01

    leaving only unsigned stimulus transients as the basis for audiovisual integration. Facilitation of luminance detection occurred even with varying audiovisual stimulus onset asynchrony and even when the sound lagged behind the luminance change by 75 ms supporting the interpretation that perceptual...

  16. The Audio-Visual Man.

    Science.gov (United States)

    Babin, Pierre, Ed.

    A series of twelve essays discuss the use of audiovisuals in religious education. The essays are divided into three sections: one which draws on the ideas of Marshall McLuhan and other educators to explore the newest ideas about audiovisual language and faith, one that describes how to learn and use the new language of audio and visual images, and…

  17. Training Aids for Online Instruction: An Analysis.

    Science.gov (United States)

    Guy, Robin Frederick

    This paper describes a number of different types of training aids currently employed in online training: non-interactive audiovisual presentations; interactive computer-based aids; partially interactive aids based on recorded searches; print-based materials; and kits. The advantages and disadvantages of each type of aid are noted, and a table…

  18. Application and design of audio-visual aids in stomatology teaching of cariology, endodontology and operative dentistry to non-stomatology students

    Institute of Scientific and Technical Information of China (English)

    倪雪岩; 吕亚林; 曹莹; 臧滔; 董坚; 丁芳; 李若萱

    2014-01-01

    Objective: To evaluate the effects of audio-visual aids on the teaching of cariology, endodontology and operative dentistry to non-stomatology students. Methods: Totally 77 students from the 2010-2011 matriculating classes of the Preventive Medicine Department of Capital Medical University were selected. Diversified audio-visual aids were used comprehensively in teaching. A theory examination and a follow-up survey were carried out and analyzed to obtain feedback on the combined teaching methods. Results: The students showed good theoretical knowledge of endodontics, with a mean score of 24.2 ± 1.1. The questionnaire survey showed that 89.6% (69/77) of students had a positive attitude towards the improved teaching method, and 90.9% (70/77) felt that the audio-visual aids used in stomatology teaching improved their learning ability. Conclusions: Application of audio-visual aids in stomatology teaching increases interest in learning and improves the teaching effect. However, the integration should be carefully prepared, in combination with cross teaching methods and elicitation pedagogy, in order to achieve optimal teaching results.

  19. Plantilla 1: El documento audiovisual: elementos importantes

    OpenAIRE

    2011-01-01

    The concept of the audiovisual document and of audiovisual documentation, examining in depth the distinction between documentation of moving images with possible incorporation of sound and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works and audiovisual heritage according to Félix del Valle.

  20. Report on Meeting of Directors of National Audio-Visual Services and Documentary Film Units in South and East Asia, Kuala Lumpur, 31 July - August 1961.

    Science.gov (United States)

    United Nations Educational, Scientific, and Cultural Organization, Paris (France).

    The purpose of this meeting was to develop cooperative action in Asia in the field of audiovisual aids in education, based on the work of existing national audiovisual services and documentary film units, and to consider cooperation between these services and units and the International Council for Educational Films. The following agenda was…

  1. Blacklist Established in Chinese Audiovisual Market

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Chinese audiovisual market is to impose a ban on audiovisual product dealers whose licenses have been revoked for violating the law. This ban will prohibit them from dealing in audiovisual products for ten years. Their names are to be included on a blacklist made known to the public.

  2. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the histo

  3. Anglo-American Cataloging Rules. Chapter Twelve, Revised. Audiovisual Media and Special Instructional Materials.

    Science.gov (United States)

    American Library Association, Chicago, IL.

    Chapter 12 of the Anglo-American Cataloging Rules has been revised to provide rules for works in the principal audiovisual media (motion pictures, filmstrips, videorecordings, slides, and transparencies) as well as instructional aids (charts, dioramas, flash cards, games, kits, microscope slides, models, and realia). The rules for main and added…

  4. Bilingualism affects audiovisual phoneme identification.

    Science.gov (United States)

    Burfin, Sabine; Pascalis, Olivier; Ruiz Tada, Elisa; Costa, Albert; Savariaux, Christophe; Kandel, Sonia

    2014-01-01

    We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment, we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience, i.e., exposure to a double phonological code during childhood, affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition, monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically "deaf" and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation, in contrast, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded more quickly than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  5. Bilingualism affects audiovisual phoneme identification

    Directory of Open Access Journals (Sweden)

    Sabine eBurfin

    2014-10-01

    Full Text Available We all go through a process of perceptual narrowing for phoneme identification. As we become experts in the languages we hear in our environment, we lose the ability to identify phonemes that do not exist in our native phonological inventory. This research examined how linguistic experience, i.e., exposure to a double phonological code during childhood, affects the visual processes involved in non-native phoneme identification in audiovisual speech perception. We conducted a phoneme identification experiment with bilingual and monolingual adult participants. It was an ABX task involving a Bengali dental-retroflex contrast that does not exist in any of the participants' languages. The phonemes were presented in audiovisual (AV) and audio-only (A) conditions. The results revealed that in the audio-only condition, monolinguals and bilinguals had difficulties in discriminating the retroflex non-native phoneme. They were phonologically deaf and assimilated it to the dental phoneme that exists in their native languages. In the audiovisual presentation, in contrast, both groups could overcome the phonological deafness for the retroflex non-native phoneme and identify both Bengali phonemes. However, monolinguals were more accurate and responded more quickly than bilinguals. This suggests that bilinguals do not use the same processes as monolinguals to decode visual speech.

  6. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively wi

  7. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception and ...

  8. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Full Text Available Michel Foucault teaches that every systematic discourse - including one that claims to be "neutral" or "a disinterested, objective view of what happens" - is in fact a mechanism for articulating knowledge and, subsequently, for forming power. The emergence of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of declarations from filmmakers, essays from academics and predictions from media demiurges.

  9. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    29 CFR Part 2, Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.13 Audiovisual coverage prohibited: The Department shall not permit audiovisual coverage of...

  10. Low cost training aids and devices

    Science.gov (United States)

    Lawver, J.; Lee, A.

    1984-01-01

    The need for advanced flight simulators for two engine aircraft is discussed. Cost effectiveness is a major requirement. Other training aids available for increased effectiveness are recommended. Training aids include: (1) audio-visual slides; (2) information transfer; (3) programmed instruction; and (4) interactive training systems.

  11. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use, supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected, owing to synchronization, when speech and video are fused together. The experimental results demonstrate that the system performs well in real time and has a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition in the future.
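    The feature-level fusion this abstract describes can be sketched schematically. The sketch below simply concatenates separately selected speech and facial feature vectors into a single audiovisual vector; the index sets are placeholders of the sizes reported in the abstract, not the rough-set reducts the paper actually computes:

```python
def select(features, keep_indices):
    """Keep only the feature dimensions retained by feature selection."""
    return [features[i] for i in keep_indices]

def fuse(speech_features, facial_features, speech_keep, facial_keep):
    """Feature-level fusion: concatenate the selected unimodal vectors."""
    return select(speech_features, speech_keep) + select(facial_features, facial_keep)

# Toy vectors standing in for the 37 speech and 33 facial features of one frame.
speech = [float(i) for i in range(37)]
facial = [float(i) for i in range(33)]

# Placeholder index sets of the sizes reported in the abstract (13 and 10).
speech_keep = list(range(13))
facial_keep = list(range(10))

fused = fuse(speech, facial, speech_keep, facial_keep)
print(len(fused))  # 23 fused dimensions per frame in this toy setup
```

A real system would choose the index sets with a feature-selection criterion rather than taking the first dimensions, but the fusion step itself is exactly this concatenation.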

  12. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  13. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    Full Text Available It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sound, and (iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  14. The audio-visual revolution: do we really need it?

    Science.gov (United States)

    Townsend, I

    1979-03-01

    In the United Kingdom, the audio-visual revolution has steadily gained converts in the nursing profession. Nurse tutor courses now contain information on the techniques of educational technology, and schools of nursing increasingly own (or wish to own) many of the sophisticated electronic aids to teaching that abound. This is taking place at a time of hitherto unexperienced crisis and change. Funds have been or are being made available to buy audio-visual equipment, but its purchase and use rely on satisfying personal whim, prejudice or educational fashion, not on considerations of educational efficiency. In the rush of enthusiasm, the overwhelmed teacher (everywhere; the phenomenon is not confined to nursing) forgets to ask the searching, critical questions: 'Why should we use this aid?', 'How effective is it?', 'And, at what?'. Influential writers in this profession have repeatedly called for a more responsible attitude towards the published research work of other fields. In an attempt to discover what is known about the answers to this group of questions, an eclectic look at media research is taken, and the widespread dissatisfaction existing among international educational technologists is noted. The paper isolates from the literature several causative factors responsible for the present state of affairs. Findings from the field of educational television are cited as representative of an aid that has had a considerable amount of time and research directed at it. The concluding part of the paper shows that the decisions to be taken in using or not using educational media are more complicated than might at first appear.

  15. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    … teaching of the media, following the initiative of Spain and Portugal, an analysis of some international university educational models was made. Owing to the extension and focalization of information technology and web communication through the Internet, audiovisual aids as technological instruments have gained utility as dynamic and conciliatory resources with special characteristics that differentiate them from the other resources in the audiovisual ecosystem. As a result of this research, two lines of application are proposed: A. A proposal of iconic and audiovisual language as a learning objective and/or as a curriculum subject in the university syllabus, including workshops for the development of the audiovisual document, digital photography and audiovisual production. B. Usage of audiovisual resources as educational means, which implies a prior training process for teachers in the activities recommended for teachers and students. As a consequence, suggestions that allow implementing both lines of academic action are presented. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  16. La Documentación Audiovisual en las empresas televisivas

    OpenAIRE

    2003-01-01

    The information systems and audiovisual documentation services in television stations are part of a great machinery for the good operation of audiovisual companies. The present work describes the main characteristics of audiovisual documentation within the framework of televised audiovisual organizations, offering a brief overview of the most relevant aspects that the main users of these services must know. The article tries to demonstrate the importance of these services and to show the possibilities they offer...

  17. Audiovisual integration facilitates unconscious visual scene processing.

    Science.gov (United States)

    Tan, Jye-Sheng; Yeh, Su-Ling

    2015-10-01

    Meanings of masked complex scenes can be extracted without awareness; however, it remains unknown whether audiovisual integration occurs with an invisible complex visual scene. The authors examine whether a scenery soundtrack can facilitate unconscious processing of a subliminal visual scene. The continuous flash suppression paradigm was used to render a complex scene picture invisible, and the picture was paired with a semantically congruent or incongruent scenery soundtrack. Participants were asked to respond as quickly as possible if they detected any part of the scene. Release-from-suppression time was used as an index of unconscious processing of the complex scene, which was shorter in the audiovisual congruent condition than in the incongruent condition (Experiment 1). The possibility that participants adopted different detection criteria for the 2 conditions was excluded (Experiment 2). The audiovisual congruency effect did not occur for objects-only (Experiment 3) and background-only (Experiment 4) pictures, and it did not result from consciously mediated conceptual priming (Experiment 5). The congruency effect was replicated when catch trials without scene pictures were added to exclude participants with high false-alarm rates (Experiment 6). This is the first study demonstrating unconscious audiovisual integration with subliminal scene pictures, and it suggests expansions of scene-perception theories to include unconscious audiovisual integration.

  18. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  19. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    29 CFR Part 2, Office of the Secretary of Labor, General Regulations, Audiovisual Coverage of Administrative Hearings, § 2.12 Audiovisual coverage permitted: The following are the types of hearings where the...

  20. Píndoles audiovisuals 3x3

    OpenAIRE

    Raja Nadales, Daniel

    2014-01-01

    Creation of three audiovisual "pills" of approximately 3 minutes each, consisting of a series of tips related to health, patient care and the patient's environment, serving a useful function for the user. The pills are complemented by easily comprehensible and understandable language and are freely accessible through distribution over the Internet, adapted to any electronic audiovisual playback device.

  1. Audio-visual training-aid for speechreading

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich; Gebert, H.

    2011-01-01

    … recorded video material; it also allows the teacher to produce and combine a large number of individual lessons without the need of expensive recording equipment. Our system uses a scene manager to enhance teaching. It allows the creation of different scenarios that are composed of appropriate background images … of classroom teaching, but the system may also be used as a new e-learning or, in general, distance learning tool for hearing impaired people. It presents a facial animation on the computer screen with synchronized speech output and is driven by input text sequences in orthographic transcription. The input may … modular structure of the software package and the centralized event manager, it is possible to add or replace specific modules when needed. The present version of our teacher-student module uses a hierarchically structured composition of important single words and short phrases, supplemented by easy…

  2. El tratamiento documental del mensaje audiovisual Documentary treatment of the audio-visual message

    Directory of Open Access Journals (Sweden)

    Blanca Rodríguez Bravo

    2005-06-01

    Full Text Available The peculiarities of the audiovisual document and the treatment it undergoes in TV broadcasting stations are analyzed. The particular features of images condition their analysis and recovery; this paper establishes stages and procedures for the representation of audiovisual messages with a view to their re-usability. Also, some considerations about the automatic processing of video and the changes introduced by digital TV are made.

  3. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)
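    The spreadsheet depreciation calculation mentioned in this record is straightforward to reproduce. A minimal sketch, assuming straight-line depreciation (the survey does not specify the method, and the equipment figures below are purely illustrative):

```python
def straight_line_schedule(cost, salvage, useful_life_years):
    """Yearly book values under straight-line depreciation:
    the asset loses (cost - salvage) / life in value each year."""
    annual = (cost - salvage) / useful_life_years
    values = [cost]
    for _ in range(useful_life_years):
        values.append(round(values[-1] - annual, 2))
    return values

# Hypothetical example: a $1,200 overhead projector, $200 salvage, 5-year life.
print(straight_line_schedule(1200, 200, 5))
# [1200, 1000.0, 800.0, 600.0, 400.0, 200.0]
```

The same recurrence is what a spreadsheet column of `=prev - annual` formulas computes; the estimated longevity from the Delphi survey would supply the useful-life input.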

  4. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  5. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-01

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity.

  6. Audiovisual Prosody and Feeling of Knowing

    Science.gov (United States)

    Swerts, M.; Krahmer, E.

    2005-01-01

    This paper describes two experiments on the role of audiovisual prosody for signalling and detecting meta-cognitive information in question answering. The first study consists of an experiment, in which participants are asked factual questions in a conversational setting, while they are being filmed. Statistical analyses bring to light that the…

  7. Audiovisual vocal outburst classification in noisy conditions

    NARCIS (Netherlands)

    Eyben, Florian; Petridis, Stavros; Schuller, Björn; Pantic, Maja

    2012-01-01

    In this study, we investigate an audiovisual approach for classification of vocal outbursts (non-linguistic vocalisations) in noisy conditions using Long Short-Term Memory (LSTM) Recurrent Neural Networks and Support Vector Machines. Fusion of geometric shape features and acoustic low-level descriptors…

  8. Active Methodology in the Audiovisual Communication Degree

    Science.gov (United States)

    Gimenez-Lopez, J. L.; Royo, T. Magal; Laborda, Jesus Garcia; Dunai, Larisa

    2010-01-01

    The paper describes the adaptation methods of the active methodologies of the new European higher education area in the new Audiovisual Communication degree under the perspective of subjects related to the area of the interactive communication in Europe. The proposed active methodologies have been experimentally implemented into the new academic…

  9. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
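
    The window-width and adaptation-effect measures described above can be sketched as follows. This sketch uses linear interpolation of the 0.5 crossing points of a bell-shaped proportion-"synchronous" curve; the study itself used individually fitted psychometric functions, and the toy curves below are assumptions for illustration.

```python
# Sketch: synchrony-window width and adaptation effect. The window is bounded
# by the SOAs where the proportion-"synchronous" curve crosses 0.5 (found by
# linear interpolation); the adaptation effect is the shift of the window
# centre after adapting to asynchronous pairs.

def crossings(soas, p_sync, level=0.5):
    """SOAs where the curve crosses `level`, by linear interpolation."""
    xs = []
    pts = list(zip(soas, p_sync))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if (y0 - level) * (y1 - level) < 0:
            xs.append(x0 + (level - y0) * (x1 - x0) / (y1 - y0))
    return xs

def window(soas, p_sync):
    """Return (width, centre) of the synchrony window."""
    xs = crossings(soas, p_sync)
    lo, hi = xs[0], xs[-1]
    return hi - lo, (lo + hi) / 2

soas = [-400, -200, 0, 200, 400]     # ms; negative = sound leads
pre  = [0.1, 0.6, 0.9, 0.6, 0.1]     # before adapting (toy values)
post = [0.1, 0.4, 0.8, 0.9, 0.3]     # after adapting to sound-lag pairs

w_pre, c_pre = window(soas, pre)
w_post, c_post = window(soas, post)
adaptation_effect = c_post - c_pre   # positive = shifted toward sound-lag
```

    Comparing `adaptation_effect` between age groups is the comparison the abstract reports; a smaller value in older observers corresponds to reduced recalibration.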

  10. Reduced audiovisual recalibration in the elderly

    Directory of Open Access Journals (Sweden)

    Yu Man eChan

    2014-08-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy ageing results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of ageing on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for fifteen younger (22-32 years old) and fifteen older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous, nor with their synchrony window widths. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.

  11. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must...

  12. Audio-visual affective expression recognition

    Science.gov (United States)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtlety of human emotional behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.

  13. Stuttering and speech naturalness: audio and audiovisual judgments.

    Science.gov (United States)

    Martin, R R; Haroldson, S K

    1992-06-01

    Unsophisticated raters, using 9-point interval scales, judged speech naturalness and stuttering severity of recorded stutterer and nonstutterer speech samples. Raters judged separately the audio-only and audiovisual presentations of each sample. For speech naturalness judgments of stutterer samples, raters invariably judged the audiovisual presentation more unnatural than the audio presentation of the same sample; but for the nonstutterer samples, there was no difference between audio and audiovisual naturalness ratings. Stuttering severity ratings did not differ significantly between audio and audiovisual presentations of the same samples. Rater reliability, interrater agreement, and intrarater agreement for speech naturalness judgments were assessed.
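
    Agreement on 9-point interval scales is often summarized as the proportion of paired ratings falling within ±1 scale value. Whether Martin and Haroldson used this exact index is an assumption of the following sketch; the data are invented for illustration.

```python
# Sketch of inter- and intra-rater agreement for 9-point scale ratings,
# under the common "within +/-1 scale value" convention (an assumption;
# the record does not specify the agreement index used).

def agreement(ratings_a, ratings_b, tolerance=1):
    """Proportion of paired ratings differing by <= tolerance."""
    pairs = list(zip(ratings_a, ratings_b))
    hits = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return hits / len(pairs)

# Inter-rater: two raters judging the same samples.
rater1 = [3, 5, 7, 2, 8, 4]
rater2 = [4, 5, 9, 2, 7, 6]
inter = agreement(rater1, rater2)   # 4 of 6 pairs within +/-1

# Intra-rater: one rater judging the same samples twice.
first_pass  = [3, 5, 7, 2, 8, 4]
second_pass = [3, 6, 7, 3, 8, 5]
intra = agreement(first_pass, second_pass)  # all pairs within +/-1
```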

  14. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder.

    Science.gov (United States)

    de Boer-Schellekens, Liselotte; Eussen, Mart; Vroomen, Jean

    2013-01-01

    We examined sensitivity of audiovisual temporal order in adolescents with autism spectrum disorder (ASD) using an audiovisual temporal order judgment (TOJ) task. In order to assess domain-specific impairments, the stimuli varied in social complexity from simple flash/beeps to videos of a handclap or a speaking face. Compared to typically-developing controls, individuals with ASD were generally less sensitive in judgments of audiovisual temporal order (larger just noticeable differences, JNDs), but there was no specific impairment with social stimuli. This suggests that people with ASD suffer from a more general impairment in audiovisual temporal processing.

  15. Audiovisual bimodal mutual compensation of Chinese

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The perception of human language is inherently a multi-modal process, in which audio information can be compensated by visual information to improve recognition performance. This phenomenon has been studied in English, German, Spanish, and other languages, but it has not yet been reported for Chinese. In our experiment, 14 syllables (/ba, bi, bian, biao, bin, de, di, dian, duo, dong, gai, gan, gen, gu/), extracted from the Chinese audiovisual bimodal speech database CAVSR-1.0, were pronounced by 10 subjects. The audio-only stimuli, audiovisual stimuli, and visual-only stimuli were recognized by 20 observers. The audio-only and audiovisual stimuli were each presented under 5 conditions: no noise, SNR 0 dB, -8 dB, -12 dB, and -16 dB. From the experimental results, the following conclusions are reached for Chinese speech: human beings can recognize visual-only stimuli rather well; the place of articulation determines the visual distinction; and in noisy environments, audio information can be remarkably compensated by visual information, so that recognition performance is greatly improved.

  16. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Science.gov (United States)

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both…

  17. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear to which process is this audiovisual interaction related: to processing of acoustical features or to classification of stimuli? To investigate this question, event-related potentials were recorded…

  18. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  19. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of getting to…

  20. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  1. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  2. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli.

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J

    2010-07-01

    The neural activity of speech sound processing (the N1 component of the auditory ERP) can be suppressed if a speech sound is accompanied by concordant lip movements. Here we demonstrate that this audiovisual interaction is neither speech specific nor linked to humanlike actions but can be observed with artificial stimuli if their timing is made predictable. In Experiment 1, a pure tone synchronized with a deformation of a rectangle induced a smaller auditory N1 than auditory-only presentations if the temporal occurrence of this audiovisual event was made predictable by two moving disks that touched the rectangle. Local autoregressive average source estimation indicated that this audiovisual interaction may be related to integrative processing in auditory areas. When the moving disks did not precede the audiovisual stimulus--making the onset unpredictable--there was no N1 reduction. In Experiment 2, the predictability of the leading visual signal was manipulated by introducing a temporal asynchrony between the audiovisual event and the collision of moving disks. Audiovisual events occurred either at the moment, before (too "early"), or after (too "late") the disks collided on the rectangle. When asynchronies varied from trial to trial--rendering the moving disks unreliable temporal predictors of the audiovisual event--the N1 reduction was abolished. These results demonstrate that the N1 suppression is induced by visual information that both precedes and reliably predicts audiovisual onset, without a necessary link to human action-related neural mechanisms.

  3. Audiovisual Processing in Children with and without Autism Spectrum Disorders

    Science.gov (United States)

    Mongillo, Elizabeth A.; Irwin, Julia R.; Whalen, D. H.; Klaiman, Cheryl; Carter, Alice S.; Schultz, Robert T.

    2008-01-01

    Fifteen children with autism spectrum disorders (ASD) and twenty-one children without ASD completed six perceptual tasks designed to characterize the nature of the audiovisual processing difficulties experienced by children with ASD. Children with ASD scored significantly lower than children without ASD on audiovisual tasks involving human faces…

  4. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  5. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, o...

  6. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  7. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed…

  8. Audiovisual Media and the Disabled. AV in Action 1.

    Science.gov (United States)

    Nederlands Bibliotheek en Lektuur Centrum, The Hague (Netherlands).

    Designed to provide information on public library services to the handicapped, this pamphlet contains case studies from three different countries on various aspects of the provision of audiovisual services to the disabled. The contents include: (1) "The Value of Audiovisual Materials in a Children's Hospital in Sweden" (Lis Byberg); (2)…

  9. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  10. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  11. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    Science.gov (United States)

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  12. Teleconferences and Audiovisual Materials in Earth Science Education

    Science.gov (United States)

    Cortina, L. M.

    2007-05-01

    Unidad de Educación Continua y a Distancia, Universidad Nacional Autónoma de México, Coyoacán 04510, México, MEXICO. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. In some cases, however, these resources go largely unused for a number of reasons, such as logistical problems, restricted internet and telecommunications access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as they would in the classroom. Courses taught by teleconference require students and teachers to work without physical contact, but both have access to multimedia that supports the presentation. Well-selected multimedia material helps students identify and interpret digital information, aiding their understanding of the natural phenomena integral to the Earth sciences. Cooperation with international partnerships, providing access to new materials, experiences, and field practices, will greatly strengthen these efforts. We will present specific examples of our experience with the use of technology in geoscience education in the Earth Sciences Postgraduate Program of UNAM.

  13. 7 CFR 3015.200 - Acknowledgement of support on publications and audiovisuals.

    Science.gov (United States)

    2010-01-01

    ... audiovisuals. 3015.200 Section 3015.200 Agriculture Regulations of the Department of Agriculture (Continued... Miscellaneous § 3015.200 Acknowledgement of support on publications and audiovisuals. (a) Definitions. Appendix A defines “audiovisual,” “production of an audiovisual,” and “publication.” (b)...

  14. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  15. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  16. Audiovisual bimodal mutual compensation of Chinese

    Institute of Scientific and Technical Information of China (English)

    ZHOU; Zhi

    2001-01-01

    [1] Richard, P., Schumeyer, Kenneth E. B., The effect of visual information on word-initial consonant perception of dysarthric speech, in Proc. ICSLP'96, October 3-6, 1996, Philadelphia, Pennsylvania, USA.
    [2] Goff, B. L., Marigny, T. G., Benoit, C., Read my lips...and my jaw! How intelligible are the components of a speaker's face? Eurospeech'95, 4th European Conference on Speech Communication and Technology, Madrid, September 1995.
    [3] McGurk, H., MacDonald, J., Hearing lips and seeing voices, Nature, 1976, 264: 746.
    [4] Duran, A. F., McGurk effect in Spanish and German listeners: Influences of visual cues in the perception of Spanish and German conflicting audio-visual stimuli, Eurospeech'95, 4th European Conference on Speech Communication and Technology, Madrid, September 1995.
    [5] Luettin, J., Visual speech and speaker recognition, Ph.D. thesis, University of Sheffield, 1997.
    [6] Xu Yanjun, Du Limin, Chinese audiovisual bimodal speech database CAVSR-1.0, Chinese Journal of Acoustics, to appear.
    [7] Zhang Jialu, Speech corpora and language input/output methods' evaluation, Chinese Applied Acoustics, 1994, 13(3): 5.

  17. How to Make Junior English Lessons Lively and Interesting by Different Teaching Aids

    Institute of Scientific and Technical Information of China (English)

    翟玮

    2002-01-01

    This paper is mainly concerned with the use of teaching aids in junior English lessons, covering three aspects: visual aids; audio-visual means; and body language and tone. Used in this way, teaching aids can give students comparatively realistic contexts, attract their attention, enhance their interest in English, and strengthen their sense of competition.

  18. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  19. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.
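
    The per-modality comparison described above can be sketched as a simple per-condition summary. The trial-record layout here is assumed for illustration, not taken from the study.

```python
# Sketch: summarize a concurrent delayed matching-to-sample session by trial
# modality. Each trial record is (modality, correct, reaction_time_ms);
# this field layout is a hypothetical convention for the sketch.
from collections import defaultdict

def summarize(trials):
    """Return {modality: (accuracy, mean_rt_ms)} over all trials."""
    by_mod = defaultdict(list)
    for modality, correct, rt in trials:
        by_mod[modality].append((correct, rt))
    summary = {}
    for modality, results in by_mod.items():
        accuracy = sum(c for c, _ in results) / len(results)
        mean_rt = sum(rt for _, rt in results) / len(results)
        summary[modality] = (accuracy, mean_rt)
    return summary

trials = [
    ("auditory", 1, 620), ("auditory", 0, 700),
    ("visual", 1, 590), ("visual", 1, 640),
    ("audiovisual", 1, 540), ("audiovisual", 1, 560),
]
stats = summarize(trials)
# In this toy session the audiovisual condition shows higher accuracy and
# faster responses, mirroring the bimodal advantage reported above.
```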

  20. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan

    Science.gov (United States)

    De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T.

    2016-01-01

    Multisensory interactions are well established as conveying an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations. PMID:27551918
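    The "temporal window of simultaneity" mentioned in this record is commonly quantified as the range of stimulus onset asynchronies (SOAs) over which observers report the auditory and visual events as simultaneous. As an illustrative sketch (not the authors' analysis code; the 50% criterion and the interpolation scheme are assumptions), such a window width could be estimated from simultaneity-judgment data like this:

```python
import numpy as np

def binding_window(soas_ms, p_simultaneous, criterion=0.5):
    """Estimate the width (ms) of the audiovisual temporal window of
    simultaneity: the span of SOAs over which the interpolated proportion
    of 'simultaneous' responses stays at or above the criterion.
    SOAs must be given in increasing order (required by np.interp)."""
    soas = np.asarray(soas_ms, dtype=float)
    p = np.asarray(p_simultaneous, dtype=float)
    grid = np.linspace(soas.min(), soas.max(), 2001)  # fine SOA grid
    p_interp = np.interp(grid, soas, p)               # piecewise-linear fit
    above = grid[p_interp >= criterion]
    return float(above.max() - above.min()) if above.size else 0.0
```

    On this definition, a narrower window at a given age would indicate a more precise audiovisual simultaneity judgment.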

  1. A Review on Audio-visual Translation Studies

    Institute of Scientific and Technical Information of China (English)

    李瑶

    2008-01-01

    This paper provides a thorough review of audio-visual translation studies both at home and abroad. Reviewing foreign achievements in this specific field of translation studies can shed some light on our national audio-visual translation practice and research. The review of Chinese scholars' audio-visual translation studies points out potential directions for development, as well as aspects that have been neglected. Based on this summary of relevant studies, possible topics for further research are proposed.

  2. Educating Brazilian workers about AIDS.

    Science.gov (United States)

    1991-12-01

    This article contains the script for a slide-tape presentation entitled Working Against AIDS, developed by the Brazil Family Planning Association (BEMFAM) to debunk common misconceptions about the disease. This audio-visual, which targets Brazilian workers, can be used during talks, seminars, and meetings. A discussion of the issues involved usually follows the presentation of Working Against AIDS. The presentation contains 30 illustrated slides (these are included in the article). The presentation begins by explaining that much of the information concerning AIDS is prejudicial and misleading. The next few slides point out some of the common misconceptions about AIDS, such as claims denying the existence of the disease, or suggestions that only homosexuals and prostitutes are at risk. The presentation then goes on to explain the ways in which the virus can and cannot be transmitted. Then it discusses how the virus destroys the body's natural defenses and explains the ensuing symptoms. Slides 14 and 15 point out that no cure yet exists for AIDS, making prevention essential. Slides 16-23 explain which actions are considered to be high risk and which ones do not entail risk. Noting that AIDS can be prevented, slide 24 says that the disease should not present an obstacle to spontaneous manifestations of human relations. The next slide explains that condoms should always be used when having sex with someone who could be infected with AIDS. Finally, slides 26-30 demonstrate the proper way to use and dispose of a condom.

  3. Nuevos actores sociales en el escenario audiovisual

    Directory of Open Access Journals (Sweden)

    Gloria Rosique Cedillo

    2012-04-01

    Full Text Available With the entry of private broadcasters into the Spanish audiovisual sector, the entertainment content of generalist television underwent far-reaching changes that were reflected in programming schedules. This situation has opened a debate around the dilemma of having a television service, whether public or private, that does not meet society's expectations. It has prompted civic groups, organized into viewers' associations, to undertake various actions aimed at influencing the direction entertainment content is taking, with a strong commitment to educating viewers about audiovisual media and to citizen participation in television matters.

  4. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion, in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...... ordinal models that can account for the McGurk illusion. We compare this type of model to the Fuzzy Logical Model of Perception (FLMP), in which the response categories are not ordered. While the FLMP generally fit the data better than the ordinal model, it also employs more free parameters in complex...... experiments when the number of response categories is high, as it is for speech perception in general. Testing the predictive power of the models using a form of cross-validation, we found that ordinal models perform better than the FLMP. Based on these findings we suggest that ordinal models generally have...
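    For context, the FLMP mentioned in this record combines the auditory and visual degrees of support multiplicatively and normalizes over the response alternatives (this is the standard textbook form of Massaro's model, not an equation taken from this record):

```latex
P(r_k \mid A, V) = \frac{a_k \, v_k}{\sum_{j} a_j \, v_j}
```

    where $a_k$ and $v_k$ are the degrees of support that the auditory and visual channels lend to response alternative $k$. An ordinal model differs in constraining the response categories to lie on an ordered scale rather than treating them as unordered alternatives.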

  5. An audiovisual database of English speech sounds

    Science.gov (United States)

    Frisch, Stefan A.; Nikjeh, Dee Adams

    2003-10-01

    A preliminary audiovisual database of English speech sounds has been developed for teaching purposes. This database contains all Standard English speech sounds produced in isolated words in word initial, word medial, and word final position, unless not allowed by English phonotactics. There is one example of each word spoken by a male and a female talker. The database consists of an audio recording, video of the face from a 45-degree angle off center, and ultrasound video of the tongue in the mid-sagittal plane. The files contained in the database are suitable for examination by the Wavesurfer freeware program in audio or video modes [Sjolander and Beskow, KTH Stockholm]. This database is intended as a multimedia reference for students in phonetics or speech science. A demonstration and plans for further development will be presented.

  6. On-line repository of audiovisual material feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  7. A measure for assessing the effects of audiovisual speech integration.

    Science.gov (United States)

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account both accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it can be applied across normal-hearing and aging populations. For example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent, self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
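    The benchmark this record describes, parallel, independent, self-terminating processing of the unimodal inputs, predicts an audiovisual response-time distribution given by the probability that either unimodal "race" has finished: F_AV(t) = F_A(t) + F_V(t) - F_A(t)F_V(t). A minimal sketch of that comparison follows (an illustration of the general race-model benchmark, not the authors' published measure, which additionally incorporates accuracy):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of response times at each time t."""
    rts = np.asarray(rts, dtype=float)
    return np.array([(rts <= t).mean() for t in t_grid])

def integration_gain(rt_av, rt_a, rt_v, t_grid):
    """Empirical audiovisual CDF minus the parallel-independent ('race')
    prediction F_A + F_V - F_A * F_V. Positive values indicate audiovisual
    performance exceeding what independent unimodal channels predict."""
    f_a, f_v = ecdf(rt_a, t_grid), ecdf(rt_v, t_grid)
    prediction = f_a + f_v - f_a * f_v
    return ecdf(rt_av, t_grid) - prediction
```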

  8. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion, where seeing the talking face influences the auditory phonetic percept, and by the audiovisual detection advantage, where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  9. Plan empresa productora de audiovisuales : La Central Audiovisual y Publicidad

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    El presente documento corresponde al plan de creación de empresa La Central Publicidad y Audiovisual, una empresa dedicada a la pre-producción, producción y post-producción de material de tipo audiovisual. La empresa estará ubicada en la ciudad de Cali y tiene como mercado objetivo atender los diferentes tipos de empresas de la ciudad, entre las cuales se encuentran las pequeñas, medianas y grandes empresas.

  10. Cinema, Vídeo, Digital: a virtualidade do audiovisual

    Directory of Open Access Journals (Sweden)

    Polidoro, Bruno

    2008-01-01

    Full Text Available O artigo propõe-se a refletir sobre as diversas manifestações contemporâneas do audiovisual, a partir das idéias de Vilém Flusser, focando-se no cinema, no vídeo e nas tecnologias digitais. Com os conceitos de Henri Bergson, busca perceber o audiovisual como uma virtualidade e, com isso, compreender o sentido de linguagem nesses diversos suportes de som e imagem

  11. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
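    The classification logic this record describes, relating trial-by-trial mask visibility to the perceptual report, can be sketched as a simple classification-image computation (an illustration of the general technique, not the authors' exact analysis; the variable names are hypothetical):

```python
import numpy as np

def classification_image(masks, mcgurk_percept):
    """masks: (n_trials, n_frames) array of per-frame mouth visibility.
    mcgurk_percept: per-trial flag, True where the fused /ata/ percept
    occurred. Frames with large positive values were systematically more
    visible on illusion trials, i.e. they carried perceptually relevant
    visual speech information."""
    masks = np.asarray(masks, dtype=float)
    mcgurk = np.asarray(mcgurk_percept, dtype=bool)
    return masks[mcgurk].mean(axis=0) - masks[~mcgurk].mean(axis=0)
```

    Repeating this per frame yields the kind of spatiotemporal map of perceptually relevant visual features described above.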

  12. Intermodal timing relations and audio-visual speech recognition by normal-hearing adults.

    Science.gov (United States)

    McGrath, M; Summerfield, Q

    1985-02-01

    Audio-visual identification of sentences was measured as a function of audio delay in untrained observers with normal hearing; the soundtrack was replaced by rectangular pulses originally synchronized to the closing of the talker's vocal folds and then subjected to delay. When the soundtrack was delayed by 160 ms, identification scores were no better than when no acoustical information at all was provided. Delays of up to 80 ms had little effect on group-mean performance, but a separate analysis of a subgroup of better lipreaders showed a significant trend of reduced scores with increased delay in the range from 0-80 ms. A second experiment tested the interpretation that, although the main disruptive effect of the delay occurred on a syllabic time scale, better lipreaders might be attempting to use intermodal timing cues at a phonemic level. Normal-hearing observers determined whether a 120-Hz complex tone started before or after the opening of a pair of lip-like Lissajous figures. Group-mean difference limens (70.7% correct DLs) were -79 ms (sound leading) and +138 ms (sound lagging), with no significant correlation between DLs and sentence lipreading scores. It was concluded that most observers, whether good lipreaders or not, possess insufficient sensitivity to intermodal timing cues in audio-visual speech for them to be used analogously to voice onset time in auditory speech perception. The results of both experiments imply that delays of up to about 40 ms introduced by signal-processing algorithms in aids to lipreading should not materially affect audio-visual speech understanding.

  13. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  14. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements... limited exclusion order against certain infringing audiovisual components and products containing the...

  15. Audiovisual classification of vocal outbursts in human conversation using long-short-term memory networks

    NARCIS (Netherlands)

    Eyben, Florian; Petridis, Stavros; Schuller, Björn; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We investigate classification of non-linguistic vocalisations with a novel audiovisual approach, using Long Short-Term Memory (LSTM) Recurrent Neural Networks as highly successful dynamic sequence classifiers. The evaluation database is this year's Paralinguistic Challenge's Audiovisual Interest

  16. El fénix quiere vivir : algunas consideraciones sobre la documentación audiovisual

    OpenAIRE

    2003-01-01

    The paper presents an overview of audio-visual documents, including a retrospective study and the points of view of national and foreign authors on the importance of audio-visual materials and on their organization, preservation, and diffusion.

  17. Audio-Visual Integration of Emotional Information

    Directory of Open Access Journals (Sweden)

    Penny Bergman

    2011-10-01

    Full Text Available Emotions are central to our perception of the environment surrounding us (Berlyne, 1971). An important aspect of the emotional response to a sound depends on the meaning of the sound, i.e., it is not the physical parameter per se that determines our emotional response to the sound but rather the source of the sound (Genell, 2008) and the relevance it has to the self (Tajadura-Jiménez et al., 2010). When exposed to sound together with visual information, the information from both modalities is integrated, altering the perception of each modality, in order to generate a coherent experience. For emotional information, this integration is rapid and does not require attentional processes (De Gelder, 1999). The present experiment investigates perception of pink noise in two visual settings in a within-subjects design. Nineteen participants rated the same sound twice in terms of pleasantness and arousal, in either a pleasant or an unpleasant visual setting. The results showed that pleasantness of the sound decreased in the negative visual setting, thus suggesting an audio-visual integration in which the affective information in the visual modality is transferred to the auditory modality when information markers are lacking in it. The results are discussed in relation to theories of emotion perception.

  18. Temporal structure in audiovisual sensory selection.

    Directory of Open Access Journals (Sweden)

    Anne Kösem

    Full Text Available In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to what extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (a horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.

  19. [Current audiovisual technologies are a constituent of the continuing professional development concept].

    Science.gov (United States)

    Bezrukova, E Iu; Zatsepa, S A

    2009-01-01

    The paper is devoted to topical problems in the use of innovative information and communication technologies (ICT) in the higher medical education system, including postgraduate professional education. It sets out the key principles for organizing an educational process based on audiovisual technologies and gives numerous practical examples of the real use of ICT in the education of medical and other specialists, along with the results of studies on applying current technical aids in innovative professional education. Since each area of manpower training has its own specificity and goals, the authors propose highly effective ways of organizing an educational process that fully take into consideration the specific features of professional education. These technologies substantially expand access to educational resources, which is of great importance for a strategy of continuing professional development.

  20. A representação audiovisual das mulheres migradas The audiovisual representation of migrant women

    Directory of Open Access Journals (Sweden)

    Luciana Pontes

    2012-12-01

    Full Text Available In this paper I analyze the representations of migrant women in the audiovisual collections of some of the organizations that work with gender and immigration in Barcelona. In these audiovisual materials I found a recurring association of migrant women with poverty, criminality, ignorance, passivity, lack of documentation, gender violence, compulsory and numerous motherhood, prostitution, etc. I therefore tried to understand how these representations take shape, studying the narrative, stylistic, visual, and verbal elements through which these images and discourses about migrant women are articulated.

  1. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, and especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter color known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  2. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video, and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module, which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of quality monitoring. The same audio quality model is used for both of these phases, while two variants of the video quality model have been developed to address the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, the case in which the network is already set up, the aud...
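    The final combination step described in this record, mapping the audio and video module outputs to an audiovisual estimate, is typically a low-order polynomial with an audio-by-video interaction term. A minimal sketch on a 1-5 MOS scale follows (the coefficient values are illustrative assumptions, not the fitted parameters of the model described above):

```python
def audiovisual_quality(q_audio, q_video, a=0.25, b=0.15, c=0.15, d=0.10):
    """Combine per-modality quality estimates (1-5 MOS scale) into an
    audiovisual MOS via a linear model with an audio x video interaction.
    The default coefficients are illustrative placeholders."""
    q = a + b * q_audio + c * q_video + d * q_audio * q_video
    return max(1.0, min(5.0, q))  # clamp to the MOS scale
```

    The interaction term captures the common finding that audio and video quality reinforce each other: a degradation in one modality weighs more heavily when the other modality is also of high quality.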

  3. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole. 

  4. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Institution of Investigation... importation, and the sale within the United States after importation of certain audiovisual components and... certain audiovisual components and products containing the same that infringe one or more of claims 1,...

  5. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  6. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint... complaint entitled Certain Audiovisual Components and Products Containing the Same, DN 2884; the Commission... within the United States after importation of certain audiovisual components and products containing...

  7. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and...

  8. Dynamic Bayesian Networks for Audio-Visual Speech Recognition

    Directory of Open Access Journals (Sweden)

    Liang Luhong

    2002-01-01

    Full Text Available The use of visual features in audio-visual speech recognition (AVSR) is justified both by the speech generation mechanism, which is essentially bimodal in its audio and visual representation, and by the need for features that are invariant to acoustic noise perturbation. As a result, current AVSR systems demonstrate significant accuracy improvements in environments affected by acoustic noise. In this paper, we describe the use of two statistical models for audio-visual integration, the coupled HMM (CHMM) and the factorial HMM (FHMM), and compare the performance of these models with the existing models used in speaker-dependent audio-visual isolated-word recognition. The statistical properties of both the CHMM and FHMM make it possible to model the state asynchrony of the audio and visual observation sequences while preserving their natural correlation over time. In our experiments, the CHMM performs best overall, outperforming all the existing models and the FHMM.

  9. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive...... but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences...... integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration......

  10. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance.

  11. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  12. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode.

  13. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate…

  14. AIDS: resource materials for school personnel.

    Science.gov (United States)

    Fulton, G B; Metress, E; Price, J H

    1987-01-01

    The AIDS dilemma continues to escalate, leaving a legacy that probably will affect the nation for years to come. The U.S. Centers for Disease Control, the National Academy of Sciences, and the U.S. Surgeon General have noted that in the absence of a vaccine or treatment for AIDS, education remains the only effective means to prevent the spread of the disease. Thus, schools have an important role in protecting the public health. To respond appropriately to the situation, school personnel must become familiar with relevant information and resources available concerning AIDS. This article first provides essential information about AIDS using a question-and-answer format. Second, policy statements addressing school attendance by students infected with the virus that causes AIDS are presented. Third, hotlines that can be used to obtain more detailed information about AIDS are described. Fourth, organizations that can provide information for school health education about AIDS are identified. Fifth, an annotated list of audiovisual materials that schools can use to provide education about AIDS is provided. Sixth, a bibliography of publications relevant to school health education about AIDS is offered.

  15. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers...... often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general...

  16. El archivo de RTVV: Patrimonio Audiovisual de la Humanidad

    Directory of Open Access Journals (Sweden)

    Hidalgo Goyanes, Paloma

    2014-07-01

    Full Text Available Audiovisual documents are important for the study of the 20th and 21st centuries. Television archives contribute to shaping the collective imagination and form part of the Audiovisual Heritage of Humanity. The preservation of the RTVV audiovisual archive is the responsibility of the public authorities, as expressed in current legislation, and a right of citizens and taxpayers as heirs to this heritage, which reflects their history, their culture and their language.

  17. El archivo de RTVV: Patrimonio Audiovisual de la Humanidad

    OpenAIRE

    2014-01-01

    Audiovisual documents are important for the study of the 20th and 21st centuries. Television archives contribute to shaping the collective imagination and form part of the Audiovisual Heritage of Humanity. The preservation of the RTVV audiovisual archive is the responsibility of the public authorities, as expressed in current legislation, and a right of citizens and taxpayers as heirs to this heritage, which reflects their history, their culture and their language...

  18. Evolution of audiovisual production in five Spanish Cybermedia

    Directory of Open Access Journals (Sweden)

    Javier Mayoral Sánchez

    2014-12-01

    Full Text Available This paper quantifies and analyzes the evolution of the audiovisual production of five Spanish digital newspapers: abc.es, elconfidencial.com, elmundo.es, elpais.com and lavanguardia.com. To this end, the videos published on the five home pages were studied over four weeks (fourteen days in November 2011 and another fourteen in March 2014). This diachronic perspective reveals a remarkable contradiction in online media regarding audiovisual products: even with very considerable differences between them, the five media analyzed publish more and more videos, and they do so in the most valued areas of their home pages. However, they do not show a willingness to engage firmly…

  19. Audiovisual Quality Fusion based on Relative Multimodal Complexity

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Reiter, Ulrich

    2011-01-01

    In multimodal presentations the perceived audiovisual quality assessment is significantly influenced by the content of both the audio and visual tracks. Based on our earlier subjective quality test for finding the optimal trade-off between audio and video quality, this paper proposes a novel method...... designed auditory and visual features, the relative complexity analysis model across sensory modalities is proposed for deriving the fusion parameter. Experimental results have demonstrated that the content adaptive fusion parameter can improve the prediction accuracy of objective audiovisual quality...

  20. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Integration of speech signals from ear and eye is a well-known feature of speech perception. This is evidenced by the McGurk illusion in which visual speech alters auditory speech perception and by the advantage observed in auditory speech detection when a visual signal is present. Here we...... investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects is specific to speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages.

  1. El audiovisual como medio sociocomunicativo: hacia una antropología audiovisual performativa

    Directory of Open Access Journals (Sweden)

    José Manuel Vidal-Gálvez

    2016-01-01

    Full Text Available Audiovisual resources, as a vehicle for communicating and representing art, applied to social research make it possible to foster a kind of science that looks beyond mere scientific diagnosis. They allow the final product to be returned, packaged in simple and accessible language, and recognize as their main objective the return of its conclusions to the social sphere in which it was generated, as a path toward the dialectical and performative catalysis of the social and communicative fact. In this text, drawing on empirical work carried out in Spain and Ecuador, we present the viability of audiovisual anthropology as a means of carrying out a science that is engaged with the collective it represents and conducive to social change.

  2. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  3. Audiovisual Vowel Monitoring and the Word Superiority Effect in Children

    Science.gov (United States)

    Fort, Mathilde; Spinelli, Elsa; Savariaux, Christophe; Kandel, Sonia

    2012-01-01

    The goal of this study was to explore whether viewing the speaker's articulatory gestures contributes to lexical access in children (ages 5-10) and in adults. We conducted a vowel monitoring task with words and pseudo-words in audio-only (AO) and audiovisual (AV) contexts with white noise masking the acoustic signal. The results indicated that…

  4. Developing a typology of humor in audiovisual media

    NARCIS (Netherlands)

    Buijzen, M.A.; Valkenburg, P.M.

    2004-01-01

    The main aim of this study was to develop and investigate a typology of humor in audiovisual media. We identified 41 humor techniques, drawing on Berger's (1976, 1993) typology of humor in narratives, audience research on humor preferences, and an inductive analysis of humorous commercials. We analy…

  5. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  6. Modelling and Retrieving Audiovisual Information - A Soccer Video Retrieval System

    NARCIS (Netherlands)

    Woudstra, A.; Velthausz, D.D.; Poot, de H.J.G.; Moelaart El-Hadidy, F.; Jonker, W.; Houtsma, M.A.W.; Heller, R.G.; Heemskerk, J.N.H.

    1998-01-01

    This paper describes the results of an ongoing collaborative project between KPN Research and the Telematics Institute on multimedia information handling. The focus of the paper is the modelling and retrieval of audiovisual information. The paper presents a general framework for modeling multimedia

  7. Producing Slide and Tape Presentations: Readings from "Audiovisual Instruction"--4.

    Science.gov (United States)

    Hitchens, Howard, Ed.

    Designed to serve as a reference and source of ideas on the use of slides in combination with audiocassettes for presentation design, this book of readings from Audiovisual Instruction magazine includes three papers providing basic tips on putting together a presentation, five articles describing techniques for improving the visual images, five…

  8. Kijkwijzer: The Dutch rating system for audiovisual productions

    NARCIS (Netherlands)

    Valkenburg, P.M.; Beentjes, J.W.J.; Nikken, P.; Tan, E.S.H.

    2002-01-01

    Kijkwijzer is the name of the new Dutch rating system in use since early 2001 to provide information about the possible harmful effects of movies, home videos and television programs on young people. The rating system is meant to provide audiovisual productions with both age-based and content-based

  9. Today's and tomorrow's retrieval practice in the audiovisual archive

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2010-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video retriev…

  10. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  11. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to look at the neural responses of human participants during the McGurk illusion--in which auditory /aba/ and visual /aga/ inputs are fused into a perceived /ada/--in a large homogeneous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with those during incongruent audiovisual stimulation leading to the McGurk illusion, while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that superior temporal cortex responds preferentially when auditory and visual input support the same representation.

  12. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. We avoid setting any heuristic threshold in this segmentation by learning the correlation distributions of the speaker and the background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
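    As a rough illustration of the correlation measure this record describes, the quadratic mutual information between an audio feature and a visual feature can be approximated with Gaussian kernel density estimates evaluated on a grid. This is a minimal sketch under assumed names and a fixed bandwidth; the paper itself uses adaptive kernel bandwidths, which are not reproduced here.

```python
import numpy as np

def quadratic_mi(x, y, bandwidth=0.5, grid_size=60):
    """Approximate quadratic mutual information
    I_Q = integral (p(x,y) - p(x)p(y))^2 dx dy
    between two 1-D feature sequences via Gaussian KDE on a grid."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    gx = np.linspace(x.min() - 3 * bandwidth, x.max() + 3 * bandwidth, grid_size)
    gy = np.linspace(y.min() - 3 * bandwidth, y.max() + 3 * bandwidth, grid_size)
    norm = bandwidth * np.sqrt(2 * np.pi)

    # Per-sample Gaussian kernels evaluated on each grid (grid_size x n_samples).
    kx = np.exp(-0.5 * ((gx[:, None] - x[None, :]) / bandwidth) ** 2) / norm
    ky = np.exp(-0.5 * ((gy[:, None] - y[None, :]) / bandwidth) ** 2) / norm

    px = kx.mean(axis=1)                                  # marginal of audio feature
    py = ky.mean(axis=1)                                  # marginal of visual feature
    pxy = (kx[:, None, :] * ky[None, :, :]).mean(axis=2)  # joint density on the grid

    diff = pxy - px[:, None] * py[None, :]
    dx, dy = gx[1] - gx[0], gy[1] - gy[0]
    return float((diff ** 2).sum() * dx * dy)             # grid approximation of the integral
```

Dependent feature pairs (e.g. lip motion tracking speech energy) deviate from the product of marginals and yield a larger value than independent pairs, which is what the segmentation framework exploits locally.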

  13. An Audio-Visual Lecture Course in Russian Culture

    Science.gov (United States)

    Leighton, Lauren G.

    1977-01-01

    An audio-visual course in Russian culture is given at Northern Illinois University. A collection of 4-5,000 color slides is the basis for the course, with lectures focussed on literature, philosophy, religion, politics, art and crafts. Acquisition, classification, storage and presentation of slides, and organization of lectures are discussed. (CHK)

  14. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post-deci...
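    The record above is truncated, but the Hidden Markov Model temporal modeling it mentions can be illustrated with a minimal two-state Viterbi decoder (0 = silence, 1 = speech). The function name and the toy parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def viterbi_vad(loglik, log_trans, log_init):
    """Decode the most likely silence/speech state sequence from per-frame
    log-likelihoods with a 2-state HMM, using the Viterbi algorithm.

    loglik:    (T, 2) log-likelihood of each frame under each state
    log_trans: (2, 2) log transition matrix, log_trans[i, j] = log P(j | i)
    log_init:  (2,)   log initial state probabilities
    """
    T, S = loglik.shape
    delta = np.empty((T, S))               # best path score ending in each state
    psi = np.zeros((T, S), dtype=int)      # backpointers
    delta[0] = log_init + loglik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + loglik[t]
    path = np.empty(T, dtype=int)          # backtrack from the best final state
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path
```

With sticky transition probabilities, the decoder smooths frame-level decisions over time, which is the usual motivation for HMMs in voice activity detection.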

  15. Audiovisual Ethnography of Philippine Music: A Process-oriented Approach

    Directory of Open Access Journals (Sweden)

    Terada Yoshitaka

    2013-06-01

    Full Text Available Audiovisual documentation has been an important part of ethnomusicological endeavors, but until recently it was treated primarily as a tool of preservation and/or documentation that supplements written ethnography, albeit with a few notable exceptions. The proliferation of inexpensive video equipment has encouraged an unprecedented number of scholars and students in ethnomusicology to become involved in filmmaking, but its potential as a methodology has not been fully explored. As a small step toward redefining the application of audiovisual media, Dr. Usopay Cadar, my teacher in Philippine music, and I produced two films: one on Maranao kolintang music and the other on Maranao culture in general, based on the audiovisual footage we collected in 2008. This short essay describes how the screenings of these films were organized in March 2013 for diverse audiences in the Philippines, and what types of reactions and interactions transpired during the screenings. These screenings were organized both to obtain feedback about the content of the films from the caretakers and stakeholders of the documented tradition and to create a venue for interactions and collaborations to discuss the potential of audiovisual ethnography. Drawing from the analysis of the current project, I propose to regard film not as a fixed product but as a living and organic site that is open to commentaries and critiques, where changes can be made throughout the process. In this perspective, ‘filmmaking’ refers to the entire process of research, filming, editing and post-production activities.

  16. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows tha…

  17. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    Science.gov (United States)

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  18. Neural Development of Networks for Audiovisual Speech Comprehension

    Science.gov (United States)

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  19. Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2013-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their typically developing peers. To shed light on possible differences in the maturation of audiovisual speech integration, we tested younger (ages 6-12) and older (ages 13-18) children with and without ASD on a task indexing such multisensory integration. To do this, we used the McGurk effect, in which the pairing of incongruent auditory and visual speech tokens typically results in the perception of a fused percept distinct from the auditory and visual signals, indicative of active integration of the two channels conveying speech information. Whereas little difference was seen in audiovisual speech processing (i.e., reports of McGurk fusion) between the younger ASD and TD groups, there was a significant difference at the older ages. While TD controls exhibited an increased rate of fusion (i.e., integration) with age, children with ASD failed to show this increase. These data suggest arrested development of audiovisual speech integration in ASD. The results are discussed in light of the extant literature and necessary next steps in research. PMID:24218241

  20. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, Boris; Poel, Mannes; Truong, Khiet; Poppe, Ronald; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laugh- ter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio- visual laughter detection is
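    Decision-level (late) fusion of the kind named in this record's title can be sketched as a weighted combination of per-modality posteriors. The function name, weights, and threshold below are illustrative assumptions, not values from the paper.

```python
def fuse_decisions(p_audio, p_video, w_audio=0.6, threshold=0.5):
    """Decision-level fusion: each unimodal classifier outputs a posterior
    probability of laughter; fuse by weighted average, then threshold."""
    p = w_audio * p_audio + (1.0 - w_audio) * p_video
    return p >= threshold, p
```

The key design choice versus feature-level fusion is that each modality is classified independently, so one stream can still contribute when the other is noisy or missing.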

  1. Sur Quatre Methodes Audio-Visuelles (On Four Audiovisual Methods)

    Science.gov (United States)

    Porquier, Remy; Vives, Robert

    1974-01-01

    This is a critical examination of four audiovisual methods for the teaching of French as a Foreign Language. The methods have as a common basis the interrelationship of image, dialogue, situation, and give grammar priority over vocabulary. (Text is in French.) (AM)

  2. Medical student's perceptions of different teaching aids from a tertiary care teaching institution

    Directory of Open Access Journals (Sweden)

    Inderjit Singh Bagga

    2016-07-01

    Conclusions: Students' preferences and feedback need to be taken into consideration when using multimedia modalities to present lectures. Feasible student suggestions must be implemented to further improve the use of audio-visual aids during didactic lectures and make the teaching-learning environment better. [Int J Res Med Sci 2016; 4(7): 2788-2791]

  3. 14 CFR 141.41 - Flight simulators, flight training devices, and training aids.

    Science.gov (United States)

    2010-01-01

    ... hardware and software necessary to represent the aircraft in ground operations and flight operations; (3... and software for the systems installed that is necessary to simulate the aircraft in ground and flight... any audiovisual aid, projector, tape recorder, mockup, chart, or aircraft component listed in...

  4. Neural correlates of audiovisual integration in music reading.

    Science.gov (United States)

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

    Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as at later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to that of reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration.

  5. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  7. The effect of visual apparent motion on audiovisual simultaneity.

    Science.gov (United States)

    Kwon, Jinhwan; Ogawa, Ken-ichiro; Miyake, Yoshihiro

    2014-01-01

Visual motion information from dynamic environments is important in multisensory temporal perception. However, it is unclear how visual motion information influences the integration of multisensory temporal perceptions. We investigated whether visual apparent motion affects audiovisual temporal perception. Visual apparent motion is a phenomenon in which two flashes presented in sequence in different positions are perceived as continuous motion. Across three experiments, participants performed temporal order judgment (TOJ) tasks. Experiment 1 was a TOJ task conducted in order to assess audiovisual simultaneity during perception of apparent motion. The results showed that the point of subjective simultaneity (PSS) was shifted toward a sound-lead stimulus, and the just noticeable difference (JND) was reduced compared with a normal TOJ task with a single flash. This indicates that visual apparent motion affects audiovisual simultaneity and improves temporal discrimination in audiovisual processing. Experiment 2 was a TOJ task conducted in order to remove the influence of the amount of flash stimulation from Experiment 1. The PSS and JND during perception of apparent motion were almost identical to those in Experiment 1, but differed from those for successive perception when long temporal intervals were included between two flashes without motion. This showed that the result obtained under the apparent motion condition was unaffected by the amount of flash stimulation. Because apparent motion was produced by a constant interval between two flashes, the results may be accounted for by specific prediction. In Experiment 3, we eliminated the influence of prediction by randomizing the intervals between the two flashes. However, the PSS and JND did not differ from those in Experiment 1. It became clear that the results obtained for the perception of visual apparent motion were not attributable to prediction. Our findings suggest that visual apparent motion changes temporal ...
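The PSS and JND reported in TOJ studies like this one are typically estimated by fitting a cumulative Gaussian to the proportion of "visual first" responses across SOAs. A minimal sketch of that fit, using hypothetical response data rather than the authors' results:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    # Probability of a "visual first" response as a function of SOA (ms).
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical TOJ data: SOA in ms (negative = sound leads) and the
# observed proportion of "visual first" responses at each SOA.
soas = np.array([-240.0, -120.0, -60.0, 0.0, 60.0, 120.0, 240.0])
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first,
                            p0=[0.0, 100.0])

# JND for a cumulative Gaussian: half the 25%-75% interval, i.e. 0.674 * sigma.
jnd = norm.ppf(0.75) * sigma
```

A negative fitted PSS means sound must lead for subjective simultaneity; a smaller JND means finer temporal discrimination, which is the dependent measure the apparent-motion manipulation is said to improve.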

  8. Defining the object of work and conceptualizing TV Audiovisual Information Systems

    Directory of Open Access Journals (Sweden)

    Inés-Carmen Póveda-López

    2010-04-01

    The object of documentary work in television audiovisual information systems is defined on the basis of the definitions provided by leading authors and institutions for the concepts of audiovisual, moving image, sound, audiovisual documentation, audiovisual information and audiovisual document. Through quantification and analysis of the ideas and concepts most often repeated in the definitions examined, a definition of a "television moving-image document" is reached.

  9. Audiovisual system for recognition of commands

    Directory of Open Access Journals (Sweden)

    Alexander Ceballos

    2011-08-01

    We present the development of an automatic audiovisual speech recognition system focused on the recognition of commands. The audio signal was represented by Mel cepstral coefficients and their first and second temporal derivatives. To characterize the video signal, a set of high-level visual features was tracked automatically throughout each sequence. The algorithm was initialized automatically using color transformations and active contours driven by Gradient Vector Flow ("GVF snakes") on the lip region, while tracking relied on similarity measures between neighborhoods and morphological constraints defined in the MPEG-4 standard. We first present the design of an automatic speech recognition system using audio information only (ASR), based on Hidden Markov Models (HMMs) and an isolated-word approach; we then describe systems using video features only (VSR) and combined audio and video features (AVSR). Finally, the results of the three systems are compared on an in-house database in Spanish and French, and the influence of acoustic noise is examined, showing that the AVSR system is more robust than ASR and VSR.
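The audio front end described here (Mel cepstral coefficients plus their first and second temporal derivatives) can be sketched as follows. This is a simplified, textbook-style implementation with assumed framing parameters, not the authors' code:

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc_with_deltas(signal, sr, n_fft=512, hop=256, n_filters=26, n_ceps=13):
    # Frame the signal, window each frame, and take its power spectrum.
    frames = [signal[i:i + n_fft] * np.hamming(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Mel filterbank energies -> log -> DCT gives the cepstral coefficients.
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    ceps = dct(np.log(energies + 1e-10), type=2, axis=1, norm='ortho')[:, :n_ceps]
    # First and second temporal derivatives, stacked as extra features.
    d1 = np.gradient(ceps, axis=0)
    d2 = np.gradient(d1, axis=0)
    return np.hstack([ceps, d1, d2])   # shape: (n_frames, 3 * n_ceps)

# Example: features for one second of a 440 Hz tone at 16 kHz.
sr = 16000
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)
feats = mfcc_with_deltas(tone, sr)   # one 39-dimensional vector per frame
```

Each frame yields a 39-dimensional vector (13 cepstra + 13 deltas + 13 delta-deltas), the kind of observation sequence an isolated-word HMM recognizer would be trained on.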

  10. Hearing Aids

    Science.gov (United States)

    ... more in both quiet and noisy situations. Hearing aids help people who have hearing loss from damage ... your doctor. There are different kinds of hearing aids. They differ by size, their placement on or ...

  11. AIDS (image)

    Science.gov (United States)

    AIDS (acquired immune deficiency syndrome) is caused by HIV (human immunodeficiency virus), and is a syndrome that ... life-threatening illnesses. There is no cure for AIDS, but treatment with antiviral medicine can suppress symptoms. ...

  12. Aid Effectiveness

    DEFF Research Database (Denmark)

    Arndt, Channing; Jones, Edward Samuel; Tarp, Finn

    Controversy over the aggregate impact of foreign aid has focused on reduced form estimates of the aid-growth link. The causal chain, through which aid affects developmental outcomes including growth, has received much less attention. We address this gap by: (i) specifying a structural model of th...

  13. Audio-visual interactions in product sound design

    Science.gov (United States)

    Özcan, Elif; van Egmond, René

    2010-02-01

    Consistent product experience requires congruity between product properties such as visual appearance and sound. Therefore, for designing appropriate product sounds by manipulating their spectral-temporal structure, product sounds should preferably not be considered in isolation but as an integral part of the main product concept. Because visual aspects of a product are considered to dominate the communication of the desired product concept, sound is usually expected to fit the visual character of a product. We argue that this can be accomplished successfully only on the basis of a thorough understanding of the impact of audio-visual interactions on product sounds. Two experimental studies are reviewed to show audio-visual interactions on both perceptual and cognitive levels influencing the way people encode, recall, and attribute meaning to product sounds. Implications for sound design are discussed, defying the natural tendency of product designers to analyze the "sound problem" in isolation from the other product properties.

  14. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or may apply to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers ... of the speaker. Observers were required to report this after primary target categorization. We found a significant McGurk effect only in the natural speech and speech mode conditions, supporting the finding of Tuomainen et al. Performance in the secondary task was similar in all conditions, indicating ...

  15. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    This paper attempts to demonstrate the significance of the seven standards of textuality with special application to audiovisual English-Arabic translation. Ample, thoroughly analysed examples have been provided to support decision-making in audiovisual English-Arabic translation. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT). The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when a text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text is achieved when all aspects of cohesive devices are well accounted for pragmatically. This, combined with an adequate psycholinguistic element, gives a text optimal communicative value. Non-text is devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product. Keywords: Arabic audiovisual translation, coherence, cohesion, textuality

  16. Audiovisual correspondence between musical timbre and visual shapes

    OpenAIRE

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, features such as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, most studies have used simple stimuli (e.g., simple tones). In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against co...

  17. Audiovisual Ethnography of Philippine Music: A Process-oriented Approach

    OpenAIRE

    2013-01-01

    Audiovisual documentation has been an important part of ethnomusicological endeavors, but until recently it was treated primarily as a tool of preservation and/or documentation that supplements written ethnography, albeit with a few notable exceptions. The proliferation of inexpensive video equipment has encouraged an unprecedented number of scholars and students in ethnomusicology to become involved in filmmaking, but its potential as a methodology has not been fully explored. As a small step ...

  18. Neural correlates of quality during perception of audiovisual stimuli

    CERN Document Server

    Arndt, Sebastian

    2016-01-01

    This book presents a new approach to examining the perceived quality of audiovisual sequences. It uses electroencephalography to understand how user quality judgments are formed within a test participant, and what the physiologically-based implications of exposure to lower-quality media might be. The book redefines experimental paradigms for using EEG in the area of quality assessment so that they better suit the requirements of standard subjective quality testing; experimental protocols and stimuli are adjusted accordingly.

  19. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  20. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration.
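The N1 suppression effects reported in ERP studies like this one are usually quantified as the mean amplitude of the trial-averaged waveform within a component time window, after baseline correction. A sketch with simulated, noise-free epochs (the window limits and amplitudes are illustrative assumptions, not the study's values):

```python
import numpy as np

def component_amplitude(epochs, times, window, baseline=(-0.2, 0.0)):
    # epochs: (n_trials, n_samples); times: seconds relative to sound onset.
    base = (times >= baseline[0]) & (times < baseline[1])
    corrected = epochs - epochs[:, base].mean(axis=1, keepdims=True)
    erp = corrected.mean(axis=0)          # trial-averaged ERP
    mask = (times >= window[0]) & (times < window[1])
    return erp[mask].mean()               # mean amplitude in the window

# Simulated epochs: an N1-like negative deflection near 100 ms that is
# smaller (suppressed) in the audiovisual condition. Real data would be noisy.
times = np.arange(-0.2, 0.5, 0.002)
n1_shape = -5.0 * np.exp(-(((times - 0.1) / 0.03) ** 2))
epochs_a = np.tile(n1_shape, (20, 1))          # auditory-only trials
epochs_av = np.tile(0.6 * n1_shape, (20, 1))   # audiovisual trials

n1_window = (0.08, 0.12)   # assumed N1 latency window, in seconds
amp_a = component_amplitude(epochs_a, times, n1_window)
amp_av = component_amplitude(epochs_av, times, n1_window)
suppression = amp_av - amp_a   # positive = reduced (suppressed) N1
```

Comparing such window means across conditions (auditory-only vs. congruent vs. incongruent audiovisual) is what licenses statements like "N1 suppression only in the congruent condition".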

  1. Visual Target Localization, the Effect of Allocentric Audiovisual Reference Frame

    Directory of Open Access Journals (Sweden)

    David Hartnagel

    2011-10-01

    Visual allocentric reference frames (contextual cues) affect visual space perception (Diedrichsen et al., 2004; Walter et al., 2006). On the other hand, experiments have shown changes in visual perception induced by binaural stimuli (Chandler, 1961; Carlile et al., 2001). In the present study we investigated the effect of visual and audiovisual allocentric reference frames on visual localization and straight-ahead pointing. Participants faced a black part-spherical screen (92 cm radius). The head was kept aligned with the body. Participants wore headphones and a glove with motion-capture markers. A red laser point displayed straight ahead served as the fixation point. The visual target was a 100 ms green laser point. After a short delay, the green laser reappeared and participants had to localize the target with a trackball. Straight-ahead blind pointing was required before and after each series of 48 trials. The visual part of the bimodal allocentric reference frame was provided by a vertical red laser line (15° left or 15° right); the auditory part was provided by 3D sound. Five conditions were tested: no reference, visual reference (left/right), and audiovisual reference (left/right). Results show that the significant effect of the bimodal audiovisual reference does not differ from that of the visual reference alone.

  2. The development of the perception of audiovisual simultaneity.

    Science.gov (United States)

    Chen, Yi-Chuan; Shore, David I; Lewis, Terri L; Maurer, Daphne

    2016-06-01

    We measured the typical developmental trajectory of the window of audiovisual simultaneity by testing four age groups of children (5, 7, 9, and 11 years) and adults. We presented a visual flash and an auditory noise burst at various stimulus onset asynchronies (SOAs) and asked participants to report whether the two stimuli were presented at the same time. Compared with adults, children aged 5 and 7 years made more simultaneous responses when the SOAs were beyond ± 200 ms but made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity was located at the visual-leading side, as in adults, by 5 years of age, the youngest age tested. However, the window of audiovisual simultaneity became narrower and response errors decreased with age, reaching adult levels by 9 years of age. Experiment 2 ruled out the possibility that the adult-like performance of 9-year-old children was caused by the testing of a wide range of SOAs. Together, the results demonstrate that the adult-like precision of perceiving audiovisual simultaneity is developed by 9 years of age, the youngest age that has been reported to date.

  3. Audiovisual temporal fusion in 6-month-old infants.

    Science.gov (United States)

    Kopp, Franziska

    2014-07-01

    The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.

  4. Audiovisual integration for speech during mid-childhood: electrophysiological evidence.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-12-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.

  5. Audiovisual integration of speech in a patient with Broca's Aphasia.

    Science.gov (United States)

    Andersen, Tobias S; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia.

  6. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    Science.gov (United States)

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
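The distributional-learning idea in this record (acquiring categories from the statistics of a cue, with GMMs) can be illustrated with a small EM fit of a one-dimensional, two-component Gaussian mixture. The cue values are synthetic and the code is a toy stand-in for the simulations described, not the authors' model:

```python
import numpy as np

def fit_gmm_em(x, k=2, n_iter=200):
    # Deterministic init: means at evenly spaced percentiles of the data.
    mu = np.percentile(x, np.linspace(25, 75, k))
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and spreads.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Synthetic 1-D phonetic cue (e.g., voice onset time in ms) sampled from
# two categories; EM should recover clusters near the true means (10, 60).
rng = np.random.default_rng(1)
cue = np.concatenate([rng.normal(10, 5, 500), rng.normal(60, 8, 500)])
pi, mu, sigma = fit_gmm_em(cue)
```

The same machinery extends to joint auditory-visual cue vectors: fitting the mixture over multidimensional cues yields the cue weights and category boundaries that the paper uses to model audiovisual integration.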

  7. Audiovisual integration of emotional signals from others’ social interactions.

    Directory of Open Access Journals (Sweden)

    Lukasz Piwek

    2015-05-01

    Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.

  8. Temporal structure and complexity affect audio-visual correspondence detection

    Directory of Open Access Journals (Sweden)

    Rachel N Denison

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration.
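Detecting correspondence from shared temporal structure, as described in this record, can be sketched as a search for the crossmodal lag (up to 200 ms) that maximizes the correlation between two event streams. The stream statistics and sampling rate below are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

def best_lag_correlation(a, v, fs, max_lag_s=0.2):
    # Scan crossmodal lags up to max_lag_s and return the peak Pearson
    # correlation and the lag (in seconds) at which it occurs.
    # Positive lag means the visual stream is delayed relative to the audio.
    max_lag = int(max_lag_s * fs)
    best_r, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], v[lag:]
        else:
            x, y = a[-lag:], v[:len(v) + lag]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag / fs

# Hypothetical irregular (stochastic) event streams at 1 kHz: the visual
# stream repeats the auditory one delayed by 100 ms.
fs = 1000
rng = np.random.default_rng(0)
audio = (rng.random(2000) < 0.01).astype(float)    # sparse event train
delay = int(0.1 * fs)
visual = np.concatenate([np.zeros(delay), audio[:-delay]])

r_match, lag = best_lag_correlation(audio, visual, fs)
```

A matching stream yields a high peak correlation at the true lag, while an unrelated stream of the same statistics scores near chance; richer (irregular) patterns make that peak more distinctive, which is the paper's central observation.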

  9. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  10. The audiovisual deconstruction of the trailer

    Directory of Open Access Journals (Sweden)

    Patricia de Oliveira Iuva

    2010-06-01

    Beyond reflections on a given audiovisual production, this article aims to essay possible deconstructions of the hegemonic notion of advertising in the trailer. It is important to consider that the trailer is not restricted solely to the promotion of films, since audiovisual pieces with constructions similar to trailers can be observed in television, journalism, music videos, and so on. What should we call these audiovisual pieces, given that the term trailer, in principle, is restricted to pieces related to a film? One may therefore think that there are movements within the trailer that go beyond advertising and cinema. In this sense, it is possible to think that what justifies the occurrence of the trailer is not the existence of a film but the promise of the existence of a film, which may constitute an emerging form of language in audiovisual production. That is, one can glimpse in the trailer an audiovisual composition suited to a given global standard of production and, at the same time, identify fluid elements that escape preconceived models. The articulation of a given audiovisual language with references ranging from music-video production to the influence of analog-digital technologies allows us to glimpse a movement of aesthetic and political-economic autonomy in trailer production. It is within this theoretical-methodological context, between Christian Metz's semiology and Derrida's concept of deconstruction, that this paper addresses the discussion of cinema and the audiovisual within the trailer as an object.

  11. Application and design of audio-visual aids in orthodontics teaching for non-stomatology medical students

    Institute of Scientific and Technical Information of China (English)

    李若萱; 吕亚林; 王晓庚

    2012-01-01

    Objective This study discusses the effects of audio-visual aids in stomatology teaching during two credit hours of undergraduate orthodontic training for students majoring in preventive medicine. Methods We selected 85 students from the 2007 and 2008 matriculating classes of the preventive medicine department of Capital Medical University. Using the eight-year orthodontic textbook as our reference, we taught the theory through multimedia in the first class hour and implemented situational role-play teaching in the practice class hour. A follow-up survey was carried out to obtain students' feedback on the combined teaching method. Results Our survey showed that the majority of students understood the goal of the method and believed their interest in learning orthodontics was significantly enhanced. In fact, they became fascinated by orthodontics in the limited time of the study. Conclusions We concluded that the integration of object teaching with situational teaching is of great assistance in orthodontic training; however, the integration must be carefully prepared to ensure student participation, maximize the benefits of integration and improve the course from direct feedback.

  12. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    36 CFR Part 1237 (Parks, Forests, and Public Property; National Archives and Records Administration records management: audiovisual, cartographic, and related records), § 1237.18, sets the environmental standards for audiovisual records storage.

  13. Search behavior of media professionals at an audiovisual archive: A transaction log analysis

    NARCIS (Netherlands)

    Huurnink, B.; Hollink, L.; van den Heuvel, W.; de Rijke, M.

    2010-01-01

    Finding audiovisual material for reuse in new programs is an important activity for news producers, documentary makers, and other media professionals. Such professionals are typically served by an audiovisual broadcast archive. We report on a study of the transaction logs of one such archive.

  14. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  15. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  16. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    36 CFR Part 1237 (Parks, Forests, and Public Property; National Archives and Records Administration records management: audiovisual, cartographic, and related records), § 1237.26, specifies the materials and processes agencies must use to create audiovisual records.

  17. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    36 CFR Part 1237 (Parks, Forests, and Public Property; National Archives and Records Administration records management: audiovisual, cartographic, and related records), § 1237.20, covers special considerations in the maintenance of audiovisual records.

  18. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  19. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, in a condition where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about the processing of multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs during aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions.

  20. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  1. Twice upon a time: multiple concurrent temporal recalibrations of audiovisual speech.

    Science.gov (United States)

    Roseboom, Warrick; Arnold, Derek H

    2011-07-01

    Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.

  2. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    36 CFR Part 1237 (Parks, Forests, and Public Property; National Archives and Records Administration records management: audiovisual, cartographic, and related records), § 1237.16, governs how agencies store audiovisual records: storage facilities must comply with 36 CFR part 1234, with further conditions for the storage of permanent and long-term temporary records.

  3. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  4. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  5. The Iroquois, a Bibliography of Audio-Visual Materials--With Supplement. (Title Supplied).

    Science.gov (United States)

    Kellerhouse, Kenneth; And Others

    Approximately 25 sources of audiovisual materials pertaining to the Iroquois and other northeastern American Indian tribes are listed according to type of audiovisual medium. Among the less-common media are recordings of Iroquois music and do-it-yourself reproductions of Iroquois artifacts. Prices are given where applicable. (BR)

  6. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  7. A Management Review and Analysis of Purdue University Libraries and Audio-Visual Center.

    Science.gov (United States)

    Baaske, Jan; And Others

    A management review and analysis was conducted by the staff of the libraries and audio-visual center of Purdue University. Not only were the study team and the eight task forces drawn from all levels of the libraries and audio-visual center staff, but a systematic effort was sustained through inquiries, draft reports and open meetings to involve…

  8. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, Guido; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain.

  9. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e., different brain responses to congruent and incongruent stimuli, were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli would be superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters.

  10. Brand Aid

    DEFF Research Database (Denmark)

    Richey, Lisa Ann; Ponte, Stefano

    A critical account of the rise of celebrity-driven "compassionate consumption". Cofounded by the rock star Bono in 2006, Product RED exemplifies a new trend in celebrity-driven international aid and development, one explicitly linked to commerce, not philanthropy. Brand Aid offers a deeply informed...

  11. Foreign aid

    DEFF Research Database (Denmark)

    Tarp, Finn

    2008-01-01

    Foreign aid has evolved significantly since the Second World War in response to a dramatically changing global political and economic context. This article (a) reviews this process and associated trends in the volume and distribution of foreign aid; (b) reviews the goals, principles...

  12. Audio-visual perception of compressed speech by profoundly hearing-impaired subjects.

    Science.gov (United States)

    Drullman, R; Smoorenburg, G F

    1997-01-01

    For many people with profound hearing loss conventional hearing aids give only little support in speechreading. This study aims at optimizing the presentation of speech signals in the severely reduced dynamic range of the profoundly hearing impaired by means of multichannel compression and multichannel amplification. The speech signal in each of six 1-octave channels (125-4000 Hz) was compressed instantaneously, using compression ratios of 1, 2, 3, or 5, and a compression threshold of 35 dB below peak level. A total of eight conditions were composed in which the compression ratio varied per channel. Sentences were presented audio-visually to 16 profoundly hearing-impaired subjects and syllable intelligibility was measured. Results show that all auditory signals are valuable supplements to speechreading. No clear overall preference is found for any of the compression conditions, but relatively high compression ratios (> 3-5) have a significantly detrimental effect. Inspection of the individual results reveals that compression may be beneficial for one subject.

  13. La regulación audiovisual: argumentos a favor y en contra The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    Full Text Available The article analyzes the effectiveness of audiovisual regulation and assesses the various arguments for and against the existence of broadcasting authorities at the state level. The debate over the need for such a body in Spain is still active. Most European Union countries have created competent authorities in this field, such as OFCOM in the United Kingdom and the CSA in France. In Spain, audiovisual regulation is limited to regional bodies, such as the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l'Audiovisual de Catalunya (CAC), whose model is also addressed in this article.

  14. Aula virtual y presencial en aprendizaje de comunicación audiovisual y educación Virtual and Real Classroom in Learning Audiovisual Communication and Education

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2010-10-01

    Full Text Available The blended teaching-learning model seeks to use information and communication technologies (ICTs) to guarantee an education better adjusted to the European Higher Education Area (EHEA). The following research objectives were formulated: 1) To find out how teacher-training college students assess the WebCT virtual classroom as an aid to face-to-face teaching. 2) To identify the advantages of students' use of WebCT and ICTs in the case study "Values and counter-values transmitted by television series watched by children and adolescents". The research was carried out with a sample of 205 students of the University of La Rioja enrolled in the course "Technologies Applied to Education". Qualitative and quantitative content analysis was used for the objective, systematic and quantitative description of the manifest content of the documents. The results show that students rate the communication, content and assessment tools favorably. The conclusion is that WebCT and ICTs support the methodological innovation of the EHEA based on student-centered learning. Students demonstrate their audiovisual competence in analyzing values and in expressing themselves through audiovisual documents in multimedia formats, bringing a new, innovative and creative sense to the educational use of television series.

  15. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    Full Text Available After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.

  16. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute in this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity do not only depend on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions.

  17. The Coverage of Tragedies in the Audiovisual Media

    Directory of Open Access Journals (Sweden)

    Carlos Portas

    2013-11-01

    Full Text Available News about tragedies or disasters poses one of the biggest challenges for journalists. These are extreme situations in which they must combine the inalienable right to truthful information with other inalienable rights, including respect for the privacy of people who are suffering. Here the role of the professionals is crucial, but so is that of the audiovisual media companies. Journalists should understand that people involved in a tragic event react in public, but that does not mean they are making their reactions public. A good reporter knows how to discern what is news, what to ask, how and when to ask it and, if appropriate, how to broadcast it.

  18. Sistemas de Registro Audiovisual del Patrimonio Urbano (SRAPU)

    OpenAIRE

    Conles, Liliana Eva

    2006-01-01

    The SRAPU system is a film-survey method designed to build an interactive database of the urban landscape. On this basis it pursues the formulation of criteria ordered in terms of flexibility and economic efficiency, efficiency in data handling, and democratization of information. SRAPU is conceived as an audiovisual record of material and intangible heritage, both in its singularity and as a historical and natural whole. Its conception involves the pro...

  19. Proyecto educativo : herramientas de educación audiovisual

    OpenAIRE

    Boza Osuna, Luis

    2005-01-01

    The aim of this work is to examine the need to inform and educate families, students and teachers in audiovisual education. In 1999, Telespectadores Asociados de Cataluña (TAC) decided to make a firm commitment to engaging with the educational world, in response to the evident need of educational institutions to confront the negative effects of television on students. School administrators and teaching professionals are fully aware of the competition...

  20. Claves para reconocer los niveles de lectura crítica audiovisual en el niño Keys to Recognizing the Levels of Critical Audiovisual Reading in Children

    Directory of Open Access Journals (Sweden)

    Jacqueline Sánchez Carrero

    2012-03-01

    Full Text Available Based on the results of several projects carried out with children and adolescents, we can state that knowledge of the world of production and broadcasting of audiovisual messages aids the acquisition of critical media skills. This article combines three media education experiences in Venezuela, Colombia and Spain driven by a critical reception approach. It presents indicators for determining the level of critical audiovisual reading in children aged 8-12, built from intervention processes through media literacy workshops. The groups were instructed about the audiovisual universe, learning how audiovisual content is produced and how to analyze, deconstruct and recreate it. The article first addresses the evolving concept of media education, then describes the experiences shared by the three countries, and finally focuses on the indicators that measure the level of critical reading. It closes with a reflection on the need for media education in the age of multiliteracy. Studies revealing the keys to recognizing how critical a child is when viewing content on different digital media are rare; this is a fundamental issue, since it shows what level of comprehension children have and what level they acquire after training in media education.

  1. War on Film: Military History Education. Video tapes, Motion Pictures, and Related Audiovisual Aids

    Science.gov (United States)

    1987-01-01

    Catalog abbreviations include Preparedness Films; FB (Film Bulletin); MF (Miscellaneous Film); RT (Recordings on Tape); SFR (Staff Film Reports); STVM (Soldier's TV Magazine); and TAR (The Army Reports). Sample entries include "The First SALT Talks" and "Kentucky Rifle in the American Revolution" (15 min.).

  2. Hearing Aid

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A man realized that he needed to purchase a hearing aid, but he was unwilling to spend much money. "How much do they run?" he asked the clerk. "That depends," said the salesman. "They run from 2 to 2000."

  3. Hearing Aids

    Science.gov (United States)

    ... slightly different from the ITC and is nearly hidden in the ear canal. Both canal hearing aids ...

  4. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

    Full Text Available Second language acquisition cannot take place without exposure to language input. With regard to this, the present research aimed at providing empirical evidence about low and upper-intermediate language learners' preferred types of audiovisual programs and their language proficiency development outside the classroom. To this end, 60 language learners (30 low level and 30 upper-intermediate level) were asked to have exposure to their preferred types of audiovisual program(s) outside the classroom and to keep a diary of the amount and type of exposure. The data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news. To determine which group improved its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  5. Valores occidentales en el discurso publicitario audiovisual argentino

    Directory of Open Access Journals (Sweden)

    Isidoro Arroyo Almaraz

    2012-04-01

    Full Text Available This article analyzes Argentine audiovisual advertising discourse, aiming to identify the social values it communicates most prominently and their possible link to the values characteristic of postmodern Western society. To this end, the frequency of appearance of social values was analyzed in 28 commercials from different advertisers. The "Seven/Seven" model (seven deadly sins and seven cardinal virtues) was used for the analysis, on the view that traditional values are heirs of the virtues and sins that advertising uses to address consumption-related needs. Argentine audiovisual advertising promotes ideas related to the virtues and sins through the behavior of the characters in its audiovisual narratives. The results show a higher frequency of social values characterized as sins than of those characterized as virtues, since advertising transforms sins into virtues that stimulate desire, favor consumption, and strengthen brand learning. Finally, the results prompt a reflection on the social uses and reach of advertising discourse.

  6. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

    Full Text Available Television can be a space where sound and image meet in a dispositive that makes the rhythm-image possible, extending Gilles Deleuze's theory of the image, originally proposed for cinema. The rhythm-image would simultaneously combine characteristics of the movement-image and the time-image, embodied in the construction of postmodern images, in audiovisual products that are not necessarily narrative yet are popular. Films, video games, music videos, and vignettes in which the music drives the images allow a more sensorial reading. The audiovisual as music-image thus opens onto a new form of perception beyond the traditional textual one, the product of the interaction among rhythm, text, and dispositive. The time of moving images in the audiovisual is inevitably and primarily tied to sound. They aggregate non-narrative possibilities that unfold, most of the time, according to the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Sem Destino (Easy Rider, 1969), Assassinos por Natureza (Natural Born Killers, 1994), and Corra Lola Corra (Run Lola Run, 1998).

  7. Information-Driven Active Audio-Visual Source Localization.

    Directory of Open Access Journals (Sweden)

    Niclas Schult

    Full Text Available We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source's position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot's mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system's performance and discuss possible areas of application.
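The particle-filter scheme this record describes, combining bearing-only measurements taken from different robot positions, can be sketched in a few lines. A minimal illustration only: the 2-D world, Gaussian bearing-noise model, jitter magnitudes, and systematic resampling rule are assumptions for the sketch, not details from the record.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_particles(particles, weights, robot_pos, measured_bearing, noise_std=0.2):
    """One bearing-only particle-filter update.

    particles: (N, 2) candidate source positions (x, y)
    weights:   (N,) particle weights, summing to 1
    robot_pos: (2,) current robot position
    measured_bearing: observed direction to the source, in radians
    """
    # Predicted bearing from the robot to each particle
    dx = particles[:, 0] - robot_pos[0]
    dy = particles[:, 1] - robot_pos[1]
    predicted = np.arctan2(dy, dx)
    # Wrap the angular error into [-pi, pi]
    err = (predicted - measured_bearing + np.pi) % (2 * np.pi) - np.pi
    # Reweight with a Gaussian bearing-noise model, then renormalize
    weights = weights * np.exp(-0.5 * (err / noise_std) ** 2)
    weights = weights / weights.sum()
    # Resample (with a little jitter) when the effective sample size collapses
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx] + rng.normal(0, 0.05, particles.shape)
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

Repeated calls from different robot positions shrink the particle cloud onto the source; an information-gain action selector, as in the record, would choose the next robot move that most reduces the cloud's entropy.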

  8. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical on all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  9. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

    Full Text Available Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies, etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study will focus on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach evidences the polysemiotic nature of the translation process. In audiovisual translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles which did not successfully convey the compliment expressed in the source text; we also analyze the reasons for this, namely differences in register, Culture Specific Items, and repetitions. These differences lead to a different portrayal/identity/perception of the main character between the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  10. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn eLeung

    2016-01-01

    Full Text Available The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses to auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical on all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  11. Video genre categorization and representation using audio-visual information

    Science.gov (United States)

    Ionescu, Bogdan; Seyerlehner, Klaus; Rasche, Christoph; Vertan, Constantin; Lambert, Patrick

    2012-04-01

    We propose an audio-visual approach to video genre classification using content descriptors that exploit audio, color, temporal, and contour information. Audio information is extracted at block-level, which has the advantage of capturing local temporal information. At the temporal structure level, we consider action content in relation to human perception. Color perception is quantified using statistics of color distribution, elementary hues, color properties, and relationships between colors. Further, we compute statistics of contour geometry and relationships. The main contribution of our work lies in harnessing the descriptive power of the combination of these descriptors in genre classification. Validation was carried out on over 91 h of video footage encompassing 7 common video genres, yielding average precision and recall ratios of 87% to 100% and 77% to 100%, respectively, and an overall average correct classification of up to 97%. Also, experimental comparison as part of the MediaEval 2011 benchmarking campaign demonstrated the efficiency of the proposed audio-visual descriptors over other existing approaches. Finally, we discuss a 3-D video browsing platform that displays movies using feature-based coordinates and thus regroups them according to genre.
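The descriptor-combination idea above can be illustrated with a toy early-fusion pipeline: normalize each modality's descriptor vector, concatenate, and classify. This is only a sketch of the general technique, not the authors' system; the modality names and the nearest-centroid classifier are placeholders.

```python
import numpy as np

def fuse_descriptors(audio, color, temporal, contour):
    """Early fusion: z-normalize each modality's descriptor, then concatenate."""
    parts = []
    for d in (audio, color, temporal, contour):
        d = np.asarray(d, dtype=float)
        std = d.std() or 1.0  # avoid divide-by-zero on constant descriptors
        parts.append((d - d.mean()) / std)
    return np.concatenate(parts)

class NearestCentroid:
    """Minimal genre classifier over fused descriptor vectors."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        # One mean descriptor vector per genre label
        self.centroids_ = {c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
                           for c in self.classes_}
        return self

    def predict(self, x):
        # Assign the genre whose centroid is closest in Euclidean distance
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))
```

Per-modality normalization before concatenation keeps a modality with large numeric range (e.g. raw audio statistics) from dominating the distance computation.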

  12. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, little is known about how temporal factors affect audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration changed from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration was similar to that for younger adults as SOA expanded; however, older adults showed significantly delayed onset for the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA expanded, especially in the peak latency for V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  13. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  14. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  15. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  16. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
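The capacity measure referenced here (Townsend and Nozawa, 1995) compares the integrated hazard of the audiovisual RT distribution against the sum of the auditory-only and visual-only hazards, with C(t) > 1 indicating efficient integration. A minimal sketch from raw RT samples follows; the log-survivor estimator and the clipping rule are common conveniences assumed for the sketch, not necessarily the estimator the authors used.

```python
import numpy as np

def integrated_hazard(rts, t):
    """Estimate H(t) = -log S(t) from RT samples via the empirical survivor function."""
    rts = np.asarray(rts, dtype=float)
    survivor = np.mean(rts > t)
    # Clip so a fully finished distribution does not produce log(0)
    survivor = max(survivor, 1.0 / (len(rts) + 1))
    return -np.log(survivor)

def capacity(av_rts, a_rts, v_rts, t):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)).

    C(t) > 1: efficient (super-capacity) integration;
    C(t) < 1: inefficient (limited-capacity) integration.
    """
    denom = integrated_hazard(a_rts, t) + integrated_hazard(v_rts, t)
    return integrated_hazard(av_rts, t) / denom if denom > 0 else np.nan
```

Evaluating C(t) over a grid of t values yields the dynamic efficiency profile the record describes, rather than a single accuracy number.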

  17. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months, with an increase in the time spent looking at articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturation are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of the audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead are associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life.

  18. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  19. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) differs between a selective attention condition and a divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that responses to bimodal audiovisual stimuli were faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between the unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger under divided attention than under selective attention in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions associated with sustained attention deficits, such as attention-deficit hyperactivity disorder.

  20. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  1. Neural Dynamics of Audiovisual Speech Integration under Variable Listening Conditions: An Individual Participant Analysis

    Directory of Open Access Journals (Sweden)

    Nicholas eAltieri

    2013-09-01

    Full Text Available Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend & Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude in lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.

  2. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and the results of an experiment from the pilot doctoral research "audiovisual editing language for the interactive documentary", which defends the thesis that interactive features exist in the audio and video editing of a film, with editing itself acting as an agent of interactivity. The search for interactive audiovisual formats is present in international investigations, but mostly through a technological lens. The paper proposes possible formats for interactive audiovisual production in film, video, television, computers, and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  3. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    ...'s area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing...

  4. Types of Hearing Aids

    Science.gov (United States)

    ... some features for hearing aids? What are hearing aids? Hearing aids are sound-amplifying devices designed to ...

  5. Atypical audiovisual speech integration in infants at risk for autism.

    Directory of Open Access Journals (Sweden)

    Jeanne A Guiraud

    Full Text Available The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type of syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display compared with the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated ANOVA: displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated ANOVA: displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated ANOVA: displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

  6. Handicrafts production: documentation and audiovisual dissemination as sociocultural appreciation technology

    Directory of Open Access Journals (Sweden)

    Luciana Alvarenga

    2016-01-01

    Full Text Available This paper presents the results of a scientific research, technology, and innovation project in the creative-economy sector, conducted from January 2014 to January 2015, which aimed to document and publicize the artisans and handicraft production of Vila de Itaúnas, ES, Brazil. The process began with initial conversations, followed by the planning and holding of participatory workshops for documenting and disseminating, in audiovisual form, the production of handicrafts and its relation to biodiversity and local culture. The initial objective was to promote spaces of expression and diffusion of knowledge among and for the local population, while also reaching a regional, state, and national public. Throughout the process, it was found that the participatory workshops and the collective production of a website for disclosing practices and products contributed to the development and sociocultural recognition of artisans and craft in the region.

  7. The Digital Turn in the French Audiovisual Model

    Directory of Open Access Journals (Sweden)

    Olivier Alexandre

    2016-07-01

    Full Text Available This article deals with the digital turn in the French audiovisual model. An organizational and legal system has evolved with changing technology and economic forces over the past thirty years. During the 1980s, the high-income television industry served as the key element compensating for a value economy shifting from movie theaters to domestic screens and personal devices. However, growing competition in the TV sector and the rise of tech companies have initiated a disruption process. A challenged French conception of copyright, the weakened position of TV channels, and the scaling of the content market all now call into question the sustainability of the French model in the digital era.

  8. Innovación y competencia en la industria audiovisual

    OpenAIRE

    Motta, Jorge José

    2015-01-01

    This article analyses the relationship between innovation and the forms and intensity of business competition in the audiovisual market, with special reference to the film industry. To that end, it examines the economic characteristics of the main technologies and the forms of production organization typical of the sector, and analyses how they affect the innovation-competition relationship. It also examines the importance of culture and of the...

  9. Artimate: an articulatory animation framework for audiovisual speech synthesis

    CERN Document Server

    Steiner, Ingmar

    2012-01-01

    We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data is applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated in an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application to illustrate its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow.

  10. 78 FR 63492 - Certain Audiovisual Components and Products Containing the Same; Notice of Commission...

    Science.gov (United States)

    2013-10-24

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Audiovisual Components and Products Containing the Same; Notice of Commission Determination To Review a Final Initial Determination Finding a Violation of Section 337 in Its...

  11. The development of sensorimotor influences in the audiovisual speech domain: Some critical questions

    Directory of Open Access Journals (Sweden)

    Bahia eGuellaï

    2014-08-01

    Full Text Available Speech researchers have long been interested in how auditory and visual speech signals are integrated, and recent work has revived interest in the role of speech production with respect to this process. Here we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.

  12. The development of sensorimotor influences in the audiovisual speech domain: some critical questions.

    Science.gov (United States)

    Guellaï, Bahia; Streri, Arlette; Yeung, H Henny

    2014-01-01

    Speech researchers have long been interested in how auditory and visual speech signals are integrated, and recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.

  13. On the Role of Crossmodal Prediction in Audiovisual Emotion Perception

    Directory of Open Access Journals (Sweden)

    Sarah eJessen

    2013-07-01

    Full Text Available Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency with which others’ emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of crossmodal prediction. In emotion perception, as in most other settings, visual information precedes the auditory information, and this visual lead can facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, it has not been addressed so far in audiovisual emotion perception. Based on the current state of the art in (a) crossmodal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow for a more reliable prediction of auditory information compared to non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set that shows an inverse correlation between the N1 response in the EEG and the duration of visual emotional, but not non-emotional, information. If the assumption that emotional content allows for more reliable predictions can be corroborated in future studies, crossmodal prediction is a crucial factor in our understanding of multisensory emotion perception.
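
    The inverse correlation reported in this record is an ordinary Pearson correlation between N1 response and the duration of the visual information. A minimal sketch with invented numbers (not the study's data):

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation coefficient of two equal-length samples."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # invented data: longer emotional visual lead -> smaller N1 response
    duration_ms = [100, 200, 300, 400, 500]
    n1_amplitude = [5.0, 4.2, 3.1, 2.5, 1.9]
    r = pearson_r(duration_ms, n1_amplitude)  # strongly negative
    ```

    A negative r of this kind is what the re-analysis describes for emotional, but not non-emotional, visual information.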

  14. A LINGUAGEM AUDIOVISUAL COMO PRÁTICA ESCOLAR

    Directory of Open Access Journals (Sweden)

    Simone Berle

    2012-01-01

    Full Text Available This essay discusses the relationship between cinema and school in order to address audiovisual language and its implications for school practices. Even with access to audiovisual materials and resources, cinema appears in everyday school life as mere pedagogical support, given the hierarchization of languages and their reduction to reading and writing in children's education. To discuss the necessary pluralization of experiences with languages as a school practice, the essay draws on Jorge Larrosa's proposal to replace the theory/practice pair with the experience/meaning pair in thinking about education, and on Paul Ricoeur's conception of the human being as historical and a producer of history. Our perspective as educators and childhood researchers questions the naturalized presence of audiovisual language in children's education, highlighting the disregard for the plurality of media with which children can interact today. The essay does not call for including cinema in curricula as a field of knowledge to be treated as "content", but points to the importance of broadening learning in everyday school life by calling for the pluralization of learning processes and the enrichment of children's linguistic repertoires.

  15. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that the audiovisual temporal binding mechanisms for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.
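
    The asymmetric temporal binding window described in this record can be caricatured as a simple decision rule. The window widths below are hypothetical placeholders, not the study's estimates; they merely encode that auditory-leading pairs are bound over a narrower range than visual-leading pairs:

    ```python
    def perceived_simultaneous(soa_ms, auditory_lead_window=80, visual_lead_window=200):
        """Return True if an audiovisual pair with the given stimulus-onset
        asynchrony (negative = auditory leading, positive = visual leading)
        falls inside an asymmetric temporal binding window.
        Window widths here are hypothetical placeholders."""
        if soa_ms < 0:  # auditory-leading: narrower window
            return soa_ms >= -auditory_lead_window
        return soa_ms <= visual_lead_window  # visual-leading: wider window
    ```

    Under this toy rule, a 150 ms visual lead is still bound while a 150 ms auditory lead is not, mirroring the asymmetry the study builds on.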

  16. Trayectoria, educación universitaria y aprendizaje laboral en la producción audiovisual

    OpenAIRE

    Fernández Berdaguer, María Leticia

    2006-01-01

    This paper analyses the influence of university education on the work of professionals in the audiovisual field. To that end, it describes aspects of the career trajectories of actors in the audiovisual field and their perception of the importance of university education and on-the-job learning in professional performance. Facultad de Bellas Artes

  17. Brand Aid

    DEFF Research Database (Denmark)

    Richey, Lisa Ann; Ponte, Stefano

    2011-01-01

    ... activists, scholars and venture capitalists, discusses the pros and cons of changing the world by ‘voting with your dollars’. Lisa Ann Richey and Stefano Ponte (Professor at Roskilde University and Senior Researcher at DIIS respectively), authors of Brand Aid: Shopping Well to Save the World, highlight how...

  18. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  19. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
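
    Amplitude-based connectivity of the kind this record favors can be sketched as correlating the slow amplitude envelopes of two band-limited signals. This toy version uses rectification and a moving average instead of the Hilbert transform typically applied in MEG pipelines, and the signals are synthetic:

    ```python
    import math

    def envelope(sig, win=11):
        """Crude amplitude envelope: full-wave rectify, then moving-average smooth."""
        rect = [abs(v) for v in sig]
        half = win // 2
        out = []
        for i in range(len(rect)):
            seg = rect[max(0, i - half):i + half + 1]
            out.append(sum(seg) / len(seg))
        return out

    def pearson_r(x, y):
        """Pearson correlation coefficient of two equal-length samples."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
        return num / den

    # two synthetic 20 Hz "beta-band" signals sharing a common slow amplitude modulation
    fs = 100.0
    t = [i / fs for i in range(500)]
    mod = [1 + 0.5 * math.sin(2 * math.pi * 0.5 * s) for s in t]
    sig_a = [m * math.sin(2 * math.pi * 20 * s) for m, s in zip(mod, t)]
    sig_b = [m * math.cos(2 * math.pi * 20 * s) for m, s in zip(mod, t)]
    r = pearson_r(envelope(sig_a), envelope(sig_b))  # high: envelopes co-vary
    ```

    A phase-based metric would instead compare instantaneous phases, which is why, as the authors note, the two families of metrics can disagree about the same data.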

  20. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration.

  1. Developmental Trajectory of Audiovisual Speech Integration in Early Infancy. A Review of Studies Using the McGurk Paradigm

    Directory of Open Access Journals (Sweden)

    Tomalski Przemysław

    2015-10-01

    Full Text Available Apart from their remarkable phonological skills, young infants prior to their first birthday show the ability to match the mouth articulation they see with the speech sounds they hear. They are able to detect audiovisual conflict in speech and to selectively attend to the articulating mouth depending on audiovisual congruency. Early audiovisual speech processing is an important aspect of language development, related not only to phonological knowledge, but also to language production during subsequent years. This article reviews recent experimental work delineating the complex developmental trajectory of audiovisual mismatch detection. The central issue is the role of age-related changes in visual scanning of audiovisual speech and the corresponding changes in neural signatures of audiovisual speech processing in the second half of the first year of life. This phenomenon is discussed in the context of recent theories of perceptual development and existing data on the neural organisation of the infant ‘social brain’.

  2. Tactile Aids

    Directory of Open Access Journals (Sweden)

    Mohtaramossadat Homayuni

    1996-04-01

    Full Text Available Tactile aids, which translate sound waves into vibrations that can be felt by the skin, have been used for decades by people with severe/profound hearing loss to enhance speech/language development and improve speechreading. The development of tactile aids dates from the efforts of Goults and his co-workers in the 1920s, although the power supply was so voluminous that the devices were difficult to carry, especially for children; they were too large and heavy to be taken outside the laboratory, and their application was restricted to experimental use. Great advances have since been made in producing this instrument, and numerous models are now available in markets around the world.

  3. Negotiating Aid

    DEFF Research Database (Denmark)

    Whitfield, Lindsay; Fraser, Alastair

    2011-01-01

    This article presents a new analytical approach to the study of aid negotiations. Building on existing approaches but trying to overcome their limitations, it argues that factors outside of individual negotiations (or the ‘game’ in game-theoretic approaches) significantly affect the preferences of actors, the negotiating strategies they fashion, and the success of those strategies. This approach was employed to examine and compare the experiences of eight countries: Botswana, Ethiopia, Ghana, Mali, Mozambique, Rwanda, Tanzania and Zambia. The article presents findings from these country studies, which investigated the strategies these states have adopted in talks with aid donors, the sources of leverage they have been able to bring to bear in negotiations, and the differing degrees of control that they have been able to exercise over the policies agreed in negotiations and those implemented...

  4. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    Science.gov (United States)

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200 ms over a wide central area, the second at 280-320 ms over the fronto-central area, and a third at 380-440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.
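
    Component amplitudes in ERP analyses of this kind are typically quantified as the mean voltage within a time window, such as the 140-200 ms window above. A minimal sketch with an invented waveform:

    ```python
    def mean_amplitude(erp, times_ms, t_start, t_end):
        """Mean voltage of an ERP waveform within [t_start, t_end] ms (inclusive)."""
        vals = [v for v, t in zip(erp, times_ms) if t_start <= t <= t_end]
        return sum(vals) / len(vals)

    # invented waveform sampled every 50 ms
    times_ms = [0, 50, 100, 150, 200, 250]
    erp_uv = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    m = mean_amplitude(erp_uv, times_ms, 140, 200)  # averages the 150 and 200 ms samples
    ```

    Comparing such window means across conditions (reliable vs. inconsistent spatial relationships) is what "ERP amplitudes ... are larger" refers to in the record.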

  5. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

    Science.gov (United States)

    Lee, HweeLing; Noppeney, Uta

    2011-08-01

    Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.

  6. Desarrollo de una prueba de comprensión audiovisual

    Directory of Open Access Journals (Sweden)

    Casañ Núñez, Juan Carlos

    2016-06-01

    Full Text Available This article is part of doctoral research studying the use of audiovisual comprehension questions embedded in the video image as subtitles and synchronized with the relevant video fragments. A theoretical framework describing this technique (Casañ Núñez, 2015b) and an example within a teaching sequence (Casañ Núñez, 2015a) have been published previously. The present work details the process of planning, designing, and piloting an audiovisual comprehension test with two variants, to be administered together with other instruments in quasi-experimental studies with control and treatment groups. The main aims are to determine whether subtitling the questions facilitates comprehension, whether it increases the time students spend looking toward the screen, and to learn the treatment group's opinion of this technique. Six studies were carried out in the piloting phase. Forty-one students of Spanish as a foreign language (ELE) took part in the final pilot study (twenty-two in the control group and nineteen in the treatment group). Observation of the informants during test administration and the subsequent scoring suggested that the instructions on the test structure, the presentations of the input texts, the explanation of how the subtitled questions work for the experimental group, and the wording of the items were comprehensible. The data from the two variants of the instrument were submitted to facility, discrimination, reliability, and descriptive analyses. Correlations between the tests and two tasks from a listening comprehension exam were also calculated. The results showed that both versions of the test were ready to be administered.
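
    The facility and discrimination analyses mentioned in this record have standard textbook forms. A minimal sketch with invented 0/1 item scores, using the upper-lower group method for discrimination (the study's actual indices may differ):

    ```python
    def facility(item_scores):
        """Item facility: proportion of test-takers scoring the item correct (0/1)."""
        return sum(item_scores) / len(item_scores)

    def discrimination(item_scores, total_scores, fraction=1 / 3):
        """Upper-lower discrimination index D = p(upper group) - p(lower group)."""
        n = len(item_scores)
        k = max(1, int(n * fraction))
        order = sorted(range(n), key=lambda i: total_scores[i])
        low = [item_scores[i] for i in order[:k]]    # weakest test-takers
        high = [item_scores[i] for i in order[-k:]]  # strongest test-takers
        return sum(high) / k - sum(low) / k

    # invented scores for one item across nine test-takers
    item = [0, 0, 0, 1, 0, 1, 1, 1, 1]
    totals = [10, 12, 14, 16, 18, 20, 22, 24, 26]  # overall test scores
    p = facility(item)               # proportion answering correctly
    d = discrimination(item, totals)  # how well the item separates high from low scorers
    ```

    Items with very low or very high facility, or low discrimination, are the ones flagged for revision in piloting phases like the one described.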

  7. Teaching AIDS.

    Science.gov (United States)

    Short, R V

    1989-06-01

    This article reviews a peer-group Acquired Immunodeficiency Syndrome (AIDS) educational program at a university in Australia. Studies in the US have shown that most adolescents, although sexually active, do not believe they are likely to become infected with the Human Immunodeficiency Virus, and therefore do not attempt to modify their sexual behavior. A first step in educating students is to introduce them to condoms and impress upon them the fact that condoms should be used at the beginning of all sexual relationships, whether homosexual or heterosexual. In this program, third-year medical students were targeted, as they are effective communicators and disseminators of information to the rest of the student body. After class members blow up condoms, giving them a chance to handle various brands and observe the varying degrees of strength, statistical evidence about the contraceptive failure rate of condoms (0.6-14.7 per 100 women-years) is discussed. Spermicides, such as nonoxynol-9 used in conjunction with condoms, are also discussed, as are condoms for women, and the packaging and marketing of condoms, including those made from latex and from the caecum of sheep, the latter being of questionable effectiveness in preventing transmission of the virus. The care of terminal AIDS cases and current global and national statistics on AIDS are presented. The program also includes cash prizes for the best student essays on condom use, the distribution of condoms, condom key rings and T-shirts, and a student-run safe-sex stand during orientation week. All of these activities are intended to involve students and attract the interest of the undergraduate community. Questionnaires administered to students at the end of the course revealed that the lectures were received favorably. Questionnaires administered to new medical and English students attending orientation week revealed that 72% of students thought the stand was a good idea and 81% and 83%, respectively, found it...

  8. Brand Aid

    DEFF Research Database (Denmark)

    Richey, Lisa Ann; Ponte, Stefano

    2011-01-01

    Can Citizen Consumers Make a Difference? DIIS researcher contributes to a Boston Review ‘New Democracy Forum’. In the current issue of Boston Review (November/December 2011), contributors to a ‘New Democracy Forum’ debate whether citizen consumers can make a difference in stimulating responsible... activists, scholars and venture capitalists, discusses the pros and cons of changing the world by ‘voting with your dollars’. Lisa Ann Richey and Stefano Ponte (Professor at Roskilde University and Senior Researcher at DIIS respectively), authors of Brand Aid: Shopping Well to Save the World, highlight how...

  9. Audio-Visual Perception System for a Humanoid Robotic Head

    Directory of Open Access Journals (Sweden)

    Raquel Viciana-Abad

    2014-05-01

    Full Text Available One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.

  10. Audio-visual perception system for a humanoid robotic head.

    Science.gov (United States)

    Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M; Bandera, Juan P; Romero-Garces, Adrian; Reche-Lopez, Pedro

    2014-01-01

    One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared by considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
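
    The kind of Bayes inference this record describes for fusing audio and visual cues can be illustrated, under the standard assumption of independent Gaussian noise on each cue, by precision-weighted averaging. All values below are hypothetical, not from the paper:

    ```python
    def fuse_gaussian(mu_a, var_a, mu_v, var_v):
        """Precision-weighted fusion of two Gaussian location estimates.

        The fused mean weights each cue by its reliability (inverse variance);
        the fused variance is smaller than either unimodal variance.
        """
        w_a, w_v = 1.0 / var_a, 1.0 / var_v
        mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
        return mu, 1.0 / (w_a + w_v)

    # hypothetical cues: audio localizes the speaker coarsely, vision precisely
    mu, var = fuse_gaussian(mu_a=30.0, var_a=16.0, mu_v=20.0, var_v=4.0)
    # fused estimate is pulled toward the more reliable visual cue
    ```

    The reduction in variance relative to either cue alone is the formal sense in which fusing audio and visual data beats the unimodal systems the paper compares against.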

  11. Audio-Visual Integration Modifies Emotional Judgment in Music

    Directory of Open Access Journals (Sweden)

    Shen-Yuan Su

    2011-10-01

    Full Text Available The conventional view that perceived emotion in music is derived mainly from auditory signals has led to neglect of the contribution of visual images. In this study, we manipulated mode (major vs. minor) and examined the influence of a video image on emotional judgment in music. Melodies in either major or minor mode were controlled for tempo and rhythm and played to the participants. We found that Taiwanese participants, like Westerners, judged major melodies as expressing positive, and minor melodies negative, emotions. The major or minor melodies were then paired with video images of the singers, which were either emotionally congruent or incongruent with their modes. Results showed that participants perceived stronger positive or negative emotions with congruent audio-visual stimuli. Compared to listening to music alone, stronger emotions were perceived when an emotionally congruent video image was added and weaker emotions were perceived when an incongruent image was added. We therefore demonstrate that mode is important for perceiving emotional valence in music, and that treating musical art as a purely auditory event may forfeit the enhanced emotional strength perceived in music, since going to a concert may lead to stronger perceived emotion than listening to a CD at home.

  12. Audiovisual education and breastfeeding practices: A preliminary report

    Directory of Open Access Journals (Sweden)

    V. C. Nikodem

    1993-05-01

    Full Text Available A randomized controlled trial was conducted at the Coronation Hospital to evaluate the effect of audiovisual breastfeeding education. Within 72 hours after delivery, 340 women who agreed to participate were allocated randomly to view one of two video programmes, one of which dealt with breastfeeding. To determine the effect of the programme on infant feeding, a structured questionnaire was administered to 108 women who attended the six-week postnatal check-up. Alternative methods, such as telephonic interviews (24) and home visits (30), were used to obtain information from subjects who did not attend the postnatal clinic. Comparisons of mother-infant relationships and postpartum depression showed no significant differences. Similar proportions of each group reported that their baby was easy to manage, and that they felt close to and could communicate well with the infant. While the overall number of mothers who breast-fed was not significantly different between the two groups, there was a trend towards fewer mothers in the study group supplementing with bottle feeding. It was concluded that the effectiveness of audiovisual education alone is limited, and attention should be directed towards personal follow-up and support for breastfeeding mothers.

  13. Impact of language on functional connectivity for audiovisual speech integration

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and Heschl’s gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  14. Impact of language on functional connectivity for audiovisual speech integration.

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-Aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-08-11

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and the Heschl's gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration.

  15. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    Science.gov (United States)

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  16. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-06-30

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC.

  17. Neurofunctional underpinnings of audiovisual emotion processing in teens with autism spectrum disorders.

    Science.gov (United States)

    Doyle-Thomas, Krissy A R; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B C

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system.

  18. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these constraints is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are then taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the gain in compression efficiency is analyzed.
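A low-complexity audiovisual FoA detector of the kind described, based on correlating audio and video dynamics, might look as follows. The envelope and motion features and the synthetic example are illustrative assumptions; the record does not specify the actual feature extraction.

```python
import numpy as np

def av_focus_region(audio_envelope, region_motion):
    """Return the index of the spatial region whose motion magnitude best
    correlates (Pearson) with the audio energy envelope, i.e. the presumed
    audiovisual focus of attention.

    audio_envelope : (T,) frame-wise audio energy
    region_motion  : (R, T) frame-wise motion magnitude per region
    """
    a = audio_envelope - audio_envelope.mean()
    corrs = []
    for m in region_motion:
        v = m - m.mean()
        denom = np.sqrt((a ** 2).sum() * (v ** 2).sum())
        corrs.append((a * v).sum() / denom if denom > 0 else 0.0)
    return int(np.argmax(corrs))

# A speaker's mouth region moves in step with the audio envelope,
# while a background region moves at random.
t = np.linspace(0, 2 * np.pi, 100)
audio = np.abs(np.sin(3 * t))
regions = np.stack([np.random.default_rng(0).random(100),  # background
                    np.abs(np.sin(3 * t)) + 0.1])          # mouth-like motion
focus = av_focus_region(audio, regions)  # → 1
```

A foveated encoder would then allocate higher quality to the selected region and coarser quantization elsewhere.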

  19. Bibliographic control of audiovisuals: analysis of a cataloging project using OCLC.

    Science.gov (United States)

    Curtis, J A; Davison, F M

    1985-04-01

    The staff of the Quillen-Dishner College of Medicine Library cataloged 702 audiovisual titles between July 1, 1982, and June 30, 1983, using the OCLC database. This paper discusses the library's audiovisual collection and describes the method and scope of a study conducted during this project, the cataloging standards and conventions adopted, the assignment and use of NLM classification, the provision of summaries for programs, and the amount of staff time expended in cataloging typical items. An analysis of the use of OCLC for this project resulted in the following findings: the rate of successful searches for audiovisual copy was 82.4%; the error rate for records used was 41.9%; modifications were required in every record used; the Library of Congress and seven member institutions provided 62.8% of the records used. It was concluded that the effort to establish bibliographic control of audiovisuals is not widespread and that expanded and improved audiovisual cataloging by the Library of Congress and the National Library of Medicine would substantially contribute to that goal.

  20. Neurofunctional underpinnings of audiovisual emotion processing in teens with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Krissy A.R. Doyle-Thomas

    2013-05-01

    Full Text Available Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviours, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that during audiovisual emotion matching individuals with ASD may rely on a parietofrontal network to compensate for atypical brain activity elsewhere.

  1. Speech and non-speech audio-visual illusions: a developmental study.

    Directory of Open Access Journals (Sweden)

    Corinne Tremblay

    Full Text Available It is well known that simultaneous presentation of incongruent audio and visual stimuli can lead to illusory percepts. Recent data suggest that distinct processes underlie intersensory perception of speech as opposed to non-speech stimuli. However, the development of both speech and non-speech intersensory perception across childhood and adolescence remains poorly defined. Thirty-eight observers aged 5 to 19 were tested on the McGurk effect (an audio-visual illusion involving speech), the Illusory Flash effect and the Fusion effect (two audio-visual illusions not involving speech) to investigate the development of audio-visual interactions and contrast speech vs. non-speech developmental patterns. Whereas the strength of audio-visual speech illusions varied as a direct function of maturational level, performance on non-speech illusory tasks appeared to be homogeneous across all ages. These data support the existence of independent maturational processes underlying speech and non-speech audio-visual illusory effects.

  2. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  3. APPLICATION OF PARTIAL LEAST SQUARES REGRESSION FOR AUDIO-VISUAL SPEECH PROCESSING AND MODELING

    Directory of Open Access Journals (Sweden)

    A. L. Oleinik

    2015-09-01

    Full Text Available Subject of Research. The paper deals with the problem of reconstructing lip region images from a speech signal by means of Partial Least Squares regression. Such problems arise in connection with the development of audio-visual speech processing methods. Audio-visual speech consists of acoustic and visual components (called modalities). Applications of audio-visual speech processing methods include joint modeling of voice and lip movement dynamics, synchronization of audio and video streams, emotion recognition, and liveness detection. Method. Partial Least Squares regression was applied to solve the posed problem. This method extracts components of the initial data with high covariance, which are then used to build the regression model. The advantage of this approach lies in the possibility of achieving two goals: identification of latent interrelations between components of the initial data (e.g. speech signal and lip region image) and approximation of one data component as a function of another. Main Results. Experimental research on reconstruction of lip region images from the speech signal was carried out on the VidTIMIT audio-visual speech database. The results showed that Partial Least Squares regression is capable of solving the reconstruction problem. Practical Significance. The findings indicate that Partial Least Squares regression is applicable to a wide variety of audio-visual speech processing problems, from synchronization of audio and video streams to liveness detection.
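As a rough illustration of the method, the following numpy-only sketch implements a simplified PLS-SVD-style regression from synthetic "acoustic features" to synthetic "lip-image pixels". The data shapes and the SVD-based component extraction are assumptions for illustration; the paper's exact algorithm and the VidTIMIT data are not reproduced here.

```python
import numpy as np

def pls_svd_fit(X, Y, n_components=3):
    """Simplified PLS (PLS-SVD style): extract X-side directions with
    maximal covariance with Y, then regress Y on the resulting scores.
    Returns (B, x_mean, y_mean) such that Y ≈ (X - x_mean) @ B + y_mean."""
    x_mean, y_mean = X.mean(0), Y.mean(0)
    Xc, Yc = X - x_mean, Y - y_mean
    U, _, _ = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    W = U[:, :n_components]                       # covariance-maximizing weights
    T = Xc @ W                                    # latent scores
    B = W @ np.linalg.lstsq(T, Yc, rcond=None)[0]
    return B, x_mean, y_mean

# Toy stand-ins for the paper's data: X = per-frame acoustic features,
# Y = flattened lip-region image patches, both driven by shared factors.
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 13)) + 0.05 * rng.normal(size=(200, 13))
Y = latent @ rng.normal(size=(3, 64)) + 0.05 * rng.normal(size=(200, 64))

B, xm, ym = pls_svd_fit(X, Y, n_components=3)
Y_hat = (X - xm) @ B + ym  # reconstructed "lip images" from "speech"
```

Because both modalities share latent factors, a few PLS components capture most of the cross-modal structure, which is the core premise of the method.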

  4. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Science.gov (United States)

    2010-07-01

    ... transfer of USIA audiovisual records to the National Archives of the United States? 1256.96 Section 1256.96... Information Agency Audiovisual Materials in the National Archives of the United States § 1256.96 What provisions apply to the transfer of USIA audiovisual records to the National Archives of the United...

  5. AXES-RESEARCH - A user-oriented tool for enhanced multimodal search and retrieval in audiovisual libraries

    NARCIS (Netherlands)

    P. van der Kreeft (Peggy); K. Macquarrie (Kay); M.J. Kemman (Max); M. Kleppe (Martijn); K. McGuinness (Kevin)

    2014-01-01

    AXES, Access for Audiovisual Archives, is a research project developing tools for new, engaging ways to interact with audiovisual libraries, integrating advanced audio and video analysis technologies. The presented prototype is targeted at academic researchers and journalists. The tool al

  6. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    Science.gov (United States)

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  7. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  8. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    Directory of Open Access Journals (Sweden)

    Magnus eAlm

    2015-07-01

    Full Text Available Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20-30 years) and middle-aged adults (50-60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. In contrast, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood recurrent confirmation of the contribution of visual cues, induced by speech-reading proficiency, may gradually shift females’ AV perceptual strategy towards more visually dominated responses.

  9. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension…

  10. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available Abstract We propose a novel approach for video classification based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that characterize its content and structure. The aim of this work is to use the representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments on a set of 242 video documents show the effectiveness of our proposals.
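The idea of a Temporal Relation Matrix can be sketched as counts of interval relations between audio and video events, with document similarity derived from the distance between normalized matrices. The relation set below is a simplified subset of Allen's interval algebra, and the similarity measure is an assumption for illustration, not the paper's exact definition.

```python
import numpy as np

RELATIONS = ["before", "meets", "overlaps", "during", "equals", "after"]

def relation(a, b):
    """Classify the temporal relation between intervals a = (s1, e1) and
    b = (s2, e2); a simplified subset of Allen's interval algebra."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 > e2:
        return "after"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 >= s2 and e1 <= e2:
        return "during"
    return "overlaps"

def trm(audio_events, video_events):
    """Temporal Relation Matrix: normalized counts of each relation type
    over all (audio event, video event) pairs."""
    counts = {r: 0 for r in RELATIONS}
    for a in audio_events:
        for v in video_events:
            counts[relation(a, v)] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[r] / total for r in RELATIONS])

def trm_similarity(doc1, doc2):
    """Similarity in [0, 1]: 1 minus half the L1 distance between TRMs."""
    return 1.0 - 0.5 * np.abs(trm(*doc1) - trm(*doc2)).sum()

# Documents whose audio overlaps video in the same way are maximally similar.
sim = trm_similarity(([(0, 2)], [(1, 3)]), ([(0, 2)], [(1, 3)]))  # → 1.0
```

Classification then reduces to comparing a document's relation profile against those of labeled examples.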

  11. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual …-validation can evaluate models of audiovisual integration based on typical data sets, taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures…
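The maximum likelihood estimation principle underlying such models is classic inverse-variance cue weighting: each modality contributes in proportion to its reliability, and the fused estimate is never less reliable than either cue alone. This is a generic MLE cue-combination sketch, not the specific early-integration model introduced in the paper:

```python
def mle_fuse(x_a, var_a, x_v, var_v):
    """Maximum likelihood fusion of two unbiased Gaussian cues.

    The ML estimate weights each cue by its inverse variance, and the
    fused variance is never larger than that of either cue alone.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    fused = w_a * x_a + (1 - w_a) * x_v
    fused_var = 1 / (1 / var_a + 1 / var_v)
    return fused, fused_var

# A reliable visual cue (variance 1) dominates a noisy auditory one
# (variance 4): the fused estimate lands much closer to the visual value.
est, var = mle_fuse(x_a=0.0, var_a=4.0, x_v=1.0, var_v=1.0)  # est = 0.8, var = 0.8
```

The debate the abstract alludes to is about where in the processing chain this weighting happens (early, on feature representations, vs. late, on category decisions), not about the weighting rule itself.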

  12. Indexing method of digital audiovisual medical resources with semantic Web integration.

    Science.gov (United States)

    Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2005-03-01

    Digitalization of audiovisual resources and network capability offer many possibilities which are the subject of intensive work in scientific and industrial sectors. Indexing such resources is a major challenge. Recently, the Moving Picture Experts Group (MPEG) has developed MPEG-7, a standard for describing multimedia content. The goal of this standard is to develop a rich set of standardized tools to enable efficient retrieval from digital archives or the filtering of audiovisual broadcasts on the Internet. How could this kind of technology be used in the medical context? In this paper, we propose a simpler indexing system, based on the Dublin Core standard and compliant with MPEG-7. We use MeSH and the UMLS to introduce conceptual navigation. We also present a video platform which enables encoding and gives access to audiovisual resources in streaming mode.
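A minimal Dublin Core record for an audiovisual resource, with MeSH descriptors carried in `dc:subject` elements to support the conceptual navigation described, could be generated like this (the record's title, creator, and subject terms are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical record: a teaching video indexed with Dublin Core elements;
# MeSH descriptors are carried in dc:subject for conceptual navigation.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for elem, text in [("title", "Coronary angiography, step by step"),
                   ("creator", "Example Medical Faculty"),
                   ("format", "video/mp4"),
                   ("subject", "Coronary Angiography"),  # MeSH descriptor
                   ("subject", "Heart Diseases")]:       # MeSH descriptor
    ET.SubElement(record, f"{{{DC}}}{elem}").text = text

xml_out = ET.tostring(record, encoding="unicode")
```

Mapping `dc:subject` onto a controlled vocabulary such as MeSH is what allows broader/narrower-term navigation on top of an otherwise flat metadata scheme.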

  13. Sincronía entre formas sonoras y formas visuales en la narrativa audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. José Alfredo Sánchez Ríos

    1999-01-01

    Full Text Available Where should the researcher stand in order to carry out work that yields deeper knowledge for understanding a phenomenon as close and as complex as audiovisual communication, which uses sound and image at the same time? What is the role of the researcher in audiovisual communication in contributing new approaches to this object of study? From this perspective, we believe that the new task of the researcher in audiovisual communication will be to build a less interpretative-subjective theory and to direct observations towards segmented knowledge that can be demonstrated, repeated and self-questioned; that is, to study, elaborate and construct a theory with greater, renewed methodological rigor.

  14. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience concerns the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, whereas the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement fMRI signal amplitude for evaluating multimodal integration.
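A reproducibility index of the kind described (within-class similarity of trial-wise activation patterns) can be computed as the mean pairwise Pearson correlation between patterns. This particular formula is an illustrative assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation between a category's brain
    patterns (trials x voxels); higher values mean the category evokes
    more reproducible patterns."""
    P = np.asarray(patterns, dtype=float)
    P = P - P.mean(axis=1, keepdims=True)         # center each pattern
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    corr = P @ P.T                                # all pairwise correlations
    iu = np.triu_indices(len(P), k=1)             # upper triangle, no diagonal
    return corr[iu].mean()

# Three trials with proportional voxel patterns are perfectly reproducible
# (index ≈ 1), while unrelated patterns drive the index towards 0.
ri = reproducibility_index([[1, 2, 3, 4], [1, 2, 3, 4], [2, 4, 6, 8]])
```

Comparing this index across stimulus conditions (congruent AV vs. unimodal) is then a direct test of whether congruent multisensory input stabilizes the category's neural representation.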

  15. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, which is one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli were task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  16. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with that elicited by incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in the McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept.

  17. Audiovisual correspondence between musical timbre and visual shapes.

    Science.gov (United States)

    Adeli, Mohammad; Rouat, Jean; Molotchnikoff, Stéphane

    2014-01-01

    This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli e.g., simple tones have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound i.e., its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. One hundred and nineteen subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 claimed professional musicians, 47 claimed amateur musicians, and 36 claimed non-musicians. Thirty-one subjects have also claimed to have synesthesia-like experiences. A strong association between timbre of envelope normalized sounds and visual shapes was observed. Subjects have strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre.

  18. Audiovisual correspondence between musical timbre and visual shapes.

    Directory of Open Access Journals (Sweden)

    Mohammad eAdeli

    2014-05-01

    Full Text Available This article investigates the cross-modal correspondences between musical timbre and shapes. Previously, such features as pitch, loudness, light intensity, visual size, and color characteristics have mostly been used in studies of audio-visual correspondences. Moreover, in most studies, simple stimuli (e.g., simple tones) have been utilized. In this experiment, 23 musical sounds varying in fundamental frequency and timbre but fixed in loudness were used. Each sound was presented once against colored shapes and once against grayscale shapes. Subjects had to select the visual equivalent of a given sound, i.e., its shape, color (or grayscale) and vertical position. This scenario permitted studying the associations between normalized timbre and visual shapes as well as some of the previous findings for more complex stimuli. 119 subjects (31 females and 88 males) participated in the online experiment. Subjects included 36 claimed professional musicians, 47 claimed amateur musicians and 36 claimed non-musicians. 31 subjects also claimed to have synesthesia-like experiences. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes, and timbres having elements of softness and harshness together with a mixture of the two previous shapes. Color or grayscale had no effect on timbre-shape associations. Fundamental frequency was not associated with height, grayscale or color. The significant correspondence between timbre and shape revealed by the present work allows designing substitution systems which might help the blind to perceive shapes through timbre.

  19. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    Science.gov (United States)

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a
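The race model test used in this study (Miller's inequality) can be sketched as follows; the synthetic reaction times, grid, and function names are illustrative assumptions, not the study's analysis code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic reaction times (ms) for auditory-only, visual-only and
# audiovisual trials; the AV distribution is deliberately made faster.
rt_a = rng.normal(420, 50, 200)
rt_v = rng.normal(440, 50, 200)
rt_av = rng.normal(360, 50, 200)

def ecdf(rts, t):
    """Empirical CDF of the RTs evaluated at each time in t."""
    return (rts[:, None] <= t[None, :]).mean(axis=0)

def race_violation(rt_a, rt_v, rt_av, t):
    """Positive part of Miller's race model inequality:
    violation(t) = max(0, P(AV <= t) - [P(A <= t) + P(V <= t)])."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return np.maximum(ecdf(rt_av, t) - bound, 0.0)

t = np.arange(200, 700, 5)
violation = race_violation(rt_a, rt_v, rt_av, t)
# Trapezoidal area under the positive violation: a geometric summary.
area = float(((violation[1:] + violation[:-1]) / 2 * np.diff(t)).sum())
```

A positive area indicates multisensory responses faster than any race between independent unisensory processes can explain, which is the sense in which a geometric measure of the inequality quantifies integration.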

  20. Comunicación audiovisual, una experiencia basada en el blended learning en la universidad

    Directory of Open Access Journals (Sweden)

    Mariona Grané Oró

    2004-01-01

    Full Text Available In the Audiovisual Communication programme at the University of Barcelona, different media and different resources are made available to students and teachers from a blended learning perspective. However, access to different media does not in itself guarantee the quality of the teaching and learning processes. Knowing the available resources, and knowing how to plan the process and organise their use, is the key to training Audiovisual Communication students.

  1. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    OpenAIRE

    Sevtap GÜNAY KÖPRÜLÜ

    2016-01-01

    Audiovisual translation, dating back to the silent film era, is a special translation method developed for the translation of movies and programs shown on TV and in cinemas. Therefore, in the beginning, the term "film translation" was used for this type of translation. Due to the growing number of audiovisual texts, it has attracted the interest of scholars and has come to be assessed within translation studies. Also in our country the concept of film translation was used for this ...

  2. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel;

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between data sets (2) and (3) suggested superior postural stability...

  3. O audiovisual na era Youtube: pro-amadores e o mercado

    Directory of Open Access Journals (Sweden)

    Meili, Angela Maria

    2011-01-01

    Full Text Available This article discusses the emergence of video formats for the internet, together with a new audiovisual economy in which the boundaries between amateurism and professionalism are less clearly defined. It reflects on the YouTube platform and the formation of this audiovisual market, which maintains close ties with traditional media formats and methods while preserving a collaborative structure that encourages new talent and free expression

  4. A narrativa audiovisual publicitária : a forma comercial e a forma social

    OpenAIRE

    Vieira, Claúdia Virgínia Fernandes

    2009-01-01

    Master's dissertation in Communication Sciences - specialisation in Audiovisual and Multimedia. Audiovisual advertising is generally regarded as a way of selling products or services, and for the most part that is what advertisements set out to do. There is, however, another type of advertisement, the institutional one, here called social advertising, made to warn the public about risk situations or to call for the improvement of matters of relevance to society ...

  5. UNDERSTANDING PROSE THROUGH TASK ORIENTED AUDIO-VISUAL ACTIVITY: AN AMERICAN MODERN PROSE COURSE AT THE FACULTY OF LETTERS, PETRA CHRISTIAN UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Sarah Prasasti

    2001-01-01

    Full Text Available The method presented here provides the basis for a course in American prose for EFL students. Understanding and appreciating American prose is a difficult task for the students because they come into contact with works that are full of cultural baggage and far removed from their own world. The audio-visual aid is one of the alternatives for sensitizing students to the topic and the cultural background. Instead of providing ready-made audio-visual aids, teachers can involve students in a more task-oriented audio-visual project. Here, the teachers encourage their students to create their own audio-visual aids, using colors, pictures, sound, and gestures as a point of initiation for further discussion. The students can use color, which has become a strong element of fiction, to help them call up a forceful visual representation. Pictures can also stimulate the students to build their mental images. Sound and silence, which are part of the fabric of literature, may also help to increase the emotional impact.

  6. AIDS.gov

    Science.gov (United States)

    AIDS.gov offers HIV/AIDS basics and federal resources, and reports the latest progress of the National HIV/AIDS Strategy (NHAS).

  7. Aids for Handicapped Readers.

    Science.gov (United States)

    Library of Congress, Washington, DC. Div. for the Blind and Physically Handicapped.

    The reference circular provides information on approximately 50 reading and writing aids intended for physically or visually handicapped individuals. Described are low vision aids, aids for holding a book or turning pages, aids for reading in bed, handwriting aids, typewriters and accessories, braille writing equipment, sound reproducers, and aids…

  8. Macroeconomic Issues in Foreign Aid

    DEFF Research Database (Denmark)

    Hjertholm, Peter; Laursen, Jytte; White, Howard

    foreign aid, macroeconomics of aid, gap models, aid fungibility, fiscal response models, foreign debt

  9. Crawling Aid

    Science.gov (United States)

    1982-01-01

    The Institute for the Achievement of Human Potential developed an effective crawling aid known as the Vehicle for Initial Crawling (VIC); the acronym is a tribute to the device's inventor, Hubert "Vic" Vykukal. The VIC is used by brain-injured children who are unable to crawl because of the weight-bearing and friction problems caused by gravity. It is a rounded plywood frame large enough to support the child's torso, leaving arms and legs free to move. On its underside are three aluminum discs through which air is pumped to create an air-bearing surface with less friction than a film of oil. The upper side contains the connection to the air supply and a pair of straps that restrain the child and cause the device to move with him. The VIC is intended to recreate the normal neurological connection between brain and muscles. Through repeated use of the device the child develops his arm and leg muscles as well as coordination. Children are given alternating therapy, with and without the VIC, until eventually the device is no longer needed.

  10. HIV and AIDS

    Science.gov (United States)

    HIV is actually the virus that causes the disease AIDS. HIV hurts the immune system. People who are HIV ...

  11. Heart attack first aid

    Science.gov (United States)

    First aid - heart attack; First aid - cardiopulmonary arrest; First aid - cardiac arrest ... A heart attack occurs when the blood flow that carries oxygen to the heart is blocked. The heart muscle ...

  12. Breathing difficulties - first aid

    Science.gov (United States)

    Difficulty breathing - first aid; Dyspnea - first aid; Shortness of breath - first aid ... Breathing difficulty is almost always a medical emergency. An exception is feeling slightly winded from normal activity, ...

  13. An audio-visual corpus for multimodal speech recognition in Dutch language

    NARCIS (Netherlands)

    Wojdel, J.; Wiggers, P.; Rothkrantz, L.J.M.

    2002-01-01

    This paper describes the gathering and availability of an audio-visual speech corpus for Dutch language. The corpus was prepared with the multi-modal speech recognition in mind and it is currently used in our research on lip-reading and bimodal speech recognition. It contains the prompts used also i

  14. Strategies for Media Literacy: Audiovisual Skills and the Citizenship in Andalusia

    Science.gov (United States)

    Aguaded-Gómez, Ignacio; Pérez-Rodríguez, M. Amor

    2012-01-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (society-network), where information and communication…

  15. Comparisons of Audio and Audiovisual Measures of Stuttering Frequency and Severity in Preschool-Age Children

    Science.gov (United States)

    Rousseau, Isabelle; Onslow, Mark; Packman, Ann; Jones, Mark

    2008-01-01

    Purpose: To determine whether measures of stuttering frequency and measures of overall stuttering severity in preschoolers differ when made from audio-only recordings compared with audiovisual recordings. Method: Four blinded speech-language pathologists who had extensive experience with preschoolers who stutter measured stuttering frequency and…

  16. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    NARCIS (Netherlands)

    Talsma, Durk; Senkowski, Daniel; Woldorff, Marty G.

    2009-01-01

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory

  17. Evaluation of Modular EFL Educational Program (Audio-Visual Materials Translation & Translation of Deeds & Documents)

    Science.gov (United States)

    Imani, Sahar Sadat Afshar

    2013-01-01

    Modular EFL Educational Program has managed to offer specialized language education in two specific fields: Audio-visual Materials Translation and Translation of Deeds and Documents. However, no explicit empirical studies can be traced on both internal and external validity measures as well as the extent of compatibility of both courses with the…

  18. Acceptance of online audio-visual cultural heritage archive services: a study of the general public

    NARCIS (Netherlands)

    Ongena, G.; Wijngaert, van de L.A.L.; Huizer, E.

    2013-01-01

    Introduction. This study examines the antecedents of user acceptance of an audio-visual heritage archive for a wider audience (i.e., the general public) by extending the technology acceptance model with the concepts of perceived enjoyment, nostalgia proneness and personal innovativeness. Method. A W

  19. The audiovisual communication policy of the socialist Government (2004-2009: A neoliberal turn

    Directory of Open Access Journals (Sweden)

    Ramón Zallo, Ph. D.

    2010-01-01

    Full Text Available The first legislature of Jose Luis Rodriguez Zapatero's government (2004-08) generated important initiatives for some progressive changes in the public communicative system. However, all of these initiatives were dissolved in the second legislature to give way to an unregulated and privatizing model that is detrimental to the public service. Three phases can be distinguished chronologically: the first is characterized by interesting reforms; it is followed by contradictory reforms and, in the second legislature, by an accumulation of counter-reforms that lead the system towards a communicative model completely different from the one devised in the first legislature. This indicates that there have been not one but two different audiovisual policies, running the cyclical route of audiovisual policy from one end to the other. The emphasis has shifted from public service to private concentration; from decentralization to centralization; from the diffusion of knowledge to the accumulation and appropriation of cognitive capital; and from the Keynesian model - combined with the Schumpeterian model and a preference for social access - to a delayed return to the neoliberal model, after the market had been distorted through public decisions benefiting the most important audiovisual service providers. All this seems to crystallize in the striking process of concentration under way among audiovisual service providers, into two large groups: one formed by Mediaset and Sogecable, and another - still under negotiation - by Antena 3 and Imagina. The result is a combination of neo-statist restructuring of the market and neo-liberalism.

  20. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  1. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... advertising. In the case of advertisements for smokeless tobacco on videotapes, cassettes, or...

  2. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians had practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
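A temporal integration window of the kind measured in this study can be estimated by fitting a tuning curve to synchrony judgments across onset asynchronies. A minimal sketch with a Gaussian fitted by grid search (all data and the fitting choice are illustrative assumptions, not the authors' method):

```python
import numpy as np

# Stimulus onset asynchronies (ms) as in the study, plus synthetic
# proportions of "synchronous" judgments peaking at 0 ms.
soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360], dtype=float)
p_sync = np.exp(-(soas ** 2) / (2 * 120.0 ** 2))  # true width: 120 ms
p_sync = np.clip(
    p_sync + np.random.default_rng(2).normal(0, 0.03, soas.size), 0, 1)

def fit_window_width(soas, p, widths=np.arange(40.0, 400.0, 1.0)):
    """Grid-search the Gaussian width (ms) minimising squared error;
    the fitted width is one way to summarise the integration window."""
    errs = [np.sum((p - np.exp(-(soas ** 2) / (2 * w ** 2))) ** 2)
            for w in widths]
    return float(widths[int(np.argmin(errs))])

width = fit_window_width(soas, p_sync)
```

On this summary, a smaller fitted width corresponds to the narrower audiovisual synchrony windows reported for musicians.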

  3. Comparing Infants' Preference for Correlated Audiovisual Speech with Signal-Level Computational Models

    Science.gov (United States)

    Hollich, George; Prince, Christopher G.

    2009-01-01

    How much of infant behaviour can be accounted for by signal-level analyses of stimuli? The current paper directly compares the moment-by-moment behaviour of 8-month-old infants in an audiovisual preferential looking task with that of several computational models that use the same video stimuli as presented to the infants. One type of model…

  4. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    Science.gov (United States)

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  5. Hearing impairment and audiovisual speech integration ability: a case study report.

    Science.gov (United States)

    Altieri, Nicholas; Hudock, Daniel

    2014-01-01

    Research in audiovisual speech perception has demonstrated that sensory factors such as auditory and visual acuity are associated with a listener's ability to extract and combine auditory and visual speech cues. This case study report examined audiovisual integration using a newly developed measure of capacity in a sample of hearing-impaired listeners. Capacity assessments are unique because they examine the contribution of reaction-time (RT) as well as accuracy to determine the extent to which a listener efficiently combines auditory and visual speech cues relative to independent race model predictions. Multisensory speech integration ability was examined in two experiments: an open-set sentence recognition and a closed set speeded-word recognition study that measured capacity. Most germane to our approach, capacity illustrated speed-accuracy tradeoffs that may be predicted by audiometric configuration. Results revealed that some listeners benefit from increased accuracy, but fail to benefit in terms of speed on audiovisual relative to unisensory trials. Conversely, other listeners may not benefit in the accuracy domain but instead show an audiovisual processing time benefit.

  6. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  7. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    Science.gov (United States)

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., the range of asynchronies over which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows for audiovisual binding were narrower than those of nonmusicians. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
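One simple way to summarise susceptibility and the temporal binding window in a double-flash paradigm is the range of onset asynchronies at which illusory double flashes are reported on most trials. A sketch over synthetic trial counts (the data structure and the majority-report criterion are assumptions, not the study's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)

soas = np.arange(-300, 301, 50)   # beep-flash asynchronies (ms)
n_trials = 40

# Synthetic illusion probability: high near synchrony, decaying with |SOA|.
p_illusion = 0.9 * np.exp(-np.abs(soas) / 150.0)
reports = rng.binomial(n_trials, p_illusion)   # illusory reports per SOA

rate = reports / n_trials
window_soas = soas[rate > 0.5]    # SOAs with majority illusion reports
window_width = (window_soas.max() - window_soas.min()
                if window_soas.size else 0)
```

On this summary, a smaller `window_width` corresponds to the narrower binding windows reported for musicians, and a higher overall `rate` to the greater illusion susceptibility of nonmusicians.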

  8. Convergent Cultures: the Disappearance of Commissioned Audiovisual Productions in the Netherlands

    NARCIS (Netherlands)

    B. Agterberg (Bas)

    2014-01-01

    textabstractThe article analyses the changes in production and consumption in the audiovisual industry and the way the so-called ‘ephemeral’ commissioned productions are scarcely preserved. New technologies and the liberal economic policies and internationalisation changed the media landscape in the

  9. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    Science.gov (United States)

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  10. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ...''). 77 FR 22803 (Apr. 11, 2012). The complaint alleged violations of section 337 of the Tariff Act of... disapprove the Commission's action. See Presidential Memorandum of July 21, 2005, 70 FR 43251 (July 26, 2005... COMMISSION Certain Audiovisual Components and Products Containing the Same; Commission Determination...

  11. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
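
    The steady-state quantification described above boils down to measuring response amplitude at each stimulus's tagging frequency. A rough sketch, with a synthetic signal standing in for EEG and a one-bin DFT standing in for the paper's spectral analysis (the amplitudes are assumed values, not the study's results):

```python
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of one frequency component via a single-bin DFT, the kind
    of spectral measure used to quantify steady-state responses."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

# Synthetic stand-in for an EEG trace: two stimuli "tagged" at 3.14 and
# 3.63 Hz, one with twice the response amplitude (hypothetical values).
fs = 500.0  # sampling rate, Hz
t = [i / fs for i in range(5000)]  # 10 s of samples
eeg = [1.0 * math.sin(2 * math.pi * 3.14 * x)
       + 0.5 * math.sin(2 * math.pi * 3.63 * x) for x in t]

a_attended = amplitude_at(eeg, fs, 3.14)
a_unattended = amplitude_at(eeg, fs, 3.63)
```

    Comparing such amplitudes across attention and synchrony conditions is how the gain effects reported above would be read out.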

  12. Multimodal indexing of digital audio-visual documents: A case study for cultural heritage data

    NARCIS (Netherlands)

    Carmichael, J.; Larson, M.; Marlow, J.; Newman, E.; Clough, P.; Oomen, J.; Sav, S.

    2008-01-01

    This paper describes a multimedia multimodal information access sub-system (MIAS) for digital audio-visual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their

  13. Effects of audio-visual information and mode of speech on listener perceptions of alaryngeal speakers.

    Science.gov (United States)

    Evitts, Paul M; Van Dine, Ami; Holler, Aline

    2009-01-01

    There is minimal research on listener perceptions of an individual with a laryngectomy (IWL) based on audio-visual information. The aim of this research was to provide preliminary insight into whether listeners have different perceptions of an individual with a laryngectomy based on mode of presentation (audio-only vs. audio-visual) and mode of speech (tracheoesophageal, oesophageal, electrolaryngeal, normal). Thirty-four naïve listeners were randomly presented with a standard reading passage produced by one typical speaker from each mode of speech in both audio-only and audio-visual presentation modes. Listeners used a visual analogue scale (10 cm line) to indicate their perceptions of each speaker's personality. A significant effect of mode of speech was present. There was no significant difference in listener perceptions between modes of presentation using individual ratings. However, principal component analysis showed ratings were more favourable in the audio-visual mode. Results of this study suggest that visual information may have only a minor impact on listener perceptions of a speaker's personality and that mode of speech and degree of speech proficiency may play only a small role in listener perceptions. However, results should be interpreted with caution as they are based on only one speaker per mode of speech.

  14. Media literacy: no longer the shrinking violet of European audiovisual media regulation?

    NARCIS (Netherlands)

    McGonagle, T.; Nikoltchev, S.

    2011-01-01

    The lead article in this IRIS plus provides a critical analysis of how the European audiovisual regulatory and policy framework seeks to promote media literacy. It examines pertinent definitional issues and explores the main rationales for the promotion of media literacy as a regulatory and policy g

  15. Catalogo de peliculas educativas y otros materiales audiovisuales (Catalogue of Educational Films and other Audiovisual Materials).

    Science.gov (United States)

    Encyclopaedia Britannica, Inc., Chicago, IL.

    This catalogue of educational films and other audiovisual materials consists predominantly of films in Spanish and English which are intended for use in elementary and secondary schools. A wide variety of topics including films for social studies, language arts, humanities, physical and natural sciences, safety and health, agriculture, physical…

  16. Eyewitnesses of History: Italian Amateur Cinema as Cultural Heritage and Source for Audiovisual and Media Production

    NARCIS (Netherlands)

    Simoni, Paolo

    2015-01-01

    The role of amateur cinema as archival material in Italian media productions has only recently been discovered. Italy, as opposed to other European countries, lacked a local, regional and national policy for the collection and preservation of private audiovisual documents, which led, as a re

  17. Code CoAN 2010: The first Code of Audiovisual Media Co-regulation in Spain

    Directory of Open Access Journals (Sweden)

    Mercedes Muñoz-Saldaña, Ph.D.

    2011-01-01

    On 17 November 2009 the first co-regulation code for the audiovisual media sector was established in Spain: the “2010 Co-regulation Code for the Quality of Audiovisual Contents in Navarra”. This Code is pioneering in the field and, taking into account the content of the recently approved General Law on Audiovisual Communication, is an example of the kind of work that shall be carried out in the future by Spain’s National Media Council (Consejo Estatal de Medios Audiovisuales, aka CEMA) or the corresponding regulatory body. This initiative shows the need to apply co-regulatory codes to the national systems of regulation in the audiovisual sector, as the European institutions urged in their latest Directive in 2010. This article addresses three issues that demonstrate the need for and advantages of applying co-regulation practices to guarantee the protection of minors, pluralism, and the promotion of media literacy: the failure of traditional regulatory instruments and the inefficiency of self-regulation; the conceptual definition of co-regulation as an instrument separate from self-regulation and regulation; and the added value of co-regulation in its application to concrete areas.

  18. The effect of spatial-temporal audiovisual disparities on saccades in a complex scene

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Bell, A.H.; Munoz, D.P.; Opstal, A.J. van

    2009-01-01

    In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV-background (Corneil et al. in J Neurophysiol 88:438-454, 2002). In those experiments both stimulus modalit

  19. Audiovisual Translation and Assistive Technology: Towards a Universal Design Approach for Online Education

    Science.gov (United States)

    Patiniotaki, Emmanouela

    2016-01-01

    Audiovisual Translation (AVT) and Assistive Technology (AST) are two fields that share common grounds within accessibility-related research, yet they are rarely studied in combination. The reason most often lies in the fact that they have emerged from different disciplines, i.e. Translation Studies and Computer Science, making a possible combined…

  20. Psychophysics of the McGurk and Other Audiovisual Speech Integration Effects

    Science.gov (United States)

    Jiang, Jintao; Bernstein, Lynne E.

    2011-01-01

    When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses: auditory correct, visual correct, fusion (the so-called "McGurk effect"), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for…

  1. Audiovisual Speech Perception in Children with Developmental Language Disorder in Degraded Listening Conditions

    Science.gov (United States)

    Meronen, Auli; Tiippana, Kaisa; Westerholm, Jari; Ahonen, Timo

    2013-01-01

    Purpose: The effect of the signal-to-noise ratio (SNR) on the perception of audiovisual speech in children with and without developmental language disorder (DLD) was investigated by varying the noise level and the sound intensity of acoustic speech. The main hypotheses were that the McGurk effect (in which incongruent visual speech alters the…

  2. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    Science.gov (United States)

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  3. Home Health Aides

    Science.gov (United States)

    ... Help to keep clients engaged in their social networks and communities. Home health aides, unlike personal care aides, typically work ... self-care and everyday tasks. They also provide social supports and assistance that enable clients to participate in their ... For more information about home health aides, including voluntary credentials for aides, visit ...

  4. Aid and Development

    DEFF Research Database (Denmark)

    Tarp, Finn

    Foreign aid looms large in the public discourse; and international development assistance remains squarely on most policy agendas concerned with growth, poverty and inequality in Africa and elsewhere in the developing world. The present review takes a retrospective look at how foreign aid has evolved since World War II in response to a dramatically changing global political and economic context. I review the aid process and associated trends in the volume and distribution of aid and categorize some of the key goals, principles and institutions of the aid system. The evidence on whether aid has ... for aid in the future ...

  5. Types of Foreign Aid

    DEFF Research Database (Denmark)

    Bjørnskov, Christian

    Foreign aid is given for many purposes and different intentions, yet most studies treat aid flows as a unitary concept. This paper uses factor analysis to separate aid flows into different types. The main types can be interpreted as aid for economic purposes, social purposes, and reconstruction; a residual category captures remaining purposes. Estimating the growth effects of separable types of aid suggests that most aid has no effects while reconstruction aid has direct positive effects. Although this type only applies in special circumstances, it has become more prevalent in more recent years.

  6. On Copyright of Audiovisual Works

    Institute of Scientific and Technical Information of China (English)

    董思远

    2013-01-01

    This paper introduces the connotation and denotation of audiovisual works, analyzes the relationship between audiovisual works and video recordings, and then, by drawing on other countries' legislation on the ownership of copyright in audiovisual works and balancing the interests between producers and authors of audiovisual works, puts forward some opinions and suggestions for the amendment of the Copyright Law.

  7. Expressing the Needs of Digital Audio-Visual Applications in Different Communities of Practice for Long Term Preservation

    OpenAIRE

    Kumar, Naresh

    2014-01-01

    Digital audio-visual preservation is a central concern of research in today's digital world, where the use of audio-visual material in the creation and storage of research data has increased rapidly. This growth has created many new problems regarding maintenance, preservation and future accessibility. Lack of awareness of preservation tools and applications is a major issue today. To address such issues, a European Commission research project, Presto4U, aimed to enable semi-automa...

  8. THE IMPROVEMENT OF AUDIO-VISUAL BASED DANCE APPRECIATION LEARNING AMONG PRIMARY TEACHER EDUCATION STUDENTS OF MAKASSAR STATE UNIVERSITY

    OpenAIRE

    Wahira

    2014-01-01

    This research aimed to improve the dance appreciation skills of students of Primary Teacher Education at Makassar State University, to improve their perception of audio-visual based art appreciation, to increase their interest in the audio-visual based art education subject, and to increase their responses to the subject. This research was classroom action research using the research design of Kemmis & McTaggart, conducted with 42 students of Prim...

  9. Panorama de les fonts audiovisuals internacionals en televisió : contingut, gestió i drets

    Directory of Open Access Journals (Sweden)

    López de Solís, Iris

    2014-12-01

    Spain's generalist television channels (national and regional) draw on a range of audiovisual sources to report on international news, such as agencies, news-exchange consortia and correspondent networks. Based on data provided by several channels, this article examines the coverage, use and management of these sources, as well as their usage and archiving rights, and analyzes the history and online tools of the most widely used agencies. Finally, it describes the daily work of TVE's Eurovision department, which a few months ago incorporated documentalists who, in addition to cataloguing the audiovisual material, also carry out editing and production tasks.

  10. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.

  11. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

    This paper examines the relationship between foreign aid and growth in real GDP per capita as it emerges from simple augmentations of popular cross country growth specifications. It is shown that aid in all likelihood increases the growth rate, and this result is not conditional on ‘good’ policy. There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via...

  12. Aid and Development

    DEFF Research Database (Denmark)

    Tarp, Finn

    Foreign aid looms large in the public discourse; and international development assistance remains squarely on most policy agendas concerned with growth, poverty and inequality in Africa and elsewhere in the developing world. The present review takes a retrospective look at how foreign aid has evolved since World War II in response to a dramatically changing global political and economic context. I review the aid process and associated trends in the volume and distribution of aid and categorize some of the key goals, principles and institutions of the aid system. The evidence on whether aid has ...

  13. Aid and development

    DEFF Research Database (Denmark)

    Tarp, Finn

    2006-01-01

    Foreign aid looms large in the public discourse; and international development assistance remains squarely on most policy agendas concerned with growth, poverty and inequality in Africa and elsewhere in the developing world. The present review takes a retrospective look at how foreign aid has evolved since World War II in response to a dramatically changing global political and economic context. I review the aid process and associated trends in the volume and distribution of aid and categorize some of the key goals, principles and institutions of the aid system. The evidence on whether aid has ...

  14. Key elements of the audiovisual policy of the International Organization of la Francophonie / Líneas generales de la política audiovisual de la Organización Internacional de la Francofonía

    Directory of Open Access Journals (Sweden)

    Lic. Félix Redondo Casado; fredondo@inst.uc3m.es

    2009-01-01

    This paper investigates the key elements of the audiovisual policy of the International Organization of la Francophonie (OIF). The hypothesis tested is that la Francophonie's audiovisual policy rests on a particular underlying conception of the audiovisual sector. The study is exploratory in nature and covers the last ten years of la Francophonie. The research uses a mixed methodological approach combining quantitative and qualitative data collection and analysis. Several elements were analyzed: frameworks for action and declarations, the structure of the organization in the audiovisual area, and its programmes and major projects. One of the most important conclusions of this study is that the audiovisual policy of the OIF is characterized by diversity, as well as by its link with culture. However, the OIF seeks above all to ensure the presence of the French universe, overlooking the voices of the rest of the organization.

  15. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled sentences in German with a balanced phoneme repertoire; a set of ... visual lip features is used. Phoneme-related receptive fields result on the SOM basis; they are speaker dependent and show individual locations and strain. Overlapping main slopes indicate a high similarity of respective units; distortion or extra peaks originate from the influence of other units. Dependent on the training data, these other units may also be contextually immediate neighboring units. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst ...
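
    A minimal illustration of the underlying technique: a 1-D Kohonen-style SOM trained on toy two-dimensional feature vectors (assumed data, not the poster's audio-visual features), showing how nearby units come to respond to similar inputs:

```python
import math
import random

def train_som(data, n_units=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D self-organizing map on feature vectors (lists of floats).
    Neighbouring units end up responding to similar inputs, which is the
    topology preservation the abstract refers to."""
    rnd = random.Random(seed)
    dim = len(data[0])
    units = [[rnd.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)              # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5  # shrinking neighbourhood radius
        for x in data:
            # best-matching unit = unit with the closest weight vector
            bmu = min(range(n_units),
                      key=lambda u: sum((w - v) ** 2 for w, v in zip(units[u], x)))
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2.0 * sigma ** 2))  # neighbourhood
                units[u] = [w + lr * h * (v - w) for w, v in zip(units[u], x)]
    return units

def best_unit(units, x):
    return min(range(len(units)),
               key=lambda u: sum((w - v) ** 2 for w, v in zip(units[u], x)))

# Toy feature vectors: two clusters standing in for two phoneme classes
data = [[0.10, 0.10], [0.15, 0.05], [0.90, 0.90], [0.85, 0.95]]
units = train_som(data)

# Percepts from different clusters map to different units; similarity can be
# read off as distance between their best-matching units on the map.
unit_a = best_unit(units, [0.10, 0.10])
unit_b = best_unit(units, [0.90, 0.90])
```

    On the trained map, the distance between two inputs' best-matching units serves as the similarity measure the title refers to.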

  16. Prioritized MPEG-4 Audio-Visual Objects Streaming over the DiffServ

    Institute of Scientific and Technical Information of China (English)

    HUANG Tian-yun; ZHENG Chan

    2005-01-01

    The object-based scalable coding in MPEG-4 is investigated, and a prioritized transmission scheme for MPEG-4 audio-visual objects (AVOs) over the DiffServ network with QoS guarantees is proposed. MPEG-4 AVOs are extracted and classified into different groups according to their priority values and scalable layers (visual importance). These priority values are mapped to the IP DiffServ per-hop behaviors (PHBs). This scheme can selectively discard packets of low importance in order to avoid network congestion. Simulation results show that the quality of the received video gracefully adapts to the network state, compared with 'best-effort' delivery. Also, by allowing the content provider to define the prioritization of each audio-visual object, the adaptive transmission of object-based scalable video can be customized based on the content.
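
    The core of such a scheme is a mapping from object/layer priority to DiffServ per-hop behaviours, plus priority-ordered discard under congestion. The sketch below uses the standard EF/AF DSCP codepoints, but the priority tiers and the tier-to-PHB mapping are illustrative assumptions, not the authors' exact scheme:

```python
# Map scalable MPEG-4 audio-visual object (AVO) layers to DiffServ per-hop
# behaviours (PHBs) and discard low-importance packets first under congestion.
# DSCP values are the standard EF/AF codepoints; the tiers are hypothetical.

DSCP = {"EF": 46, "AF41": 34, "AF42": 36, "BE": 0}

def phb_for(priority):
    """Higher-priority layers (e.g. audio, base video layer) get stronger
    forwarding guarantees; enhancement layers degrade toward best effort."""
    if priority >= 3:
        return "EF"    # e.g. audio object: loss is most disruptive
    if priority == 2:
        return "AF41"  # base video layer
    if priority == 1:
        return "AF42"  # first enhancement layer
    return "BE"        # remaining enhancement layers

def drop_under_congestion(packets, capacity):
    """Keep the `capacity` most important packets, discarding the least
    important first (graceful degradation instead of random loss)."""
    ranked = sorted(packets, key=lambda p: p["priority"], reverse=True)
    return ranked[:capacity]

packets = [{"id": i, "priority": p} for i, p in enumerate([3, 2, 1, 0, 0, 1])]
kept = drop_under_congestion(packets, capacity=4)
```

    Under congestion, only the enhancement-layer packets are lost, so the received video degrades gracefully rather than arbitrarily.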

  17. Joint evaluation of communication quality and user experience in an audio-visual virtual reality meeting

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard; Hoffmann, Pablo F.; Carrozzino, Marcello

    2013-01-01

    The state-of-the-art speech intelligibility tests are created with the purpose of evaluating acoustic communication devices, not audio-visual virtual reality systems. This paper presents a novel method to evaluate a communication situation based on both the speech intelligibility and the indexical characteristics of the speaker. The results will be available in the final paper. Index Terms: speech intelligibility, virtual reality, body language, telecommunication.

  18. Development of an audiovisual speech perception app for children with autism spectrum disorders.

    Science.gov (United States)

    Irwin, Julia; Preston, Jonathan; Brancazio, Lawrence; D'angelo, Michael; Turcios, Jacqueline

    2015-01-01

    Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, aged 8-10, are presented, showing that the children improved their performance on an untrained auditory speech-in-noise task.

  19. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies, and lipreading can then give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic and bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated in amplitude an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  20. Stream Weight Training Based on MCE for Audio-Visual LVCSR

    Institute of Scientific and Technical Information of China (English)

    LIU Peng; WANG Zuoying

    2005-01-01

    In this paper we address the problem of audio-visual speech recognition in the framework of the multi-stream hidden Markov model. Stream weight training based on the minimum classification error criterion is discussed for use in large vocabulary continuous speech recognition (LVCSR). We present lattice re-scoring and Viterbi approaches for calculating the loss function of continuous speech. The experimental results show that, in the case of clean audio, the system achieves a 36.1% relative reduction in word error rate when using state-based stream weights trained by the Viterbi approach, compared to an audio-only speech recognition system. Further experimental results demonstrate that our audio-visual LVCSR system provides a significant enhancement of robustness in noisy environments.
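
    In a multi-stream HMM, the stream weights enter as exponents on the per-stream likelihoods, which becomes a weighted sum of log-likelihoods; the MCE weight training itself is omitted here. A minimal sketch with hypothetical scores:

```python
# Multi-stream combination of audio and visual log-likelihoods.
# All scores and weight values below are hypothetical illustrations.

def combined_score(audio_ll, visual_ll, w_audio):
    """Multi-stream HMM state score: exponent weights on the stream
    likelihoods become a weighted sum of log-likelihoods."""
    return w_audio * audio_ll + (1.0 - w_audio) * visual_ll

def classify(scores, w_audio):
    """Pick the class with the best combined score.
    `scores` maps class -> (audio log-likelihood, visual log-likelihood)."""
    return max(scores, key=lambda c: combined_score(*scores[c], w_audio))

# Hypothetical per-class scores: in noise the audio stream is misleading,
# while the visual (lip) stream still favours the right answer.
scores = {"ba": (-4.0, -1.0), "ga": (-3.5, -5.0)}

clean_choice = classify(scores, w_audio=0.9)  # trust the audio stream
noisy_choice = classify(scores, w_audio=0.3)  # down-weight noisy audio
```

    Training (whether by MCE or otherwise) amounts to choosing `w_audio` per condition or per state so that classification error is minimized.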

  1. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.

  2. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    In this article we address the relationship between audiovisual translation and new technologies, and we describe the characteristics of the audiovisual translator's workstation, particularly in the case of dubbing and voice-over. After presenting the tools the translator needs to carry out this task satisfactorily, and pointing out future directions, we offer a list of the resources translators usually consult to solve translation problems, with emphasis on those available on the Internet.

  3. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical...

  4. Globalization and pluralism: the function of public TV in the European audiovisual market

    OpenAIRE

    2007-01-01

    European audiovisual legislation focuses exclusively on a concept of external pluralism. It therefore seems necessary to adopt other policies and develop new measures to guarantee diversity. In order to implement this reform, a new, richer concept of pluralism must be sought that reflects the reality of the market. This would enable us to devise instruments to measure the real presence of pluralism in the media, and perform effective regulation to defend this right at every level. The ai...

  5. Child's dental fear: Cause-related factors and the influence of audiovisual modeling

    Directory of Open Access Journals (Sweden)

    Jayanthi Mungara

    2013-01-01

    Full Text Available Background: Delivery of effective dental treatment to a child patient requires thorough knowledge to recognize dental fear and its management by the application of behavioral management techniques. The Children's Fear Survey Schedule - Dental Subscale (CFSS-DS) helps in the identification of specific stimuli which provoke fear in children with regard to the dental situation. Audiovisual modeling can be successfully used in pediatric dental practice. Aim: To assess the degree of fear provoked by various stimuli in the dental office and to evaluate the effect of audiovisual modeling on dental fear of children using the CFSS-DS. Materials and Methods: Ninety children were divided equally into experimental (group I) and control (group II) groups and were assessed in two visits for their degree of fear and the effect of audiovisual modeling, with the help of the CFSS-DS. Results: The most fear-provoking stimulus for children was injection and the least was to open the mouth and having somebody look at them. There was no statistically significant difference in the overall mean CFSS-DS scores between the two groups during the initial session (P > 0.05). However, in the final session, a statistically significant difference was observed in the overall mean fear scores between the groups (P < 0.01). Significant improvement was seen in group I, while no significant change was noted in case of group II. Conclusion: Audiovisual modeling resulted in a significant reduction of overall fear as well as specific fear in relation to most of the items. A significant reduction of fear toward dentists, doctors in general, injections, being looked at, the sight, sounds, and act of the dentist drilling, and having the nurse clean their teeth was observed.

  6. The audiovisual editing narrative as a basis for the interactive documentary film: new studies

    OpenAIRE

    Mgs. Denis Porto Renó

    2008-01-01

    This paper presents a literature review and experimental results from the pilot doctoral research "audiovisual editing narrative language for the interactive documentary film," which defends the thesis that interactive features exist in the audio and video editing of film, even as an agent that produces interactivity. The search for interactive audiovisual formats is present in international investigations, but under a technological gaze. The author believes that the contribution of this paper is to propose possible formats for interact...

  7. Sight and sound out of synch: fragmentation and renormalisation of audiovisual integration and subjective timing.

    Science.gov (United States)

    Freeman, Elliot D; Ipser, Alberta; Palmbaha, Austra; Paunoiu, Diana; Brown, Peter; Lambert, Christian; Leff, Alex; Driver, Jon

    2013-01-01

    The sight and sound of a person speaking or a ball bouncing may seem simultaneous, but their corresponding neural signals are spread out over time as they arrive at different multisensory brain sites. How subjective timing relates to such neural timing remains a fundamental neuroscientific and philosophical puzzle. A dominant assumption is that temporal coherence is achieved by sensory resynchronisation or recalibration across asynchronous brain events. This assumption is easily confirmed by estimating subjective audiovisual timing for groups of subjects, which is on average similar across different measures and stimuli, and approximately veridical. But few studies have examined normal and pathological individual differences in such measures. Case PH, with lesions in pons and basal ganglia, hears people speak before seeing their lips move. Temporal order judgements (TOJs) confirmed this: voices had to lag lip-movements (by ∼200 msec) to seem synchronous to PH. Curiously, voices had to lead lips (also by ∼200 msec) to maximise the McGurk illusion (a measure of audiovisual speech integration). On average across these measures, PH's timing was therefore still veridical. Age-matched control participants showed similar discrepancies. Indeed, normal individual differences in TOJ and McGurk timing correlated negatively: subjects needing an auditory lag for subjective simultaneity needed an auditory lead for maximal McGurk, and vice versa. This generalised to the Stream-Bounce illusion. Such surprising antagonism seems opposed to good sensory resynchronisation, yet average timing across tasks was still near-veridical. Our findings reveal remarkable disunity of audiovisual timing within and between subjects. To explain this we propose that the timing of audiovisual signals within different brain mechanisms is perceived relative to the average timing across mechanisms. Such renormalisation fully explains the curious antagonistic relationship between disparate timing
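    The renormalisation proposal in this abstract can be made concrete with a toy calculation. The sketch below is hypothetical illustration code (not the authors' model), using only the approximate values reported for patient PH:

```python
# Toy illustration of the renormalisation account: each mechanism's
# subjective timing is read out relative to the average timing across
# mechanisms, so equal and opposite biases leave the cross-task average
# veridical. Numbers are the approximate values reported for patient PH.
def renormalise(timings_ms):
    """Return each task's timing relative to the cross-task mean."""
    mean = sum(timings_ms.values()) / len(timings_ms)
    return {task: t - mean for task, t in timings_ms.items()}, mean

# PH: voices must lag lips by ~200 ms for subjective simultaneity (TOJ),
# but lead by ~200 ms for a maximal McGurk effect.
ph = {"TOJ": +200.0, "McGurk": -200.0}
relative, average = renormalise(ph)
# The cross-task average is 0 ms: timing is still veridical "on average",
# while the two tasks show the antagonistic biases described above.
```

Under this reading, the negative correlation between TOJ and McGurk timing across normal individuals follows from each measure being referenced to the shared cross-mechanism average.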

  8. Towards a Future-Proof Framework for the Protection of Minors in European Audiovisual Media

    Directory of Open Access Journals (Sweden)

    Madeleine de Cock Buning

    2014-12-01

    Full Text Available Legal domains characterized by a high rate of change, driven by societal needs or by economic and technological innovation, pose a constant challenge to their regulatory and supervisory authorities. This contribution aims at turning this challenge into an opportunity by finding regulatory approaches that adapt flexibly to changing realities: it examines a model for a private-public regulatory and enforcement regime for the protection of minors in audiovisual media and defines its conditions.

  9. The New Audiovisual Media Services Directive : Television without Frontiers, Television without Cultural Diversity

    OpenAIRE

    Burri, Mira

    2007-01-01

    After long deliberations, the European Community (EC) has completed the reform of its audiovisual media regulation. The paper examines the main tenets of this reform with particular focus on its implications for the diversity of cultural expressions in the European media landscape. It also takes into account the changed patterns of consumer and business behaviour due to the advances in digital media and their wider spread in society. The paper criticises the somewhat unimaginative approach of...

  10. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing the perception. In communications applications, transmission errors, including packet losses and bit errors, can be a significant source of quality degradation. Also environmental factors, such as background noise, ambient light and display characteristics, pose an impact on perception. A third aspect that has not been widely...

  11. The Application and Significance of Audio-visual Teaching Methods in Party School Education

    Institute of Scientific and Technical Information of China (English)

    钱丽萍

    2009-01-01

    Audio-visual teaching uses the achievements of modern science and technology to develop media that can store and transmit audio and video educational information, adopts advanced teaching methods, and controls the information of the teaching process in order to obtain the best teaching results. Given the particularity, contemporaneity, and practicality of the audiences that Party school education faces, audio-visual teaching methods also have their own special applications and significance.

  12. The Application of Audio-visual Media in Junior High School English Teaching

    Institute of Scientific and Technical Information of China (English)

    江介香

    2012-01-01

    In junior high school English teaching, applying audio-visual media can motivate students' interest in learning English. As a teaching aid, the application of audio-visual media in the English classroom supplements and extends classroom teaching; it helps improve the effectiveness of classroom teaching and is of great significance in cultivating students' comprehensive ability to use English.

  13. HIV/AIDS Coinfection

    Science.gov (United States)

    Approximately 10% of the HIV-infected population ... Control and Prevention website to learn about HIV/AIDS and Viral Hepatitis guidelines and resources.

  14. HIV/AIDS Basics

    Science.gov (United States)


  15. Neurological Complications of AIDS

    Science.gov (United States)

    ... in recent years has improved significantly because of new drugs and treatments. AIDS clinicians often fail to recognize ...

  16. Ciudadanía y competencia audiovisual en La Rioja: Panorama actual en la tercera edad

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2012-09-01

    Full Text Available Society's current consumption of media is generating new ways of interpreting and analyzing the information transmitted through different audiovisual formats. In this study we present, first, the theoretical justification for the current state of media education and, second, the analysis and results regarding the degree of audiovisual competence of the sample of people over 65 from the Autonomous Community of La Rioja (Spain) selected for the study. The main objectives are to evaluate the degree of audiovisual competence of this group, to identify differences between the regional and national samples, and to describe the dimensions of audiovisual literacy. To this end, we analyzed the evaluation criteria for this competence with respect to the dimensions of ideology and values, production and programming, reception and audience, and technology. Finally, we present conclusions that open the door to new approaches to media education practices and future lines of work.

  17. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi Kafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing has some potential role.

  18. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli.

    Science.gov (United States)

    Kim, Jongwan; Wang, Jing; Wedell, Douglas H; Shinkareva, Svetlana V

    2016-01-01

    Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli.

  19. On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

    Directory of Open Access Journals (Sweden)

    Wesley Mattheyses

    2009-01-01

    Full Text Available Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, the perception of mismatches between these two information streams requires experimental exploration since it could degrade the quality of the output. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived visual signal's quality.

  20. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    Directory of Open Access Journals (Sweden)

    Tobias Søren Andersen

    2015-04-01

    Full Text Available Lesions to Broca’s area cause aphasia characterised by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca’s area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca’s area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca’s aphasia did not experience the McGurk illusion suggesting that an intact Broca’s area is necessary for audiovisual integration of speech. Here we describe a patient with Broca’s aphasia who experienced the McGurk illusion. This indicates that an intact Broca’s area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca’s area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke’s aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca’s aphasia.

  1. Brain mechanisms that underlie the effects of motivational audiovisual stimuli on psychophysiological responses during exercise.

    Science.gov (United States)

    Bigliassi, Marcelo; Silva, Vinícius B; Karageorghis, Costas I; Bird, Jonathan M; Santos, Priscila C; Altimari, Leandro R

    2016-05-01

    Motivational audiovisual stimuli such as music and video have been widely used in the realm of exercise and sport as a means by which to increase situational motivation and enhance performance. The present study addressed the mechanisms that underlie the effects of motivational stimuli on psychophysiological responses and exercise performance. Twenty-two participants completed fatiguing isometric handgrip-squeezing tasks under two experimental conditions (motivational audiovisual condition and neutral audiovisual condition) and a control condition. Electrical activity in the brain and working muscles was analyzed by use of electroencephalography and electromyography, respectively. Participants were asked to squeeze the dynamometer maximally for 30 s. A single-item motivation scale was administered after each squeeze. Results indicated that task performance and situational motivation were superior under the influence of motivational stimuli when compared to the other two conditions (~20% and ~25%, respectively). The motivational stimulus downregulated the predominance of low-frequency waves (theta) in the right frontal regions of the cortex (F8), and upregulated high-frequency waves (beta) in the central areas (C3 and C4). It is suggested that motivational sensory cues serve to readjust electrical activity in the brain; a mechanism by which the detrimental effects of fatigue on the efferent control of working muscles are ameliorated.

  2. Audiovisual associations alter the perception of low-level visual motion.

    Science.gov (United States)

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing has some potential role.

  3. Identifying Core Affect in Individuals from fMRI Responses to Dynamic Naturalistic Audiovisual Stimuli

    Science.gov (United States)

    Kim, Jongwan; Wang, Jing; Wedell, Douglas H.

    2016-01-01

    Recent research has demonstrated that affective states elicited by viewing pictures varying in valence and arousal are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). Identification of affective states from more naturalistic stimuli has clinical relevance, but the feasibility of identifying these states on an individual trial basis from fMRI data elicited by dynamic multimodal stimuli is unclear. The goal of this study was to determine whether affective states can be similarly identified when participants view dynamic naturalistic audiovisual stimuli. Eleven participants viewed 5s audiovisual clips in a passive viewing task in the scanner. Valence and arousal for individual trials were identified both within and across participants based on distributed patterns of activity in areas selectively responsive to audiovisual naturalistic stimuli while controlling for lower level features of the stimuli. In addition, the brain regions identified by searchlight analyses to represent valence and arousal were consistent with previously identified regions associated with emotion processing. These findings extend previous results on the distributed representation of affect to multimodal dynamic stimuli. PMID:27598534

  4. High visual resolution matters in audiovisual speech perception, but only for some.

    Science.gov (United States)

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.
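    The visual manipulation described here, low-pass spatial frequency filtering, can be sketched with a minimal FFT-based filter. The cutoff and the random "frame" below are placeholders, not the study's stimuli or parameters:

```python
import numpy as np

# Minimal sketch of low-pass spatial-frequency filtering of a video frame
# (placeholder cutoff and image; the study's actual filter settings differ).
def low_pass(image, cutoff_cycles):
    """Zero out all spatial frequencies above `cutoff_cycles` per image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)  # distance from the DC component
    spectrum[radius > cutoff_cycles] = 0
    return np.fft.ifft2(np.fft.ifftshift(spectrum)).real

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
blurred = low_pass(frame, cutoff_cycles=8)  # fine spatial detail removed
```

Lowering the cutoff removes the fine facial detail (e.g. around the mouth) that, per the results above, high-visual-gain perceivers rely on most.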

  5. Audiovisual integration in near and far space: effects of changes in distance and stimulus effectiveness.

    Science.gov (United States)

    Van der Stoep, N; Van der Stigchel, S; Nijboer, T C W; Van der Smagt, M J

    2016-05-01

    A factor that is often not considered in multisensory research is the distance from which information is presented. Interestingly, various studies have shown that the distance at which information is presented can modulate the strength of multisensory interactions. In addition, our everyday multisensory experience in near and far space is rather asymmetrical in terms of retinal image size and stimulus intensity. This asymmetry is the result of the relation between the stimulus-observer distance and its retinal image size and intensity: an object that is further away is generally smaller on the retina as compared to the same object when it is presented nearer. Similarly, auditory intensity decreases as the distance from the observer increases. We investigated how each of these factors alone, and their combination, affected audiovisual integration. Unimodal and bimodal stimuli were presented in near and far space, with and without controlling for distance-dependent changes in retinal image size and intensity. Audiovisual integration was enhanced for stimuli that were presented in far space as compared to near space, but only when the stimuli were not corrected for visual angle and intensity. The same decrease in intensity and retinal size in near space did not enhance audiovisual integration, indicating that these results cannot be explained by changes in stimulus efficacy or an increase in distance alone, but rather by an interaction between these factors. The results are discussed in the context of multisensory experience and spatial uncertainty, and underline the importance of studying multisensory integration in the depth space.
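    The distance-dependent changes in retinal image size and intensity that this study controls for follow textbook relations, sketched below (an illustrative assumption; the authors' actual stimulus corrections may differ):

```python
import math

# Illustrative textbook relations between observer distance, retinal image
# size and free-field sound level; not the authors' stimulus calibration.
def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended by an object: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

def relative_level_db(distance_m, reference_m=1.0):
    """Free-field sound level change: about -6 dB per doubling of distance."""
    return -20 * math.log10(distance_m / reference_m)

# The same 0.1 m object at 1 m versus 2 m: the visual angle roughly halves,
# and the sound level drops by about 6 dB.
near = visual_angle_deg(0.1, 1.0)
far = visual_angle_deg(0.1, 2.0)
drop = relative_level_db(2.0)
```

Correcting far stimuli for visual angle and intensity, as in the study's control conditions, undoes exactly these two changes, which is what isolates the contribution of distance itself.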

  6. Electrophysiological correlates of individual differences in perception of audiovisual temporal asynchrony.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-06-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability, 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency of the neural encoding of sound onsets.

  7. Visual and Auditory Components in the Perception of Asynchronous Audiovisual Speech.

    Science.gov (United States)

    García-Pérez, Miguel A; Alcalá-Quintana, Rocío

    2015-12-01

    Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect such processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, with a subsequent analysis of the interpretation they provide about how experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8/16 observers, and was found to be nonidentifiable, which renders uninterpretable parameter estimates. The independent-channels model captured asymmetric data, was rejected for only 1/16 observers, and identified how sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.

  8. Visual and audiovisual effects of isochronous timing on visual perception and brain activity.

    Science.gov (United States)

    Marchant, Jennifer L; Driver, Jon

    2013-06-01

    Understanding how the brain extracts and combines temporal structure (rhythm) information from events presented to different senses remains unresolved. Many neuroimaging beat perception studies have focused on the auditory domain and show the presence of a highly regular beat (isochrony) in "auditory" stimulus streams enhances neural responses in a distributed brain network and affects perceptual performance. Here, we acquired functional magnetic resonance imaging (fMRI) measurements of brain activity while healthy human participants performed a visual task on isochronous versus randomly timed "visual" streams, with or without concurrent task-irrelevant sounds. We found that visual detection of higher intensity oddball targets was better for isochronous than randomly timed streams, extending previous auditory findings to vision. The impact of isochrony on visual target sensitivity correlated positively with fMRI signal changes not only in visual cortex but also in auditory sensory cortex during audiovisual presentations. Visual isochrony activated a similar timing-related brain network to that previously found primarily in auditory beat perception work. Finally, activity in multisensory left posterior superior temporal sulcus increased specifically during concurrent isochronous audiovisual presentations. These results indicate that regular isochronous timing can modulate visual processing and this can also involve multisensory audiovisual brain mechanisms.

  9. Representation-based user interfaces for the audiovisual library of the year 2000

    Science.gov (United States)

    Aigrain, Philippe; Joly, Philippe; Lepain, Philippe; Longueville, Veronique

    1995-03-01

    The audiovisual library of the future will be based on computerized access to digitized documents. In this communication, we address the user interface issues which will arise from this new situation. One cannot simply transfer a user interface designed for the piece-by-piece production of an audiovisual presentation and make it a tool for accessing full-length movies in an electronic library. One cannot take a digital sound editing tool and propose it as a means to listen to a musical recording. In our opinion, when computers are used as mediators to existing contents, document representation-based user interfaces are needed. With such user interfaces, a structured visual representation of the document's contents is presented to the user, who can then manipulate it to control perception and analysis of these contents. In order to build such manipulable visual representations of audiovisual documents, one needs to automatically extract structural information from the document's contents. In this communication, we describe possible visual interfaces for various temporal media, and we propose methods for the economically feasible, large-scale processing of documents. The work presented is sponsored by the Bibliotheque Nationale de France: it is part of a program aiming at developing, for image and sound documents, an experimental counterpart to the digitized text reading workstation of this library.

  10. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    Science.gov (United States)

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, and surprisingly, we found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in the unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results show that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged, and therefore determine the perceptual outcome.
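The "optimal combination according to respective spatial reliability" invoked here is usually modeled as inverse-variance weighting of the unisensory estimates. A minimal sketch with made-up variances (not the study's data) shows why visual capture of sound should weaken in the periphery, where visual variance grows:

```python
def fuse(loc_a, var_a, loc_v, var_v):
    """Reliability-weighted (inverse-variance) average of auditory and visual estimates."""
    w_a = 1.0 / var_a          # auditory reliability
    w_v = 1.0 / var_v          # visual reliability
    loc = (w_a * loc_a + w_v * loc_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)    # fused variance is lower than either cue's alone
    return loc, var

# Central vision: vision is far more reliable, so it "captures" the sound.
loc_c, var_c = fuse(loc_a=10.0, var_a=16.0, loc_v=0.0, var_v=1.0)
# Periphery: visual reliability degrades, so the percept drifts back toward audition.
loc_p, var_p = fuse(loc_a=10.0, var_a=16.0, loc_v=0.0, var_v=9.0)
print(round(loc_c, 2), round(loc_p, 2))  # 0.59 3.6
```

With a 10-degree audio-visual conflict, the fused location sits near the visual source centrally (0.59) but shifts toward the auditory source in the periphery (3.6), mirroring the eccentricity-dependent ventriloquist effect the abstract reports.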

  11. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-01-01

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs’ response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs’ early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception. PMID:27734953

  12. A Cross-Linguistic ERP Examination of Audiovisual Speech Perception between English and Japanese

    Directory of Open Access Journals (Sweden)

    Satoko Hisanaga

    2011-10-01

According to recent ERP (event-related potential) studies, visual speech facilitates the neural processing of auditory speech for speakers of European languages in audiovisual speech perception. We examined whether this visual facilitation also holds for Japanese speakers, for whom a weaker susceptibility to visual influence has been reported behaviorally. We conducted a cross-linguistic experiment comparing the ERPs of Japanese and English language groups (JL and EL) when they were presented with audiovisual congruent as well as audio-only speech stimuli. The temporal facilitation by the additional visual speech was observed only for native speech stimuli, suggesting a role of articulatory experience in early ERP components. For native stimuli, the EL showed sustained visual facilitation for about 300 ms from audio onset. In contrast, the visual facilitation was limited to the first 100 ms for the JL, who instead showed a visual inhibitory effect at 300 ms from audio onset. Thus, the native language affects the neural processing of visual speech in audiovisual speech perception. This inhibition is consistent with the behaviorally reported weaker visual influence for the JL.

  13. Neural dynamics of audiovisual synchrony and asynchrony perception in 6-month-old infants

    Directory of Open Access Journals (Sweden)

    Franziska eKopp

    2013-01-01

Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related potentials (ERPs). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as in the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants' ERPs were observed. The results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants' predictive capacities for audiovisual temporal synchrony relations.

  14. Creación colectiva audiovisual y cultura colaborativa online. Proyectos y estrategias

    Directory of Open Access Journals (Sweden)

    Jordi Alberich Pascual

    2012-04-01

This article analyzes the growing development of collectively created audiovisual projects on and through the Internet. It first explores the implications of interactive multimedia systems for redefining the traditional author function, as well as their connection to networked collaborative work strategies. We then focus on the use and development of free audiovisual software resources as a paradigmatic example of the vitality of a growing collaborative culture in the contemporary audiovisual field. Finally, the article concludes by establishing the basic identifying features of three distinct approaches to the tasks and work strategies involved in the collectively created audiovisual projects analyzed in the course of our research.

  15. Inverse Effectiveness and Multisensory Interactions in Visual Event-Related Potentials with Audiovisual Speech

    Science.gov (United States)

    Bushmakin, Maxim; Kim, Sunah; Wallace, Mark T.; Puce, Aina; James, Thomas W.

    2013-01-01

In recent years, it has become evident that neural responses previously considered unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited by viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited by visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal-to-noise ratio, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, we measured a monotonic increase of the amplitude of the visual P1-N1-P2 ERP complex during a spoken-word recognition task with increases in stimulus salience. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that the change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing. PMID:22367585
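The additive-factors comparison described here amounts to contrasting the audiovisual response with the sum of the unisensory responses at each salience level; inverse effectiveness predicts that the relative gain grows as salience falls. A toy illustration with invented amplitudes (not the paper's ERP data):

```python
# Made-up component amplitudes (arbitrary units) at three salience levels.
salience = ["high", "mid", "low"]
amp_av = {"high": 5.2, "mid": 4.1, "low": 3.5}   # audiovisual response
amp_a  = {"high": 2.0, "mid": 1.6, "low": 1.2}   # auditory-only response
amp_v  = {"high": 3.6, "mid": 2.8, "low": 2.0}   # visual-only response

# Multisensory gain: AV minus the summed unisensory responses
gain = {s: amp_av[s] - (amp_a[s] + amp_v[s]) for s in salience}
# Relative gain, normalized by the summed unisensory amplitude
rel_gain = {s: gain[s] / (amp_a[s] + amp_v[s]) for s in salience}
print(rel_gain)
```

With these illustrative numbers the relative gain rises monotonically as salience drops (subadditive at high salience, superadditive at low salience), which is the signature of inverse effectiveness the abstract reports for the N1-P2 component.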

  16. Aid and growth regressions

    DEFF Research Database (Denmark)

    Hansen, Henrik; Tarp, Finn

    2001-01-01

There are, however, decreasing returns to aid, and the estimated effectiveness of aid is highly sensitive to the choice of estimator and the set of control variables. When investment and human capital are controlled for, no positive effect of aid is found. Yet, aid continues to impact on growth via investment. We conclude by stressing the need for more theoretical work before this kind of cross-country regression is used for policy purposes.

  17. A Study on the Usefulness of Audio-Visual Aids in EFL Classroom: Implications for Effective Instruction

    Science.gov (United States)

    Mathew, Nalliveettil George; Alidmat, Ali Odeh Hammoud

    2013-01-01

A resourceful English language teacher equipped with eclecticism is desirable in the English as a foreign language classroom. The challenges of classroom instruction increase when prescribed English as a Foreign Language (EFL) course books (textbooks) are packed with too many interactive language proficiency activities. Most importantly, it has…

  18. On the Classroom Teaching with Electrical Audiovisual Aids%试论课堂教学的电教化

    Institute of Scientific and Technical Information of China (English)

    马碧芳

    2001-01-01

The use of electronic audiovisual media in classroom teaching is a hallmark of modern instruction. This paper offers a preliminary discussion of the characteristics of audiovisual teaching and of how audiovisual lessons are organized and delivered, with the aim of improving teaching effectiveness and optimizing education.

  19. Designing State Aid Formulas

    Science.gov (United States)

    Zhao, Bo; Bradbury, Katharine

    2009-01-01

    This paper designs a new equalization-aid formula based on fiscal gaps of local communities. When states are in transition to a new local aid formula, the issue of whether and how to hold existing aid harmless poses a challenge. The authors show that some previous studies and the formulas derived from them give differential weights to existing and…

  20. Determinants of State Aid

    NARCIS (Netherlands)

    Buiren, K.; Brouwer, E.

    2010-01-01

From economic theory we derive a set of hypotheses on the determination of state aid. Econometric analysis of EU state aid panel data is carried out to test whether the determinants we expect on the basis of theory correspond to the occurrence of state aid in practice in the EU. We find that politi

  1. Fever: First Aid

    Science.gov (United States)

Fever: First aid. By Mayo Clinic Staff. A fever is a rise in body temperature. It's usually a sign of infection. The ... 2 C) or higher. Should I treat a fever? When you or your child is sick, the ...

  2. Stroke: First Aid

    Science.gov (United States)

Stroke: First aid. By Mayo Clinic Staff. A stroke occurs when there's bleeding into your brain or when normal blood flow to ... next several hours. Seek immediate medical assistance. A stroke is a true emergency. The sooner treatment is ...

  3. Aid and Development

    DEFF Research Database (Denmark)

    Tarp, Finn; Arndt, Channing; Jones, Edward Samuel

    inputs. We take as our point of departure a growth accounting analysis and review both intended and unintended effects of aid. Mozambique has benefited from sustained aid inflows in conflict, post-conflict and reconstruction periods. In each of these phases aid has made an unambiguous, positive...

  4. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    Science.gov (United States)

    George, Rohini

Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS facts & figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause a substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study the improvement in the regularity of respiratory motion achieved by breathing coaching for lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed an insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. Modeling the respiratory cycles, it was found that the cosine and cosine-4 models had the best correlation with individual respiratory cycles.
The overall respiratory motion probability distribution
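The cosine and cosine-4 fits mentioned above are commonly written in a cosine-power form, z(t) = z0 − b·cos^(2n)(πt/τ − φ), where n = 1 gives the plain cosine-squared shape and n = 2 the cosine-4 shape. A small sketch with illustrative parameters (not fitted to the thesis's patient traces):

```python
import numpy as np

def resp_motion(t, z0=0.0, b=1.0, tau=4.0, phi=0.0, n=2):
    """Tumour position vs. time: z0 - b * cos^(2n)(pi*t/tau - phi).

    b is the motion amplitude, tau the breathing period in seconds,
    phi a phase offset, and n the cosine power (n=2 -> cos^4 shape).
    """
    return z0 - b * np.cos(np.pi * t / tau - phi) ** (2 * n)

t = np.linspace(0.0, 8.0, 801)   # two 4-second breathing cycles
z = resp_motion(t, n=2)          # cosine-4 trajectory
print(float(z.min()), float(z.max()))  # motion spans roughly [-b, 0]
```

Raising n flattens the trace near z0, so the model spends more of each cycle near one extreme of the motion, which is why higher powers can match the asymmetric dwell of real breathing cycles better than a plain cosine.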

  5. Costless Aids for Language Teaching.

    Science.gov (United States)

    Moody, K. W.

    Intended primarily for language teachers in underfinanced school districts or underdeveloped countries where educational resources are scarce, this article suggests ways and means of using material resources as instructional realia. The author proposes several principles on which the use of audiovisual materials in the classroom should be based.…

6. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the Internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

The digital information society poses new challenges to researchers. As audiovisual communication has become consolidated as a discipline, cultural studies offers an advantageous analytical perspective for approaching the new creative and consumption practices of audiovisual media. This article argues for the study of the audiovisual cultural products this digital society produces, since they are a testimony of the social changes taking place within it. Specifically, it proposes approaching social advertising and objects of collective creation on the Internet as a means of understanding the circumstances of our society.

  7. Conditional Aid Effectiveness

    DEFF Research Database (Denmark)

    Doucouliagos, Hristos; Paldam, Martin

The AEL (aid effectiveness literature) studies the effect of development aid using econometrics on macro data. It contains about 100 papers, of which a third analyze conditional models where aid effectiveness depends upon some variable z, so that aid only works for a certain range of that variable. The key term in this family of AEL models is thus an interaction term of z times aid. The leading candidates for z are a good-policy index and aid itself. In this paper, meta-analysis techniques are used (i) to determine whether the AEL has established the said interaction terms, and (ii) to identify some of the determinants of the differences in results between studies. Taking all available studies into consideration, we find no support for conditionality with respect to policy, while conditionality regarding aid itself is dubious. However, the results differ depending on the authors' institutional affiliation.
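The interaction-term specification discussed here has the form growth = β0 + β1·aid + β2·(aid×z) + ε, so the marginal effect of aid, β1 + β2·z, depends on z. A sketch on synthetic data, purely illustrative and making no claim about real aid effects:

```python
import numpy as np

# Simulate a conditional aid-growth model with z = a policy index in [0, 1].
rng = np.random.default_rng(0)
n = 200
aid = rng.uniform(0.0, 10.0, n)          # aid as % of GDP (synthetic)
policy = rng.uniform(0.0, 1.0, n)        # policy index (synthetic)
growth = 1.0 + 0.1 * aid + 0.3 * aid * policy + rng.normal(0.0, 0.5, n)

# OLS on [1, aid, aid*policy] via least squares
X = np.column_stack([np.ones(n), aid, aid * policy])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

# Marginal effect of aid at a given policy level: d(growth)/d(aid) = b1 + b2*policy
print(beta, beta[1] + beta[2] * 0.5)
```

Because the marginal effect is β1 + β2·z, a significant positive β2 is precisely what "aid works only under good policy" requires; the meta-analysis above asks whether that coefficient survives across the literature.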

  8. Aid and Development

    DEFF Research Database (Denmark)

    Tarp, Finn; Arndt, Channing; Jones, Edward Samuel

This paper considers the relationship between external aid and development in Mozambique from 1980 to 2004. The main objective is to identify the specific mechanisms through which aid has influenced the developmental trajectory of the country and whether one can plausibly link outcomes to aid inputs. We take as our point of departure a growth accounting analysis and review both intended and unintended effects of aid. Mozambique has benefited from sustained aid inflows in conflict, post-conflict and reconstruction periods. In each of these phases aid has made an unambiguous, positive contribution, both enabling and supporting rapid growth since 1992. At the same time, the proliferation of donors and aid-supported interventions has burdened local administration, and there is a distinct need to develop government accountability to its own citizens rather than donor agencies. In ensuring…

  9. China vs. AIDS

    Institute of Scientific and Technical Information of China (English)

    LURUCAI

    2004-01-01

China's first HIV-positive diagnosis was in 1985, the patient an Argentine American. At that time most Chinese, medical workers included, thought of AIDS as a phenomenon occurring outside of China. Twenty years later, the number of HIV/AIDS patients has risen alarmingly. In 2003, the Chinese Ministry of Health launched an AIDS Epidemiological Investigation across China with the support of the WHO and the UN AIDS Program. Its results show that there are currently 840,000 HIV carriers, including 80,000 people with full-blown AIDS, in 31 Chinese provinces, municipalities and autonomous regions. This means China has the second highest number of HIV/AIDS cases in Asia and the 14th highest in the world. Statistics from the Chinese Venereal Disease and AIDS Prevention Association indicate that the majority of Chinese HIV carriers are young to middle-aged, more than half of them between the ages of 20 and 29.

  10. Aid Effectiveness on Growth

    DEFF Research Database (Denmark)

    Doucouliagos, Hristos; Paldam, Martin

The AEL (aid effectiveness literature) comprises econometric studies of the macroeconomic effects of development aid. It contains about 100 papers, of which 68 are reduced-form estimates of the effect of aid on growth in the recipient country. The raw data show that growth is unconnected to aid, but the AEL has put so much structure on the data that all possible results have emerged. The present meta study considers both the best-set of the 68 papers and the all-set of 543 regressions published. Both sets have a positive average aid-growth elasticity, but it is small and insignificant: The AEL has…… between studies is real. In particular, the aid-growth association is stronger for Asian countries, and the aid-growth association is shown to have been weaker in the 1970s.

11. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos / Women and the audiovisual industry today: regression, experimentation and new narrative models

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

This article analyses audiovisual artistic practices in the current context. It first describes the regression of audiovisual practices by women artists: women are present neither as producers, nor as directors, nor as executives in the audiovisual industry, so traditional gender stereotypes are inevitably reconstructed and reinforced. The article then turns to feminist audiovisual art practice in the 1970s and 1980s, when taking up the camera became absolutely necessary, not only to give voice to many women but also to reinscribe absent discourses and articulate a critical discourse on cultural representation. It also analyses how, from the 1990s onward, these practices have explored new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an "expanded field" of exhibition. Finally, the article notes the relationship between feminist audiovisual practices and the complex terrain of globalization and the information society: in the audiovisual medium, the narration of local experience has found a privileged means of addressing problems of difference, identity, race and ethnicity.

  12. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Science.gov (United States)

    2010-07-01

    ... obtain copies of USIA audiovisual records transferred to the National Archives of the United States? 1256.98 Section 1256.98 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION... United States Information Agency Audiovisual Materials in the National Archives of the United...

  13. Processing of Audiovisually Congruent and Incongruent Speech in School-Age Children with a History of Specific Language Impairment: A Behavioral and Event-Related Potentials Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-01-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…

  14. Audiovisual Speech Perception in Infancy: The Influence of Vowel Identity and Infants' Productive Abilities on Sensitivity to (Mis)Matches between Auditory and Visual Speech Cues

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-01-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…

  15. Plantilla 2: Particularidades del documento audiovisual. Orígenes de los servicios de documentación televisivos. El reto de la digitalización

    OpenAIRE

    2011-01-01

Particularities of the physical medium and of the audiovisual message. Origins of audiovisual documentation. Origins of television documentation services. The challenge of digitizing television archives.

  16. Aid and Growth

    DEFF Research Database (Denmark)

    Arndt, Channing; Jones, Edward Samuel; Tarp, Finn

The micro-macro paradox has been revived. Despite broadly positive evaluations at the micro and meso levels, recent literature has turned decidedly pessimistic with respect to the ability of foreign aid to foster economic growth. Policy implications, such as the complete cessation of aid to Africa, are being drawn on the basis of fragile evidence. This paper first assesses the aid-growth literature with a focus on recent contributions. The aid-growth literature is then framed, for the first time, in terms of the Rubin Causal Model, applied at the macroeconomic level. Our results show that aid has a positive and statistically significant causal effect on growth over the long run, with point estimates at levels suggested by growth theory. We conclude that aid remains an important tool for enhancing the development prospects of poor nations.

  17. Aid and Growth

    DEFF Research Database (Denmark)

    Arndt, Channing; Jones, Edward Samuel; Tarp, Finn

    2009-01-01

The micro-macro paradox has been revived. Despite broadly positive evaluations at the micro and meso levels, recent literature has turned decidedly pessimistic with respect to the ability of foreign aid to foster economic growth. Policy implications, such as the complete cessation of aid to Africa, are being drawn on the basis of fragile evidence. This paper first assesses the aid-growth literature with a focus on recent contributions. The aid-growth literature is then framed, for the first time, in terms of the Rubin Causal Model, applied at the macroeconomic level. Our results show that aid has a positive and statistically significant causal effect on growth over the long run, with point estimates at levels suggested by growth theory. We conclude that aid remains an important tool for enhancing the development prospects of poor nations.

  18. AIDS in South Africa.

    Science.gov (United States)

    Ijsselmuiden, C; Evian, C; Matjilla, J; Steinberg, M; Schneider, H

    1993-01-01

The National AIDS Convention in South Africa (NACOSA) in October 1992 was the first real attempt to address HIV/AIDS. In Soweto, government, the African National Congress, nongovernmental organizations, and organized industry and labor representatives worked for 2 days to develop a national plan of action, but it did not result in a united effort to fight AIDS. The highest HIV infection rates in South Africa are among the KwaZulu in Natal, yet the Inkatha Freedom Party did not attend NACOSA. This episode exemplifies the key obstacles South Africa faces in preventing and controlling AIDS. Inequality of access to health care may explain why health workers did not diagnose the first AIDS case in blacks until 1985. Migrant labor, Bantu education, and uprooted communities affect the epidemiology of HIV infection. Further, political and social polarization between blacks and whites contributes to a mindset that AIDS is limited to the other race, which only diminishes the personal and collective sense of susceptibility and the volition and aptitude to act. The Department of National Health and Population Development's voluntary register of anonymously reported cases of AIDS specifies 1517 cumulative AIDS cases (October 1992), but this number is low. Seroprevalence studies show between 400,000-450,000 HIV-positive cases. Public hospitals cannot give AIDS patients AZT and DDI. Few communities provided community-based care. Not all hospitals honor confidentiality and patients' need for autonomy. Even though HIV testing is not mandatory, it is sometimes required, e.g., HIV testing of immigrants. AIDS Training, Information and Counselling Centers are in urban areas, but not in poor areas where the need is most acute. The government just recently developed an AIDS education package for schools, but too many people consider it improper, so it is not being used. The poor-quality education provided to blacks would make it useless anyhow.
Lifting of the academic boycott will allow South African

  19. JPRS Report, Epidemiology, Aids

    Science.gov (United States)

    2007-11-02

at home. For this they can thank the National Federation of Gays and Lesbians (LBL). -To develop an adequate core of specialists in AIDS prevention... Homosexuality [Vusie Ginindza; Mbabane THE TIMES OF SWAZILAND, 15 May 91] ... JPRS-TEP-91-012, 5 June 1991 ... AIDS TANZANIA ... sensitivity and not to sensationalize the issue. Health Workers 'Alarmed' at Rise in AIDS, Homosexuality. SWAZILAND MB1505085891 Mbabane THE TIMES

20. Radiographic imaging of AIDS

    CERN Document Server

    Mahmoud, M B

    2002-01-01

The acquired immunodeficiency syndrome (AIDS) has impacted the civilized world like no other disease. This research aimed to discuss some of the main AIDS-related complications and their detection by radiology tests, specifically central nervous system and musculoskeletal system disorders. The objectives are: to show specific characteristics of various diseases of the HIV patient, to analyze the effect of pathology in patients by radiology, to enhance the knowledge of technologists in AIDS imaging, and to improve communication skills between patient and radiology technologists.

  1. The Impact of Politics 2.0 in the Spanish Social Media: Tracking the Conversations around the Audiovisual Political Wars

    Science.gov (United States)

    Noguera, José M.; Correyero, Beatriz

After the consolidation of weblogs as interactive narratives and producers, audiovisual formats are gaining ground on the Web. Videos are spreading all over the Internet and establishing themselves as a new medium for political propaganda inside social media, with tools as powerful as YouTube. This investigation proceeds in two stages: on the one hand, we examine how these audiovisual formats enjoyed an enormous amount of attention in blogs during the Spanish pre-electoral campaign for the elections of March 2008. On the other hand, the article investigates the social impact of this phenomenon using data from a content analysis of the blog discussion related to these videos, centered on the most popular Spanish political blogs. We also study when audiovisual political messages (made by politicians or by users) are "born" and "die" on the Web and by what rules they do so.

  2. Sharing killed the AVMSD star: the impossibility of European audiovisual media regulation in the era of the sharing economy

    Directory of Open Access Journals (Sweden)

    Indrek Ibrus

    2016-06-01

    Full Text Available The paper focuses on the challenges that the ‘sharing economy’ presents to the updating of the European Union’s (EU) Audiovisual Media Service Directive (AVMSD), part of the EU’s broader Digital Single Market (DSM) strategy. It suggests that the convergence of media markets and the emergence of video-sharing platforms may make the existing regulatory tradition obsolete. It demonstrates an emergent need for regulatory convergence: an AVMSD that creates equal terms for all technical forms of content distribution. It then shows how the operational logic of video-sharing platforms undermines the AVMSD’s aim of creating demand for professionally produced European content, potentially leading to the liberalisation of the EU audiovisual services market. Lastly, it argues that the DSM strategy, combined with sharing-related network effects, may facilitate the evolution of an oligopolistic structure in the EU audiovisual market, potentially harmful to cultural diversity.

  3. Music and hearing aids.

    Science.gov (United States)

    Madsen, Sara M K; Moore, Brian C J

    2014-10-31

    The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems.

  4. THE IMPROVEMENT OF AUDIO-VISUAL BASED DANCE APPRECIATION LEARNING AMONG PRIMARY TEACHER EDUCATION STUDENTS OF MAKASSAR STATE UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Wahira

    2014-06-01

    Full Text Available This research aimed to improve the dance appreciation skills of Primary Teacher Education students of Makassar State University, to improve their perception of audio-visual based art appreciation, and to increase their interest in and responses to the audio-visual based art education subject. This was classroom action research using the design of Kemmis & McTaggart, conducted with 42 students of Primary Teacher Education of Makassar State University. Data were collected through observation, questionnaires, and interviews, and analyzed using descriptive qualitative and quantitative techniques. The results were: (1) the students' achievement in audio-visual based dance appreciation improved: pre-cycle 33.33%, cycle I 42.85%, and cycle II 83.33%; (2) the students' perception of audio-visual based dance appreciation improved: cycle I 59.52% and cycle II 71.42%, and their perception of the subject obtained through structured interviews in cycles I and II was 69.83%, a high category; (3) the students' interest in the art education subject, especially audio-visual based dance appreciation, increased: cycle I 52.38% and cycle II 64.28%, and their interest obtained through structured interviews was 69.50%, a high category; (4) the students' response to audio-visual based dance appreciation increased: cycle I 54.76% and cycle II 69.04%, a good category.

  5. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli, comprising a visual stimulus and an auditory stimulus originating from one of the four locations, were presented simultaneously. These stimuli were presented randomly with equal probability; participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP response over the right temporal and right occipital areas at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a response over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirm that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  6. De la competencia digital y audiovisual a la competencia mediática: dimensiones e indicadores

    Directory of Open Access Journals (Sweden)

    María Amor Pérez Rodríguez

    2012-10-01

    Full Text Available The need to conceptualize media competence leads to a broader perspective in which aspects of audiovisual competence and digital competence converge. Both constitute the frame of reference for "Treatment of information and digital competence", a core competence of the curriculum currently in force in our country. Despite the experiences being carried out in both audiovisual and digital communication, there are still few attempts to define precisely the knowledge, skills, and attitudes needed to be considered competent in each of these fields, which are unavoidable when carrying out teaching-learning processes. This work starts from the analysis of six significant studies on both digital and audiovisual literacy. Considering aspects such as the target audiences, the conceptualization used in each study, the dimensions proposed, the type of taxonomy and indicators, and the didactic proposals (objectives, contents, activities), these are systematized into a series of dimensions and indicators that define media competence and guide the design of activities for a didactic proposal in line with the established indicators. The research carried out allows us to affirm the need for terminological convergence, as well as for the development of resources, based on the defined indicators, that address the different areas of media competence effectively and support didactic interventions in the different groups that make up today's society.

  7. [Audiovisual stimulation in children with severely limited motor function: does it improve their quality of life?].

    Science.gov (United States)

    Barja, Salesa; Muñoz, Carolina; Cancino, Natalia; Núñez, Alicia; Ubilla, Mario; Sylleros, Rodrigo; Riveros, Rodrigo; Rosas, Ricardo

    2013-08-01

    Introduction. Children with neurological diseases that severely limit mobility have a poor quality of life (QoL). Aim. To study whether the QoL of such patients improves with the application of an audiovisual stimulation program. Patients and methods. Prospective study of nine children, six of them boys (mean age: 42.6 ± 28.6 months), with severely limited mobility and prolonged hospitalization. Two audiovisual stimulation programs were developed and, together with videos, delivered through a specially designed structure, twice a day for 10 minutes over 20 days: passively for the first ten days and guided by the observer for the second ten. Biological, behavioral, and cognitive variables were recorded, and an adapted QoL questionnaire was administered. Results. Three cases of spinal muscular atrophy, two of congenital muscular dystrophy, two of myopathy, and two with other diagnoses were included. Eight patients completed follow-up. At baseline, QoL was fair (7.2 ± 1.7 points; median: 7.0; range: 6-10) and improved to good by the end (9.4 ± 1.2 points; median: 9.0; range: 8-11), with an intra-individual difference of 2.1 ± 1.6 (median: 2.5; range: -1 to 4; 95% CI = 0.83-3.42; p = 0.006). Improvement in cognition and a favorable perception among caregivers were detected. There was no change in biological or behavioral variables. Conclusion. Audiovisual stimulation can improve the quality of life of children with severely limited mobility.

  8. SU-E-J-29: Audiovisual Biofeedback Improves Tumor Motion Consistency for Lung Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D; Pollock, S; Makhija, K; Keall, P [The University of Sydney, Camperdown, NSW (Australia); Greer, P [The University of Newcastle, Newcastle, NSW (Australia); Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Arm, J; Hunter, P [Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Kim, T [The University of Sydney, Camperdown, NSW (Australia); University of Virginia Health System, Charlottesville, VA (United States)

    2014-06-01

    Purpose: To investigate whether the breathing-guidance system of audiovisual (AV) biofeedback improves tumor motion consistency for lung cancer patients. This would minimize respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated in five lung cancer patients (age: 55 to 64), who underwent a training session to familiarize themselves with AV biofeedback, followed by two MRI sessions on different dates (pre- and mid-treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from the image pixel values of the 2D images after per-dataset normalization and per-image Gaussian filtering. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of the average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion, towards more accurate medical imaging and radiation therapy procedures.
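    The motion-extraction step this record describes (normalization per dataset, Gaussian smoothing per image, then reading off a superior-inferior tumor position from pixel values) could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the centroid estimator, array shapes, and `sigma` are all assumptions.

```python
import numpy as np

def _gauss_kernel(sigma):
    # Truncated, normalized 1D Gaussian kernel (radius ~ 3 sigma).
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth2d(img, sigma):
    # Separable Gaussian smoothing: convolve columns, then rows.
    k = _gauss_kernel(sigma)
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, img, k, mode="same")

def si_positions(frames, sigma=2.0):
    """Per-frame superior-inferior (row-axis) tumor position, taken
    here as the intensity-weighted centroid after per-dataset
    normalization and per-image Gaussian smoothing."""
    frames = np.asarray(frames, dtype=float)
    frames = (frames - frames.min()) / (frames.max() - frames.min())
    rows = np.arange(frames.shape[1])
    out = []
    for f in frames:
        f = smooth2d(f, sigma)
        profile = f.sum(axis=1)        # collapse onto the SI axis
        out.append((rows * profile).sum() / profile.sum())
    return np.array(out)

def motion_range(si):
    # Peak-to-trough SI motion range, the quantity compared
    # between free-breathing and AV-biofeedback sessions.
    return si.max() - si.min()
```

    Comparing `motion_range` (and the breathing period) across repeated scans is then one way to quantify the FB-versus-AV consistency the abstract reports.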

  9. Globalización y diversidad cultural en la política audiovisual europea

    OpenAIRE

    2002-01-01

    The audiovisual policy of the European Union seeks to confront the risks that globalization poses to cultural diversity. To this end, it relies on a series of legislative and policy measures that need to be contextualized and assessed against the objectives set by the Community institutions. Their study and analysis prompts reflection on the interest and suitability of these measures and on their grounding in the European context.

  10. Tiempo de crisis. El patrimonio audiovisual valenciano frente al cambio tecnológico

    Directory of Open Access Journals (Sweden)

    Lahoz Rodrigo, Juan Ignacio

    2014-07-01

    Full Text Available After three decades of self-government, the Generalitat Valenciana has created, promoted, compiled, and restored an audiovisual heritage of incalculable cultural interest, whose two main conservation centers are the Filmoteca of CulturArts-IVAC and the RTVV archive. This heritage is at a critical point, as its technological transformation must be faced at a time of great economic and political difficulty. The closure of RTVV and the uncertainty over the future of its archive lead us to set its heritage status against the temptation to privatize its management, and to recall the recommendations of the EU and UNESCO that public, non-profit archives should safeguard moving images. If the fragility of film, video, and digital image files is the key issue for their long-term conservation, even more decisive today is the dominance of digital technology in every sphere of the generation, access, and conservation of audiovisual production, since it carries a pattern of obsolescence that could paralyze Valencian audiovisual heritage if the Generalitat does not confront it immediately and decisively. Equipping the Filmoteca of CulturArts-IVAC with the technology needed to digitize its holdings, continuing the digitization plans for the RTVV archive and encouraging those of all audiovisual archives in the Comunitat Valenciana, reinforcing, in line with EU recommendations, the conservationist emphasis of instruments such as public production subsidies and legal deposit, and promoting the development of the Catalogue of Valencian Audiovisual Heritage are measures that should contribute to the long-term conservation of our heritage.

  11. Análisis del museo como narración audiovisual

    OpenAIRE

    2011-01-01

    Museums adapt to the times, appropriating audiovisual resources with discursive proposals for interaction and learning. This article outlines several lines of research centered on the museum as audiovisual narrative, drawing on Communication Theory and Film Analysis among other perspectives, with the Museo CajaGRANADA Memoria de Andalucía as an example. Keywords: analysis, visual narrative, media, museum, visual culture.

  12. Política audiovisual europea y diversidad cultural en la era digital

    Directory of Open Access Journals (Sweden)

    Ma. Trinidad García Leiva

    2016-01-01

    Full Text Available This article studies the implementation of the 2005 UNESCO Convention on cultural diversity in the formulation of European policies for digital audiovisual media. Based on a critical documentary analysis, the treaty's influence is confirmed, although more in the sphere of promotion than of protection, and with a function that legitimizes what already exists rather than generating new initiatives.

  13. Las relaciones entre cine, cultura e historia: una perspectiva de investigación audiovisual

    Directory of Open Access Journals (Sweden)

    Edward Goyeneche-Gómez

    2012-01-01

    Full Text Available This article analyzes an audiovisual research perspective grounded in the study of the relations between cinema, culture, and history, which makes it possible to understand how contemporary societies construct and use, within complex historical processes, specific modes of filmic representation and codification linked to cultural and aesthetic models that depend on broader ideological systems.

  14. THE GALICIAN AUDIOVISUAL IN MESTRE MATEO AWARDS. PROTOCOL AT THE CEREMONY

    Directory of Open Access Journals (Sweden)

    Anna Amoros Pons

    2013-11-01

    Full Text Available The text summarizes the main results of a research study that forms part of a broader project on specialized public relations and ceremonial protocol in cinema as a communication strategy (indirectly persuasive and/or covert) at film events, specifically at awards ceremonies. In this article we focus on the geographical context of the Galician audiovisual industry and on the Mestre Mateo Awards ceremony in the first decade of the 21st century. Information on the planning of this event and the communication results obtained from its realization is provided.

  15. An interactive audio-visual installation using ubiquitous hardware and web-based software deployment

    Directory of Open Access Journals (Sweden)

    Tiago Fernandes Tavares

    2015-05-01

    Full Text Available This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims to be deployable using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware built into most modern personal computers. This scenario implies specific technical restrictions, which lead to solutions combining both the technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the experience is interesting and engaging, despite the minimal hardware.

  16. PHYSIOLOGICAL MONITORING OF ACS OPERATORS DURING AUDIO-VISUAL SIMULATION OF AN EMERGENCY

    Directory of Open Access Journals (Sweden)

    S. S. Aleksanin

    2010-01-01

    Full Text Available Using a ship-simulator automated control system (ACS), we investigated the informative value of physiological monitoring of cardiac rhythm for assessing the reliability and noise immunity of operators of various specializations during audio-visual simulation of an emergency. In parallel, we studied the effectiveness of protection against the adverse effects of electromagnetic fields. Monitoring cardiac rhythm during a virtual crash makes it possible to differentiate, by specialization, the degree of strain on the operators' bodily regulatory systems, and to note the positive effect of using means of protection against electromagnetic field exposure.

  17. Temporalitats digitals. Aproximació a una teoria del temps cinemàtic en les obres audiovisuals interactives

    OpenAIRE

    Sora, Carles

    2015-01-01

    This thesis presents a theoretical approach to the study of cinematic time in interactive audiovisual works from two perspectives: narrative structuring, its uses and treatment of time; and the way time is experienced and perceived. The constant advance of information and communication technologies has changed how we have conceived of and used time through moving images over the course of history. Digital media now generate multipl...

  18. Constructing a survey over time: Audio-visual feedback and theatre sketches in rural Mali

    Directory of Open Access Journals (Sweden)

    Véronique Hertrich

    2011-10-01

    Full Text Available Knowledge dissemination is an emerging issue in population studies, both in terms of ethics and of data quality. The challenge is especially important in long-term follow-up surveys, and it requires methodological imagination when the population is illiterate. The paper presents the dissemination project developed in a demographic surveillance system implemented in rural Mali over the last 20 years. After initial experience with document transfer, the feedback strategy was developed through audiovisual shows and theatre sketches. The advantages and drawbacks of these media are discussed in terms of scientific communication and of building dialogue with the target population.

  19. Herramienta observacional para el estudio de conductas violentas en un cómic audiovisual

    Directory of Open Access Journals (Sweden)

    Zaida Márquez

    2012-01-01

    Full Text Available This research paper presents a study aimed at structuring a system of categories for observing and describing violent behavior in an audiovisual children's program, specifically a cartoon. One chapter of a cartoon featuring three main female characters was chosen at random for observation. Categories were established using the taxonomic criteria proposed by Anguera (2001) and were made up of behaviors typed according to levels of response. To identify a stable behavioral pattern, event sampling was used, taking into account every occurrence of one or several behaviors registered in the observed sessions. The episode was analyzed by two observers who viewed the material simultaneously, making two observations, registering the relevant data, and contrasting opinions. The researchers determined a set of categories expressing violent behavior: nonverbal behavior, special behavior, and vocal/verbal behavior. It was concluded that there was a predominant and stable pattern of violent behavior in the cartoon observed.

  20. Telepuebla y Ebarrios televisión: dos experiencias de comunicación audiovisual

    OpenAIRE

    2005-01-01

    Do we teach what we know, or what our students really need? In the Information Society, many of us teachers continue teaching reading to students who, to a large extent, will not read in adulthood; these students spend about a thousand hours a year watching television, more time than they spend in class. Since audiovisual illiteracy can leave them defenseless against television messages, the school must adapt to the new reality and commit itself to its...

  1. Confession Function of Synchronized Audiovisual Recordings

    Institute of Scientific and Technical Information of China (English)

    谢小剑; 颜翔

    2014-01-01

    In practice, synchronized audiovisual recordings are mostly used as audiovisual materials to prove the authenticity and legality of interrogation records, while it is overlooked that such recordings, as videotaped records of confessions, capture an enormous amount of non-textual information from the questioning. Synchronized audiovisual recordings have distinctive confession functions: uncovering clues, finding breakthroughs in cases, and directly proving the existence of criminal facts by serving as the criminal suspect's confession. For the procuratorate, treating synchronized audiovisual recordings as the suspect's confession would better address the shortage of evidence in self-investigated criminal cases. To activate this confession function, investigators' ability to analyze and assess audiovisual recordings needs to be enhanced, efforts should be made to introduce such recordings in court as confession evidence, and judges' cognitive bias when viewing them should be avoided.

  2. Documentary Realism, Sampling Theory and Peircean Semiotics: electronic audiovisual signs (analog or digital as indexes of reality

    Directory of Open Access Journals (Sweden)

    Hélio Godoy

    2007-07-01

    Full Text Available This paper addresses Documentary Realism, focusing on the physical phenomena of transduction that take place in analog and digital audiovisual systems, herein analyzed in the light of the Sampling Theory, within the framework of Shannon and Weaver's Information Theory. Transduction is a process by which one type of energy is transformed into another, or by which information is transcodified. Within the scope of Documentary Realism, it cannot be claimed that electronic audiovisual signs, because of their technical digital features, lead to a rupture with reality. Rather, the digital documentary, based on electronic digital cinematography, is still an index of reality.
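    The Sampling Theory claim invoked in this record (a band-limited signal sampled above the Nyquist rate can be reconstructed exactly, so digitization need not sever the indexical link to what was recorded) can be illustrated with a toy Whittaker-Shannon reconstruction. The sampling rate, test tone, and trace length below are illustrative assumptions, not anything from the paper:

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal
    from samples taken at rate fs (Hz), evaluated at times t (s)."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sinc(x) = sin(pi x) / (pi x).
    return (samples[None, :] * np.sinc(fs * t[:, None] - n)).sum(axis=1)

fs = 100.0                          # sampling rate in Hz (assumed)
f0 = 5.0                            # tone well below Nyquist (50 Hz)
n = np.arange(64)
samples = np.sin(2 * np.pi * f0 * n / fs)

t = np.linspace(0.1, 0.5, 200)      # evaluate between sample points
rec = sinc_reconstruct(samples, fs, t)
# Away from the edges of the finite sample set, the reconstruction
# closely matches the original continuous tone.
err = np.max(np.abs(rec - np.sin(2 * np.pi * f0 * t)))
```

    The small interior error (it is nonzero only because the sample set is finite) is the formal counterpart of the paper's argument that digital sampling preserves, rather than breaks, the signal's tie to its source.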

  3. Identity, culture and development through participatory audiovisual: The Youth Path Project case from Costa Rica’s UNESCO

    Directory of Open Access Journals (Sweden)

    Ángel V. Rabadán

    2015-06-01

    Full Text Available In this article we present the use of audiovisual media as a strategic element capable of integrating the concepts of culture and development, promoting intercultural dialogue and participation. The concept of cultural identity is present through the coexistence and creativity of the young people participating in the "Youth Path" program proposed by UNESCO and developed in Central America to promote strategies of development and inclusion. Ethnographic audiovisual work serves as a fundamental tool for generating knowledge processes, communication links, and interaction.

  4. Effectiveness of respiratory-gated radiotherapy with audio-visual biofeedback for synchrotron-based scanned heavy-ion beam delivery

    Science.gov (United States)

    He, Pengbo; Li, Qiang; Zhao, Ting; Liu, Xinguo; Dai, Zhongying; Ma, Yuanyuan

    2016-12-01

    A synchrotron-based heavy-ion accelerator operates in pulse mode at a low repetition rate that is comparable to a patient’s breathing rate. To overcome inefficiencies and interplay effects between the residual motion of the target and the scanned heavy-ion beam delivery process in conventional free-breathing (FB) gating therapy, a novel respiratory guidance method was developed to help patients synchronize their breathing patterns with the synchrotron excitation patterns by performing short breath holds with the aid of a personalized audio-visual biofeedback (BFB) system. The purpose of this study was to evaluate the treatment precision, efficiency, and reproducibility of the respiratory guidance method in scanned heavy-ion beam delivery mode. Using 96 breathing traces from eight healthy volunteers, who were asked to breathe freely and then guided to perform short breath holds with the aid of BFB, a series of dedicated four-dimensional dose calculations (4DDC) was performed on a geometric model developed under the assumption of a linear relationship between external surrogate and internal tumor motion. The outcome of the 4DDCs was quantified in terms of treatment time, dose-volume histograms (DVH), and a dose homogeneity index. Our results show that with the respiratory guidance method the treatment efficiency increased by a factor of 2.23-3.94 compared with FB gating, depending on the duty cycle settings. The magnitude of dose inhomogeneity for the respiratory guidance method was 7.5 times less than that of non-gated irradiation, and good reproducibility of breathing guidance among fractions was achieved. Thus, our study indicates that the respiratory guidance method not only improved the overall treatment efficiency of respiratory-gated scanned heavy-ion beam delivery, but also offered lower dose uncertainty and better reproducibility among fractions.
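    The duty-cycle dependence mentioned in this record can be made concrete with a toy amplitude-gating calculation: the beam is on only while the breathing amplitude sits inside a gating window, and delivery time scales roughly inversely with that beam-on fraction. The breathing trace, rate, and window below are illustrative assumptions, not the study's data:

```python
import numpy as np

def gating_duty_cycle(trace, low, high):
    """Beam-on fraction for amplitude gating: the fraction of time the
    breathing amplitude lies inside the gating window [low, high]."""
    trace = np.asarray(trace)
    return float(np.mean((trace >= low) & (trace <= high)))

# Toy trace: sinusoidal breathing at 15 breaths/min (assumed), gated
# around end-exhale (the minimum of the trace).
t = np.linspace(0.0, 10.0, 1000)
trace = np.sin(2 * np.pi * 0.25 * t)
dc = gating_duty_cycle(trace, -1.0, -0.7)
# Relative treatment time scales roughly as 1 / dc, which is why
# raising the duty cycle (e.g. via guided breath holds that park the
# trace inside the window) shortens pulsed synchrotron delivery.
```

    Under this rough model, holding the breath inside the window for longer stretches raises `dc` and cuts delivery time, in the direction of the 2.23-3.94x efficiency gains the abstract reports.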

  5. Magnetic Implants Aid Hearing

    Institute of Scientific and Technical Information of China (English)

    陈宏

    1995-01-01

    The next generation of hearing aids may use tiny magnets that fit inside the ear. Researchers at a California company and an engineer at the University of Virginia are both developing systems that rely on magnets to convey sounds. Conventional hearing aids have three components: a microphone, an amplifier, and a speaker. The microphone picks up sounds and sends them to the amplifier.

  6. Genetic Immunity to AIDS

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In an article on genetic immunity to AIDS published in Science magazine, American and Chinese scientists claim to have discovered why certain HIV carriers do not develop full-blown AIDS. They say that the key to this conundrum lies in a particular protein in the endocrine system that inhibits development of HIV.

  7. Aid and sectoral growth

    DEFF Research Database (Denmark)

    Selaya, Pablo; Thiele, Rainer

    2010-01-01

    This article examines empirically the proposition that aid to poor countries is detrimental for external competitiveness, giving rise to Dutch disease type effects. At the aggregate level, aid is found to have a positive effect on growth. A sectoral decomposition shows that the effect is (i...

  8. International Aid to Education

    Science.gov (United States)

    Benavot, Aaron

    2010-01-01

    Recent evidence highlights several worrisome trends regarding aid pledges and disbursements, which have been exacerbated by the global financial crisis. First, while overall development assistance rose in 2008, after 2 years of decline, the share of all sector aid going to the education sector has remained virtually unchanged at about 12 percent…

  9. AIDS Epidemiological models

    Science.gov (United States)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].

  10. Aid and Income

    DEFF Research Database (Denmark)

    Lof, Matthijs; Mekasha, Tseday Jemaneh; Tarp, Finn

    2015-01-01

    …to nonrandom omission of a large proportion of observations. Furthermore, we show that NDHKM’s use of co-integrated regressions is not a suitable empirical strategy for estimating the causal effect of aid on income. Evidence from a Panel VAR model estimated on the dataset of NDHKM suggests a positive and statistically significant long-run effect of aid on income.

  11. First Aid: Burns

    Science.gov (United States)

    Scald burns from hot water and other liquids are the most common burns in early childhood. Because burns range from mild …

  12. The Aid Effectiveness Literature

    DEFF Research Database (Denmark)

    Doucouliagos, Hristos; Paldam, Martin

    The AEL consists of empirical macro studies of the effects of development aid. At the end of 2004 it had reached 97 studies of three families, which we have summarized in one study each using meta-analysis. Studies of the effect on investments show that they rise by 1/3 of the aid – the rest is c...

  13. First Aid: Diaper Rash

    Science.gov (United States)

    Diaper rash is a common skin condition in babies. Most often the rash is due to irritation caused by the diaper, but it can have other causes not related …

  15. Aid, Development, and Education

    Science.gov (United States)

    Klees, Steven J.

    2010-01-01

    The world faces pervasive poverty and inequality. Hundreds of billions of dollars in international aid have been given or loaned to developing countries through bilateral and multilateral mechanisms, at least ostensibly in order to do something about these problems. Has such aid helped? Debates around this question have been ongoing for decades,…

  16. HIV and AIDS

    Science.gov (United States)

    …one HIV test by the time they are teens. If you are having sex, have had sex in the past, or shared …

  17. First Aid: Falls

    Science.gov (United States)

    With all the running, climbing, and exploring kids …

  18. AIDS as Metaphor.

    Science.gov (United States)

    McMillen, Liz

    1994-01-01

    Scholarly interest in Acquired Immune Deficiency Syndrome (AIDS) has spread throughout the humanities, attracting the attention of historians of medicine, political scientists, sociologists, public health scholars, and anthropologists. Most theorists hope their research will aid in policymaking or change understanding of the epidemic. (MSE)

  19. Alfasecuencialización: la enseñanza del cine en la era del audiovisual Sequential literacy: the teaching of cinema in the age of audio-visual speech

    Directory of Open Access Journals (Sweden)

    José Antonio Palao Errando

    2007-10-01

    Full Text Available In the so-called «information society», film studies have been diluted in the pragmatic and technological approach to audiovisual discourse, just as the enjoyment of cinema itself has been caught in the net of DVD and hypertext. Cinema reacts to this through complex narrative structures that distance it from standard audiovisual discourse. The function of film studies, and of their teaching at university, should be the reintroduction of the subject rejected by informative knowledge, by means of the interpretation of the film text.

  20. The presentation of expert testimony via live audio-visual communication.

    Science.gov (United States)

    Miller, R D

    1991-01-01

    As part of a national effort to improve efficiency in court procedures, the American Bar Association has recommended, on the basis of a number of pilot studies, increased use of current audio-visual technology, such as telephone and live video communication, to eliminate delays caused by unavailability of participants in both civil and criminal procedures. Although these recommendations were made to facilitate court proceedings, and for the convenience of attorneys and judges, they also have the potential to save significant time for clinical expert witnesses. The author reviews the studies of telephone testimony conducted by the American Bar Association and other legal research groups, as well as the experience of one state forensic evaluation and treatment center, and reviews the case law on the issue of remote testimony. He then presents data from a national survey of state attorneys general concerning the admissibility of testimony via audio-visual means, including video depositions. Finally, he concludes that the option to testify by telephone provides a significant saving of precious clinical time for forensic clinicians in public facilities, and urges that such clinicians work actively to convince courts and/or legislatures in states that do not permit such testimony (currently the majority) to consider accepting it, to improve the effective use of scarce clinical resources in public facilities.

  1. earGram Actors: An Interactive Audiovisual System Based on Social Behavior

    Directory of Open Access Journals (Sweden)

    Peter Beyls

    2015-11-01

    Full Text Available In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems with artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns. On the other hand, earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A performer thus improvises within the behavioural scope of the system while evaluating the apparent connections between parameter values and actual complexity of the system output.

  2. Policing Fish at Boston's Museum of Science: Studying Audiovisual Interaction in the Wild.

    Science.gov (United States)

    Goldberg, Hannah; Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2015-08-01

    Boston's Museum of Science supports researchers whose projects advance science and provide educational opportunities to the Museum's visitors. For our project, 60 visitors to the Museum played "Fish Police!!," a video game that examines audiovisual integration, including the ability to ignore irrelevant sensory information. Players, who ranged in age from 6 to 82 years, made speeded responses to computer-generated fish that swam rapidly across a tablet display. Responses were to be based solely on the rate (6 or 8 Hz) at which a fish's size modulated, sinusoidally growing and shrinking. Accompanying each fish was a task-irrelevant broadband sound, amplitude modulated at either 6 or 8 Hz. The rates of visual and auditory modulation were either Congruent (both 6 Hz or 8 Hz) or Incongruent (6 and 8 or 8 and 6 Hz). Despite being instructed to ignore the sound, players of all ages responded more accurately and faster when a fish's auditory and visual signatures were Congruent. In a controlled laboratory setting, a related task produced comparable results, demonstrating the robustness of the audiovisual interaction reported here. Some suggestions are made for conducting research in public settings.

  3. Asynchrony adaptation reveals neural population code for audio-visual timing.

    Science.gov (United States)

    Roach, Neil W; Heron, James; Whitaker, David; McGraw, Paul V

    2011-05-01

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible--adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects.
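    The population-code account summarized here can be illustrated with a toy decoder. The delay range, tuning width, and gain values below are illustrative assumptions, not parameters from the study.

    ```python
    import numpy as np

    def population_readout(delay_ms, centers, sigma=80.0, gain=None):
        """Decode an audio-visual delay from a small bank of delay-tuned units.

        Each unit responds with a Gaussian tuning curve centered on its
        preferred delay; the read-out is the response-weighted average of
        the preferred delays (a simple population-vector decoder).
        """
        gain = np.ones_like(centers, dtype=float) if gain is None else gain
        responses = gain * np.exp(-0.5 * ((delay_ms - centers) / sigma) ** 2)
        return float((responses * centers).sum() / responses.sum())

    # A sparse bank of units tuned to sub-second delays (audio-leading negative).
    centers = np.linspace(-500.0, 500.0, 11)

    # Baseline: symmetric gains give an unbiased estimate of simultaneity.
    baseline = population_readout(0.0, centers)

    # Model "adaptation" as a gain reduction in units tuned near the adapted
    # delay (-100 ms): the decoded delay for physically simultaneous stimuli
    # then shifts away from the adapter, mimicking a perceptual after-effect.
    gain = 1.0 - 0.5 * np.exp(-0.5 * ((centers - (-100.0)) / 100.0) ** 2)
    adapted = population_readout(0.0, centers, gain=gain)
    ```

    With these illustrative numbers `adapted` is pushed toward positive delays relative to `baseline`, which is the qualitative signature the abstract attributes to adaptation acting on neuronal response gain.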

  4. A comparison between audio and audiovisual distraction techniques in managing anxious pediatric dental patients

    Directory of Open Access Journals (Sweden)

    Prabhakar A

    2007-01-01

    Full Text Available Pain is not the sole reason for fear of dentistry. Anxiety, or fear of the unknown during dental treatment, is a major factor and has long been a concern for dentists. The main aim of this study was therefore to evaluate and compare two distraction techniques, audio distraction and audiovisual distraction, in the management of anxious pediatric dental patients. Sixty children aged 4-8 years were divided into three groups. Each child had four dental visits: a screening visit, a prophylaxis visit, a cavity preparation and restoration visit, and an extraction visit. Each child's anxiety level at each visit was assessed using a combination of four measures: Venham's picture test, Venham's rating of clinical anxiety, pulse rate, and oxygen saturation. The values obtained were tabulated and subjected to statistical analysis. It was concluded that the audiovisual distraction technique was more effective than the audio distraction technique in managing anxious pediatric dental patients.

  5. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object like a bell and shakes it, or grasps an object like a stick and beats a drum, in a periodic or non-periodic motion, so that the object emits periodic or non-periodic events. To create a more realistic scenario, we put another event source (a metronome) in the environment. As a result, we had a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signal) relating to robot motion (efferent signal).

  6. ANALYSIS OF MULTIMODAL FUSION TECHNIQUES FOR AUDIO-VISUAL SPEECH RECOGNITION

    Directory of Open Access Journals (Sweden)

    D.V. Ivanko

    2016-05-01

    Full Text Available The paper provides an analytical review covering the latest achievements in the field of audio-visual (AV) fusion (integration) of multimodal information. We discuss the main challenges and report on approaches to address them. One of the most important tasks of AV integration is to understand how the modalities interact and influence each other. The paper addresses this problem in the context of AV speech processing and speech recognition. In the first part of the review we set out the basic principles of AV speech recognition and give a classification of the audio and visual features of speech. Special attention is paid to the systematization of existing techniques and AV data fusion methods. In the second part we provide a consolidated list of tasks and applications that use AV fusion, based on our analysis of the research area, and indicate the methods, techniques, and audio and video features used. We propose a classification of AV integration and discuss the advantages and disadvantages of the different approaches. We draw conclusions and offer our assessment of the future of the field of AV fusion. In further research we plan to implement a system for audio-visual Russian continuous speech recognition using advanced methods of multimodal fusion.

  7. Spectacular Attractions: Museums, Audio-Visuals and the Ghosts of Memory

    Directory of Open Access Journals (Sweden)

    Mandelli Elisa

    2015-12-01

    Full Text Available In the last decades, moving images have become a common feature not only in art museums, but also in a wide range of institutions devoted to the conservation and transmission of memory. This paper focuses on the role of audio-visuals in the exhibition design of history and memory museums, arguing that they are privileged means to achieve the spectacular effects and the visitors’ emotional and “experiential” engagement that constitute the main objective of contemporary museums. I will discuss this topic through the concept of “cinematic attraction,” claiming that when embedded in displays, films and moving images often produce spectacular mises en scène with immersive effects, creating wonder and astonishment, and involving visitors on an emotional, visceral and physical level. Moreover, I will consider the diffusion of audio-visual witnesses of real or imaginary historical characters, presented in Phantasmagoria-like displays that simulate ghostly and uncanny apparitions, creating an ambiguous and often problematic coexistence of truth and illusion, subjectivity and objectivity, facts and imagination.

  8. Kernel-Based Sensor Fusion With Application to Audio-Visual Voice Activity Detection

    Science.gov (United States)

    Dov, David; Talmon, Ronen; Cohen, Israel

    2016-12-01

    In this paper, we address the problem of multiple view data fusion in the presence of noise and interferences. Recent studies have approached this problem using kernel methods, by relying particularly on a product of kernels constructed separately for each view. From a graph theory point of view, we analyze this fusion approach in a discrete setting. More specifically, based on a statistical model for the connectivity between data points, we propose an algorithm for the selection of the kernel bandwidth, a parameter, which, as we show, has important implications on the robustness of this fusion approach to interferences. Then, we consider the fusion of audio-visual speech signals measured by a single microphone and by a video camera pointed to the face of the speaker. Specifically, we address the task of voice activity detection, i.e., the detection of speech and non-speech segments, in the presence of structured interferences such as keyboard taps and office noise. We propose an algorithm for voice activity detection based on the audio-visual signal. Simulation results show that the proposed algorithm outperforms competing fusion and voice activity detection approaches. In addition, we demonstrate that a proper selection of the kernel bandwidth indeed leads to improved performance.
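    The product-of-kernels construction this abstract builds on can be sketched in a few lines. Note the hedge: the bandwidth below uses the common median-distance heuristic, not the noise-aware selection algorithm the paper proposes, and the function names are illustrative.

    ```python
    import numpy as np

    def gaussian_kernel(X, bandwidth):
        """Gaussian affinity matrix for one view (rows of X are samples)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / bandwidth)

    def median_bandwidth(X):
        """Median pairwise squared distance: a standard default heuristic.
        The paper's point is precisely that a smarter choice matters."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        return np.median(d2[d2 > 0])

    def fused_diffusion_operator(X_audio, X_video):
        # Element-wise product of per-view kernels: two samples are strongly
        # connected only if they are close in BOTH views, so an interference
        # visible in a single view (keyboard taps in the audio, lip motion
        # without speech in the video) is damped.
        K = gaussian_kernel(X_audio, median_bandwidth(X_audio)) * \
            gaussian_kernel(X_video, median_bandwidth(X_video))
        # Row-normalize to a random-walk (diffusion) operator; its leading
        # eigenvectors give a fused low-dimensional representation.
        return K / K.sum(axis=1, keepdims=True)
    ```

    A detector built on such an operator would then threshold the fused representation to separate speech from non-speech segments.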

  9. Pre-stimulus beta and gamma oscillatory power predicts perceived audiovisual simultaneity.

    Science.gov (United States)

    Yuan, Xiangyong; Li, Haijiang; Liu, Peiduo; Yuan, Hong; Huang, Xiting

    2016-09-01

    Pre-stimulus oscillatory activity in the brain continuously fluctuates, but it is correlated with subsequent behavioral and perceptual performance. Here, using fast Fourier transformation of pre-stimulus electroencephalograms, we explored how oscillatory power modulates the subsequent discrimination of perceived simultaneity from non-simultaneity in the audiovisual domain. We found that over-scalp high beta (20-28 Hz), parieto-occipital low beta (14-20 Hz), and high gamma oscillations (55-80 Hz) were significantly stronger before audition-then-vision sequences when they were judged as simultaneous rather than non-simultaneous. In contrast, a broad range of oscillations, mainly in the beta and gamma bands over a large part of the scalp, were significantly weaker before vision-then-audition sequences when they were judged as simultaneous versus non-simultaneous. Moreover, for auditory-leading sequences, pre-stimulus beta and gamma oscillatory power successfully predicted subjects' reports of simultaneity on a trial-by-trial basis, with stronger activity resulting in more simultaneous judgments. These results indicate that ongoing fluctuations of beta and gamma oscillations can modulate subsequently perceived audiovisual simultaneity, but with an opposing pattern for auditory- and visual-leading sequences.
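    Extracting pre-stimulus band power with an FFT, as described here, amounts to a few lines. The band edges follow those reported in the abstract (high beta 20-28 Hz, high gamma 55-80 Hz), while the sampling rate and epoch length below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def band_power(epoch, fs, band):
        """Mean spectral power of a 1-D pre-stimulus epoch in a band (Hz)."""
        epoch = epoch - epoch.mean()                 # remove DC offset
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
        mask = (freqs >= band[0]) & (freqs < band[1])
        return psd[mask].mean()

    # Illustrative check: a 24 Hz oscillation puts its power into the
    # high-beta band (20-28 Hz), not the high-gamma band (55-80 Hz).
    fs = 250.0                                       # assumed sampling rate
    t = np.arange(0, 1.0, 1.0 / fs)                  # 1-s "pre-stimulus" epoch
    epoch = np.sin(2 * np.pi * 24.0 * t)
    beta = band_power(epoch, fs, (20.0, 28.0))
    gamma = band_power(epoch, fs, (55.0, 80.0))
    ```

    In a trial-by-trial analysis like the one described, such band-power values would be computed per epoch and entered as predictors of the simultaneity judgment.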

  10. Audiovisual Stimulation Modulates Physical Performance and Biochemical and Hormonal Status of Athletes.

    Science.gov (United States)

    Golovin, M S; Aizman, R I

    2016-09-01

    We studied the effect of an audiovisual stimulation training course on physical development, functional state of the cardiovascular system, blood biochemical parameters, and hormonal status of athletes. The training course led to improvement of physical performance and the adaptive capacities of the circulatory system; increases in plasma levels of total protein, albumin, and glucose and in total antioxidant activity; and decreases in triglycerides, lipase, total bilirubin, calcium, and phosphorus. The concentrations of hormones (cortisol, thyrotropin, triiodothyronine, and thyroxine) also decreased under these conditions. In the control group, an increase in the concentrations of creatinine and uric acid and a tendency toward elevation of low-density lipoproteins and total antioxidant activity were observed in the absence of changes in cardiac function and physical performance; calcium and phosphorus concentrations decreased. The improvement in the functional state of athletes was mainly associated with intensification of anabolic processes and suppression of catabolic reactions after audiovisual stimulation (in comparison with the control). Stimulation was followed by an increase in the number of correlations between biochemical and hormonal changes and the physical performance of athletes, which attested to better integration of processes at the intersystem level.

  11. Sensorimotor cortical response during motion reflecting audiovisual stimulation: evidence from fractal EEG analysis.

    Science.gov (United States)

    Hadjidimitriou, S; Zacharakis, A; Doulgeris, P; Panoulas, K; Hadjileontiadis, L; Panas, S

    2010-06-01

    Sensorimotor activity in response to motion reflecting audiovisual stimulation is studied in this article. EEG recordings, and especially the Mu-rhythm over the sensorimotor cortex (C3, CZ, and C4 electrodes), were acquired and explored. An experiment was designed to provide auditory (Modest Mussorgsky's "Promenade" theme) and visual (a synchronized walking human figure) stimuli to advanced music students (AMS) and to non-musicians (NM) as a control group. EEG signals were analyzed using fractal dimension (FD) estimation (Higuchi's, Katz's and Petrosian's algorithms) and statistical methods. Experimental results from the midline electrode (CZ) based on the Higuchi method showed significant differences between the AMS and NM groups, with the former displaying substantial sensorimotor response during auditory stimulation and stronger correlation with the acoustic stimulus than the latter. This observation was linked to mirror neuron system activity, a neurological mechanism that allows trained musicians to detect action-related meanings underlying the structural patterns in musical excerpts. Contrarily, the responses of AMS and NM converged during audiovisual stimulation due to the dominant presence of human-like motion in the visual stimulus. These findings shed light on aspects of music perception, exhibiting the potential of FD to respond to different states of cortical activity.
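    Of the three estimators named, Higuchi's is the most widely used for EEG. A minimal NumPy sketch follows; the kmax setting is a conventional default, not a value from the study.

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=8):
        """Higuchi (1988) fractal-dimension estimate of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        log_lk, log_inv_k = [], []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):
                idx = np.arange(m, n, k)      # subsampled series x[m], x[m+k], ...
                if len(idx) < 2:
                    continue
                dist = np.abs(np.diff(x[idx])).sum()
                # normalization factor so L(k) is comparable across k
                lengths.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
            log_lk.append(np.log(np.mean(lengths)))
            log_inv_k.append(np.log(1.0 / k))
        # FD is the slope of log L(k) against log (1/k)
        slope, _ = np.polyfit(log_inv_k, log_lk, 1)
        return slope
    ```

    For a smooth curve the estimate approaches 1 and for white noise it approaches 2; EEG typically falls in between, which is what makes FD a usable index of cortical state.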

  12. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot.

    Science.gov (United States)

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2014-01-01

    Advancement in brain-computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our own body represents a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and strengthen the feeling of control over the robot. Our results shed light on the possibility of increasing control over a robot through the combination of multisensory feedback to a BCI user.

  13. Modulation of visual responses in the superior temporal sulcus by audio-visual congruency.

    Science.gov (United States)

    Dahl, Christoph D; Logothetis, Nikos K; Kayser, Christoph

    2010-01-01

    Our ability to identify or recognize visual objects is often enhanced by evidence provided by other sensory modalities. Yet, where and how visual object processing benefits from the information received by the other senses remains unclear. One candidate region is the temporal lobe, which features neural representations of visual objects, and in which previous studies have provided evidence for multisensory influences on neural responses. In the present study we directly tested whether visual representations in the lower bank of the superior temporal sulcus (STS) benefit from acoustic information. To this end, we recorded neural responses in alert monkeys passively watching audio-visual scenes, and quantified the impact of simultaneously presented sounds on responses elicited by the presentation of naturalistic visual scenes. Using methods of stimulus decoding and information theory, we then asked whether the responses of STS neurons become more reliable and informative in multisensory contexts. Our results demonstrate that STS neurons are indeed sensitive to the modality composition of the sensory stimulus. Importantly, information provided by STS neurons' responses about the particular visual stimulus being presented was highest during congruent audio-visual and unimodal visual stimulation, but was reduced during incongruent bimodal stimulation. Together, these findings demonstrate that higher visual representations in the STS not only convey information about the visual input but also depend on the acoustic context of a visual scene.

  15. The third language: A recurrent textual restriction that translators come across in audiovisual translation.

    Directory of Open Access Journals (Sweden)

    Montse Corrius Gimbert

    2005-01-01

    Full Text Available If the process of translating is not at all simple, the process of translating an audiovisual text is still more complex. Apart from technical problems such as lip synchronisation, there are other factors to be considered, such as the use of language and the textual structures deemed appropriate to the channel of communication. Bearing in mind that most of the films we continually see on our screens were and are produced in the United States, there is an increasing need to translate them into the different languages of the world. But sometimes the source audiovisual text contains more than one language, and thus a new problem arises: translators face additional difficulties in translating this "third language" (a language or dialect) into the corresponding target culture. Many films contain two languages in the original version, but in this paper we focus mainly on three: Butch Cassidy and the Sundance Kid (1969), Raid on Rommel (1971) and Blade Runner (1982). This paper aims at briefly illustrating different solutions which may be applied when we come across a "third language".

  16. Effects of audio-visual presentation of target words in word translation training

    Science.gov (United States)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for those words produced by two talkers was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  17. A comprehensive model of audiovisual perception: both percept and temporal dynamics.

    Directory of Open Access Journals (Sweden)

    Patricia Besson

    Full Text Available The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be solved by jointly processing the information, as well as by introducing constraints on the way this multisensory information is handled. This process and its result--the percept--depend on the contextual conditions in which perception takes place. To date, perception has been investigated and modeled on the basis of either one of its two dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both of these dimensions and so capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and the dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus-driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.
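    The fusion/non-fusion distinction in this abstract rests on a standard probabilistic building block: when two cues are attributed to a common source, the maximum-likelihood fused estimate weights each cue by its reliability (inverse variance). The sketch below shows that textbook building block, not the authors' data-driven Bayesian network.

    ```python
    def fuse_cues(x_audio, var_audio, x_visual, var_visual):
        """Reliability-weighted (maximum-likelihood) fusion of two noisy
        spatial cues assumed to come from a common source."""
        w_a = 1.0 / var_audio
        w_v = 1.0 / var_visual
        x = (w_a * x_audio + w_v * x_visual) / (w_a + w_v)
        var = 1.0 / (w_a + w_v)   # fused estimate is more reliable than either cue
        return x, var

    # Illustrative numbers: a sharp auditory cue at 0 deg (variance 1) and a
    # blurry visual cue at 10 deg (variance 4) fuse to 2 deg, nearer the
    # reliable cue, with fused variance 0.8.
    x, var = fuse_cues(0.0, 1.0, 10.0, 4.0)
    ```

    When the common-source hypothesis is rejected, a model of this family instead keeps the unisensory estimates, which corresponds to the "non-fusion" percept discussed above.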

  18. Effects of audio-visual presentation of target words in word translation training

    Science.gov (United States)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2001-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in orthographic form in one language and the appropriate meaning in another language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentation. Identification accuracy for those words produced by two talkers was also assessed. During the pretest, accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  19. When Library and Archival Science Methods Converge and Diverge: KAUST’s Multi-Disciplinary Approach to the Management of its Audiovisual Heritage

    KAUST Repository

    Kenosi, Lekoko

    2015-07-16

    Libraries and archives have long recognized the important role played by audiovisual records in the development of an informed global citizen, and the King Abdullah University of Science and Technology (KAUST) is no exception. Lying on the banks of the Red Sea, KAUST has a state-of-the-art library housing professional library and archives teams committed to the processing of digital audiovisual records created within and outside the University. This commitment, however, sometimes obscures the fundamental divergences between the two disciplines on the acquisition, cataloguing, access, and long-term preservation of audiovisual records. This dichotomy is not unique to KAUST but replicates itself in many settings that have employed librarians and archivists to manage their audiovisual collections. Using the KAUST audiovisual collections as a case study, the authors of this paper take the reader through the journey of managing KAUST’s digital audiovisual collection. Several theoretical and methodological areas of convergence and divergence are highlighted, as well as suggestions on the way forward for the IFLA and ICA working committees on the management of audiovisual records.

  20. HIV/AIDS and Alcohol

    Science.gov (United States)

    Human immunodeficiency virus (HIV) targets the body’s immune ... and often leads to acquired immune deficiency syndrome (AIDS). Each year in the United States, between 55, ...

  1. HIV, AIDS, and the Future

    Science.gov (United States)

    Past Issues / Summer 2009 ... Photo: The NAMES Project Foundation. HIV and AIDS are a global catastrophe. While advances ...

  2. Political dimensions of AIDS.

    Science.gov (United States)

    Blewett, N

    1988-01-01

    World political aspects and the example of Australia as a national political response to AIDS are presented. Global policy on AIDS is influenced by the fact that the AIDS epidemic is the first to be largely predictable, that long lag times occur between intervention and measurable events, and by the prompt, professional leadership of WHO, led by Dr. J. Mann. WHO began a Global Programme on AIDS in 1987, modelled on the responses of Canada and Australia. A world summit of Ministers of Health was convened in January 1988. These moves generated a response marked by openness, cooperation, hope, and common sense. The AIDS epidemic calls for unprecedented involvement of politicians: they must coordinate medical knowledge with community action, deal with public fear, exert strong, rational leadership, and avoid quick, appealing but counterproductive responses. Three clear directions must be taken to deal with the epidemic: 1) strong research and education campaigns; 2) close contact with political colleagues, interest groups, and the community; 3) a national strategy which enjoins diverse interest groups, with courage, rationality, and compassion. In Australia, the AIDS response began with the unwitting infection of three infants by blood transfusion. A public information campaign emphasizing a penetrating TV ad campaign was instituted in 1987. Policy discussions were held in all parliamentary bodies. The AIDS epidemic demands rapid, creative responses, a break from traditions in health bureaucracy, and continual scrutiny of funding procedures and administrative arrangements. In practical terms in Australia, this meant establishing a special AIDS branch within the Health Advancement Division of the Community Health Department. AIDS issues must remain depoliticized to defuse adversary politics and keep leaders in a united front.

  3. Lenguaje audiovisual y lenguaje escolar: dos cosmovisiones en la estructuración lingüística del niño Audiovisual language and school language: two cosmo-visions in the structuring of children linguistics

    Directory of Open Access Journals (Sweden)

    Lirian Astrid Ciro

    2007-06-01

    Full Text Available En el presente texto se pretende analizar la compleja red relacional existente entre el lenguaje audiovisual (partiendo de la televisión como uno de sus soportes) y el lenguaje escolar, para vislumbrar sus efectos en el lenguaje infantil. La idea es mostrar el lenguaje audiovisual como un mecanismo potencialmente educativo, por cuanto es una forma de resignificar el mundo y de socialización lingüística; tal característica hace necesario entablar una relación estratégica entre él y el lenguaje escolar. De este modo, el lenguaje infantil se instaura como un punto intermedio en donde confluyen esos distintos lenguajes, y permite al niño tener cosmovisiones abiertas y flexibles de diversas realidades. Todo esto llevará a la configuración de seres creativos, novedosos y atentos a escuchar opciones... a la estructuración de una nueva sociedad, en donde la multiplicidad de códigos (entendidos como sistemas de simbolización) vayan haciendo más fácil la expresión de lo que se es y se quiere ser. This paper analyzes the complex relationship between audiovisual language (TV being one of its main supports) and school language in order to observe their effects on child language. Audiovisual language is presented as a potentially educational mechanism, because it is both a new way of resignifying the world and a mechanism of linguistic socialization. Hence, it is necessary to establish a strategic relationship between audiovisual language and school language. Child language thus becomes an intermediate point where these different languages converge, allowing the child to have open and flexible views of different realities and to be willing to weigh options. In short, it is the structuring of a new society where a multiplicity of codes will contribute to facilitating free expression.

  4. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United...? 1256.100 Section 1256.100 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... United States once NARA has: (1) Ensured, as described in paragraph (c) of this section, that you...

  5. Exploring determinants of early user acceptance for an audio-visual heritage archive service using the vignette method

    NARCIS (Netherlands)

    Ongena, Guido; Wijngaert, van de Lidwien; Huizer, E.

    2013-01-01

    The purpose of this study is to investigate factors which explain the behavioural intention to use a new audio-visual cultural heritage archive service. An online survey in combination with a factorial survey is utilised to investigate the predictive strength of technological, individual an

  6. Undifferentiated Facial Electromyography Responses to Dynamic, Audio-Visual Emotion Displays in Individuals with Autism Spectrum Disorders

    Science.gov (United States)

    Rozga, Agata; King, Tricia Z.; Vuduc, Richard W.; Robins, Diana L.

    2013-01-01

    We examined facial electromyography (fEMG) activity to dynamic, audio-visual emotional displays in individuals with autism spectrum disorders (ASD) and typically developing (TD) individuals. Participants viewed clips of happy, angry, and fearful displays that contained both facial expression and affective prosody while surface electrodes measured…

  7. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech, and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  8. Audiovisual infotainment in European news: A comparative content analysis of Dutch, Spanish, and Irish television news programs

    NARCIS (Netherlands)

    A. Paz Alencar (Amanda); S. Kruikemeier (Sanne)

    2016-01-01

    This study investigates to what extent audiovisual infotainment features can be found in the narrative structure of television news in three European countries. Content analysis included a sample of 639 news reports aired in the first 3 weeks of September 2013, in six prime-time TV n

  9. Audiovisual infotainment in European news: a comparative content analysis of Dutch, Spanish and Irish television news programs

    NARCIS (Netherlands)

    Alencar, A.; Kruikemeier, S.

    2015-01-01

    This study investigates to what extent audiovisual infotainment features can be found in the narrative structure of television news in three European countries. Content analysis included a sample of 639 news reports (or reporter packages) aired in the first three weeks of September 2013, in six

  10. Audiovisual distraction as a useful adjunct to epidural anesthesia and sedation for prolonged lower limb microvascular orthoplastic surgery.

    Science.gov (United States)

    Athanassoglou, Vassilis; Wallis, Anna; Galitzine, Svetlana

    2015-11-01

    Lower limb orthopedic operations are frequently performed under regional anesthesia, which allows avoidance of the potential side effects and complications of general anesthesia and sedation. Often, though, patients feel anxious about being awake during operations. To decrease intraoperative anxiety, we use multimedia equipment consisting of a tablet device, noise-canceling headphones, and a makeshift frame, with which patients can listen to music, watch movies, or occupy themselves in numerous ways. These techniques have been extensively studied in minimally invasive, short, or minor procedures but not in prolonged orthoplastic operations. We report 2 cases in which audiovisual distraction was successfully applied to 9.5-hour procedures, proved to be a very useful adjunct to epidural anesthesia plus sedation, and made an important contribution to positive patient outcomes and the patients' overall experience with regional anesthesia for complex limb reconstructive surgery. In an era when not only patients' safety and clinical outcomes but also patients' positive experiences are of paramount importance, audiovisual distraction may provide a simple tool to help improve the experience of appropriately informed patients undergoing suitable procedures under regional anesthesia. The anesthetic technique received a very positive appraisal from both patients and encouraged us to study further the impact of modern audiovisual technology on anxiolysis for major surgery under regional anesthesia. The duration of surgery per se is not a contraindication to the use of audiovisual distraction. The absolute proviso for successful application of this technique to major surgery is effective regional anesthesia and good teamwork between the clinicians and the patients.

  11. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... creation (see also 36 CFR part 1235). See § 1235.42 of this subchapter for specifications and standards for... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT...

  12. La conformación del canon literario costarricense: observaciones a partir de la producción audiovisual

    Directory of Open Access Journals (Sweden)

    Bernardo Bolaños Esquivel

    2013-08-01

    Full Text Available El estudio analiza las relaciones entre obras insertas en el canon literario costarricense y los factores que las han llevado al formato audiovisual. Hecha una descripción del recorrido histórico de tales relaciones, se señalan factores determinantes que vinculan esas obras y la producción audiovisual. Entre esos factores se señala la cercanía con una imagen idílica de nación, la afinidad de los escritores con el poder político, y en general, criterios de conveniencia comercial. This study analyzes the relations between works belonging to the Costa Rican literary canon and the factors which have taken them to an audiovisual format. Once the history of those relations is described, mention is made of determining factors that link those works and audiovisual production. These factors include closeness with the idyllic image of nation, the authors’ affinity with political power, and criteria of commercial suitability, in general.

  13. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

    Full Text Available This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard- and high-definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high-definition video, the model predictions show a correlation of 95% with unknown subjective ratings. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both the quality-based and the impairment-factor-based models are further refined by taking the content type into account. Finally, the different model variants are compared with modeling approaches described in the literature.
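    The additive impairment-factor structure described above can be illustrated with a minimal sketch. The function, coefficient values, and clipping bounds are hypothetical stand-ins, not the paper's fitted model:

```python
def audiovisual_mos(i_audio_cod, i_video_cod, i_video_trans):
    """Overall audiovisual quality as a base quality minus additive
    impairment factors (audio coding, video coding, and video
    transmission), clipped to the 1-5 MOS scale. In a real model the
    impairment factors would be computed from bitstream or
    packet-header parameters (bitrate, loss rate, codec, etc.)."""
    q0 = 5.0  # base (undistorted) quality
    q = q0 - i_audio_cod - i_video_cod - i_video_trans
    return max(1.0, min(5.0, q))

# Mild audio coding impairment plus stronger video impairments:
mos = audiovisual_mos(0.3, 0.8, 1.2)  # approximately 2.7
```

    The additive form is what makes such models convenient for network planning: each degradation contributes an independent, separately estimated quality penalty.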

  14. Theological Media Literacy Education and Hermeneutic Analysis of Soviet Audiovisual Anti-Religious Media Texts in Students' Classroom

    Science.gov (United States)

    Fedorov, Alexander

    2015-01-01

    This article presents a Russian approach to theological media literacy education and the hermeneutic analysis of specific examples of Soviet anti-religious audiovisual media texts: a study of the process of interpreting these media texts and of the cultural and historical factors influencing the views of the media agency/authors. The hermeneutic analysis…

  15. The whole is more than the sum of its parts - Audiovisual processing of phonemes investigated with ERPs

    NARCIS (Netherlands)

    Hessler, Dorte; Jonkers, Roel; Stowe, Laurie; Bastiaanse, Roelien

    2013-01-01

    In the current ERP study, an active oddball task was carried out, testing pure tones and auditory, visual and audiovisual syllables. For pure tones, an MMN, an N2b, and a P3 were found, confirming traditional findings. Auditory syllables evoked an N2 and a P3. We found that the amplitude of the P3 d

  16. Audiovisual en línea en la universidad española: bibliotecas y servicios especializados (una panorámica

    Directory of Open Access Journals (Sweden)

    Alfonso López Yepes

    2014-08-01

    Full Text Available Situación que presenta la información audiovisual en línea en el ámbito de las bibliotecas y servicios audiovisuales universitarios españoles, con ejemplos de aplicaciones y desarrollos concretos. Se destaca la presencia del audiovisual fundamentalmente en blogs, canales IPTV, portales bibliotecarios propios y en actuaciones concretas como “La Universidad Responde”, a cargo de los servicios audiovisuales de las universidades españolas, que supone sin duda un marco de referencia y de difusión informativa muy destacado también para el ámbito bibliotecario; así como en redes sociales, mencionándose una propuesta de modelo de red social de biblioteca universitaria. Se remite a la participación de bibliotecas y servicios en proyectos colaborativos de investigación y desarrollo social, presencia ya efectiva en el marco del proyecto “Red iberoamericana de patrimonio sonoro y audiovisual”, que apuesta por la construcción social del conocimiento audiovisual basado en la interacción entre distintos grupos multidisciplinarios de profesionales con diferentes comunidades de usuarios e instituciones. An overview of the situation of online audiovisual information in Spanish university libraries and audiovisual services, with examples of specific applications and developments. The presence of audiovisual material stands out chiefly in blogs, IPTV channels, libraries' own portals, and specific initiatives such as “La Universidad Responde”, run by the audiovisual services of the Spanish universities, which constitutes a notable framework of reference and information dissemination for the library field as well; and in social networks, with a proposed model for a university library social network. Reference is made to the participation of libraries and services in collaborative research and social development projects, already under way within the “Red iberoamericana de patrimonio sonoro y audiovisual” project, which is committed to the social construction of audiovisual knowledge based on interaction between different multidisciplinary groups of professionals and different communities of users and institutions.

  17. Aid and Growth

    DEFF Research Database (Denmark)

    Tarp, Finn; Mekasha, Tseday Jemaneh

    2013-01-01

    Recent literature in the meta-analysis category, where results from a range of studies are brought together, throws doubt on the ability of foreign aid to foster economic growth and development. This article assesses what meta-analysis has to contribute to the literature on the effectiveness of foreign aid in terms of growth impact. We re-examine key hypotheses, and find that the effect of aid on growth is positive and statistically significant. This significant effect is genuine, and not an artefact of publication selection. We also show why our results differ from those published elsewhere.
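    A standard way such meta-analyses separate a genuine effect from publication selection is the FAT-PET meta-regression (funnel-asymmetry test / precision-effect test), which regresses each study's t-statistic on its precision. The sketch below runs it on synthetic data; it is illustrative only, with made-up numbers, and is not the cited paper's actual estimation:

```python
import numpy as np

# FAT-PET: t_i = b0 + b1 * (1 / SE_i) + e_i.
# The slope b1 estimates the underlying effect corrected for selection;
# the intercept b0 captures funnel asymmetry (publication selection).
rng = np.random.default_rng(0)
n = 40
se = rng.uniform(0.05, 0.5, n)            # per-study standard errors
true_effect = 0.1                          # assumed genuine effect
effect = true_effect + rng.normal(0, se)   # observed effect sizes
t_stat = effect / se

# OLS of t-statistics on a constant and precision (1/SE)
X = np.column_stack([np.ones(n), 1.0 / se])
b0, b1 = np.linalg.lstsq(X, t_stat, rcond=None)[0]
print(f"FAT intercept (selection): {b0:.3f}, PET slope (effect): {b1:.3f}")
```

    With no selection in the simulated data, the intercept stays near zero and the slope recovers the assumed genuine effect.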

  18. Aid Supplies Over Time

    DEFF Research Database (Denmark)

    Jones, Edward Samuel

    2015-01-01

    What determines how much foreign aid donors provide? Existing answers to this question point to a complex range of influences. However, the tasks of distinguishing between long- and short-run factors, as well as differences between donors, have not been adequately addressed. Taking advantage of data spanning nearly 50 years, this paper uses panel cointegration techniques to consider these issues. The analysis provides clear evidence for heterogeneity both between donors and over time, bandwagon effects, and a growing influence of security considerations in aid provision. Domestic macroeconomic shocks have a moderate but delayed effect on aid disbursements.

  19. Aid and Growth

    DEFF Research Database (Denmark)

    Mekasha, Tseday Jemaneh; Tarp, Finn

    Some recent literature in the meta-analysis category, where results from a range of studies are brought together, throws doubt on the ability of foreign aid to foster economic growth and development. This paper assesses what meta-analysis has to say about the effectiveness of foreign aid in terms of the growth impact. We re-examine key hypotheses, and find that the effect of aid on growth is positive and statistically significant. This significant effect is genuine, and not an artefact of publication selection. We also show why our results differ from those published elsewhere.

  20. Pulmonary complications of AIDS: radiologic features. [AIDS]

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, B.A.; Pomeranz, S.; Rabinowitz, J.G.; Rosen, M.J.; Train, J.S.; Norton, K.I.; Mendelson, D.S.

    1984-07-01

    Fifty-two patients with pulmonary complications of acquired immunodeficiency syndrome (AIDS) were studied over a 3-year period. The vast majority of the patients were homosexual; however, a significant number were intravenous drug abusers. Thirteen different organisms were noted, of which Pneumocystis carinii was by far the most common. Five patients had neoplasia. Most patients had initial abnormal chest films; however, eight patients subsequently shown to have Pneumocystis carinii pneumonia had normal chest films. A significant overlap in chest radiographic findings was noted among patients with different or multiple organisms. Lung biopsy should be an early consideration for all patients with a clinical history consistent with the pulmonary complications of AIDS. Of the 52 patients, 41 had died by the time this report was completed.