WorldWideScience

Sample records for publications audiovisual aids

  1. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  2. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
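
    A brief aside for readers unfamiliar with the signal detection terms used above: in the standard equal-variance Gaussian model, perceptual sensitivity and response bias are computed from the hit rate H and the false-alarm rate F as shown below, where z denotes the inverse of the standard normal cumulative distribution function. These are the textbook definitions, given as context only; the study's own analysis may differ in its details.

        \[ d' = z(H) - z(F) \]                                   % perceptual sensitivity
        \[ c  = -\tfrac{1}{2}\,\bigl[\, z(H) + z(F) \,\bigr] \]  % response bias (criterion)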

  3. Audiovisual Aids for Astronomy and Space Physics at an Urban College

    Science.gov (United States)

    Moche, Dinah L.

    1973-01-01

    Discusses the use of easily available audiovisual aids to teach a one semester course in astronomy and space physics to liberal arts students of both sexes at Queensborough Community College. Included is a list of teaching aids for use in astronomy instruction. (CC)

  4. Audio-Visual Aids for Cooperative Education and Training.

    Science.gov (United States)

    Botham, C. N.

    Within the context of cooperative education, audiovisual aids may be used for spreading the idea of cooperatives and helping to consolidate study groups; for the continuous process of education, both formal and informal, within the cooperative movement; for constant follow up purposes; and for promoting loyalty to the movement. Detailed…

  5. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  6. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  7. The efficacy of an audiovisual aid in teaching the Neo-Classical ...

    African Journals Online (AJOL)

    This study interrogated the central theoretical statement that understanding and learning to apply the abstract concept of classical dramatic narrative structure can be addressed effectively through a useful audiovisual teaching method. The purpose of the study was to design an effective DVD teaching and learning aid, ...

  8. Audiovisual aid viewing immediately before pediatric induction moderates the accompanying parents' anxiety

    NARCIS (Netherlands)

    Berghmans, Johan; Weber, Frank; van Akoleyen, Candyce; Utens, Elisabeth; Adriaenssens, Peter; Klein, Jan; Himpe, Dirk

    2012-01-01

    Parents accompanying their child during induction of anesthesia experience stress. The impact of audiovisual aid (AVA) on parental state anxiety and assessment of the child's anxiety at induction have been studied previously but need closer scrutiny. One hundred and twenty parents whose children

  9. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Science.gov (United States)

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  10. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    Science.gov (United States)

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor an interaction between group and session was observed. Conclusion: Audiovisual training may be considered in the aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  11. Seminario latinoamericano de didactica de los medios audiovisuales (Latin American Seminar on Teaching with Audiovisual Aids).

    Science.gov (United States)

    Eduplan Informa, 1971

    1971-01-01

    This seminar on the use of audiovisual aids reached several conclusions on the need for and the use of such aids in Latin America. The need for educational innovation in the face of a new society, a new type of communication, and a new vision of man is stressed. A new definition of teaching and learning as a fundamental process of communication is…

  12. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experiments: auditory-only and audiovisual presentation conditions. The test was a closed set including 30 words which were orally presented by a speech-language pathologist. The scores of audiovisual word perception were significantly higher than in the auditory-only condition in the children with normal hearing (P < 0.05), whereas in the children with hearing loss the scores did not differ significantly between the auditory-only and audiovisual presentation conditions (P > 0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe to profound hearing loss in order to find whether a cochlear implant or hearing aid has been efficient for them or not; i.e. if a child with hearing impairment using a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately due to an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Comparative evaluation of the effectiveness of audio and audiovisual distraction aids in the management of anxious pediatric dental patients

    Directory of Open Access Journals (Sweden)

    Rajwinder Kaur

    2015-01-01

    Objective: The aim of this study was to evaluate and compare audio and audiovisual distraction aids in the management of anxious pediatric dental patients of different age groups and to study children's response to sequential dental visits with the use of distraction aids. Study Design: This study was conducted on two age groups, that is, 4-6 years and 6-8 years, with 30 patients in each age group on their first dental visit. The children of both age groups were divided into 3 subgroups, the control group, the audio distraction group, and the audiovisual distraction group, with 10 patients in each subgroup. Each child in all the subgroups went through three dental visits. Child anxiety level at each visit was assessed using a combination of anxiety-measuring parameters. The data collected were tabulated and subjected to statistical analysis. Results: A Tukey honest significant difference post-hoc test at the 0.05 level of significance revealed that the audiovisual group differed highly significantly from the audio and control groups, whereas the audio group differed significantly from the control group. Conclusion: Audiovisual distraction was found to be a more effective mode of distraction than audio distraction in the management of anxious children in both age groups. In both age groups, a significant effect of the visit type was also observed.

  14. Comparative evaluation of the effectiveness of audio and audiovisual distraction aids in the management of anxious pediatric dental patients.

    Science.gov (United States)

    Kaur, Rajwinder; Jindal, Ritu; Dua, Rohini; Mahajan, Sandeep; Sethi, Kunal; Garg, Sunny

    2015-01-01

    The aim of this study was to evaluate and compare audio and audiovisual distraction aids in the management of anxious pediatric dental patients of different age groups and to study children's response to sequential dental visits with the use of distraction aids. This study was conducted on two age groups, that is, 4-6 years and 6-8 years, with 30 patients in each age group on their first dental visit. The children of both age groups were divided into 3 subgroups, the control group, the audio distraction group, and the audiovisual distraction group, with 10 patients in each subgroup. Each child in all the subgroups went through three dental visits. Child anxiety level at each visit was assessed using a combination of anxiety-measuring parameters. The data collected were tabulated and subjected to statistical analysis. A Tukey honest significant difference post-hoc test at the 0.05 level of significance revealed that the audiovisual group differed highly significantly from the audio and control groups, whereas the audio group differed significantly from the control group. Audiovisual distraction was found to be a more effective mode of distraction than audio distraction in the management of anxious children in both age groups. In both age groups, a significant effect of the visit type was also observed.
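
    For readers unfamiliar with the post-hoc comparison named above, the sketch below shows how a Tukey honest significant difference test over three groups is typically run with statsmodels in Python. The anxiety scores used here are invented placeholder numbers for illustration only, not data from the study.

        # Illustrative Tukey HSD comparison of three distraction conditions.
        # The scores are made-up placeholders, not the study's measurements.
        import numpy as np
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        scores = np.array(
            [8, 9, 7, 8, 9, 8, 7, 9, 8, 8] +      # control
            [6, 7, 6, 5, 7, 6, 6, 5, 7, 6] +      # audio distraction
            [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]        # audiovisual distraction
        )
        groups = ["control"] * 10 + ["audio"] * 10 + ["audiovisual"] * 10

        # All pairwise comparisons with a family-wise significance level of 0.05.
        result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
        print(result.summary())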

  15. The effectiveness of mnemonic audio-visual aids in teaching content words to EFL students at a Turkish university

    OpenAIRE

    Kılınç, A Reha

    1996-01-01

    Ankara : Institute of Economics and Social Sciences, Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaves 63-67). This experimental study aimed at investigating the effects of mnemonic audio-visual aids on recognition and recall of vocabulary items in comparison to a dictionary-using control group. The study was conducted at Middle East Technical University Department of Basic English. The participants were 64 beginner and u...

  16. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  17. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker

    2017-09-18

    We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded from their associative visual cues, as indicated by the degree of audiovisual benefit and reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.

  18. Acceptance of online audio-visual cultural heritage archive services: a study of the general public

    NARCIS (Netherlands)

    Ongena, G.; van de Wijngaert, Lidwien; Huizer, E.

    2013-01-01

    Introduction. This study examines the antecedents of user acceptance of an audio-visual heritage archive for a wider audience (i.e., the general public) by extending the technology acceptance model with the concepts of perceived enjoyment, nostalgia proneness and personal innovativeness. Method. A

  19. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... audiovisual records? 1237.16 Section 1237.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.16 How do agencies store audiovisual records? Agencies must maintain appropriate storage conditions for permanent...

  20. Educational aids

    International Nuclear Information System (INIS)

    Lenkeit, S.

    1989-01-01

    Educational aids include printed matter, aural media, visual media, audiovisual media and objects. A distinction is made between teaching aids, which include blackboards, overhead projectors, flipcharts, wallcharts and pinboards, and learning aids, which include textbooks, worksheets, documentation and experimental equipment. The various aids are described and their use explained. The aids available at the School for Nuclear Technology of the Karlsruhe Nuclear Research Centre are described.

  1. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media have progressively gained greater prominence. We analyze specific cases of films and historical documentaries and we assess the difficulties faced by historians in understanding the keys of audiovisual language and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  2. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before bistable decision? We argue that this is not the case, as the interference had slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability, especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  3. Student's preference of various audiovisual aids used in teaching pre- and para-clinical areas of medicine

    Directory of Open Access Journals (Sweden)

    Navatha Vangala

    2015-01-01

    Introduction: The formal lecture is among the oldest teaching methods that have been widely used in medical education. Delivering a lecture is made easier and better by the use of audiovisual aids (AV aids) such as a blackboard or whiteboard, an overhead projector, and PowerPoint presentations (PPT). Objective: To know the students' preference of various AV aids and their use in medical education, with an aim to improve their use in didactic lectures. Materials and Methods: The study was carried out among 230 undergraduate medical students of first and second M.B.B.S studying at Malla Reddy Medical College for Women, Hyderabad, Telangana, India during the month of November 2014. Students were asked to answer a questionnaire on the use of AV aids for various aspects of learning. Results: This study indicates that students preferred PPT the most for a didactic lecture and for better perception of diagrams and flowcharts. Ninety-five percent of the students (first and second M.B.B.S) were stimulated for further reading if they attended a lecture augmented by the use of visual aids. A teacher with good teaching skills and AV aids (58%) was preferred more than a teacher with only good teaching skills (42%). Conclusion: Our study demonstrates that a lecture delivered using PPT was more appreciated and preferred by the students. Furthermore, teachers with a proper lesson plan and good interactive and communicating skills are needed for an effective presentation of a lecture.

  4. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  5. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must be...

  6. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  7. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  8. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create audiovisual...

  9. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL... audiovisual, cartographic, and related records? The disposition instructions should also provide that...

  10. Copyright for audiovisual work and analysis of websites offering audiovisual works

    OpenAIRE

    Chrastecká, Nicolle

    2014-01-01

    This Bachelor's thesis deals with the matter of audiovisual piracy. It discusses the question of audiovisual piracy being caused not by the wrong interpretation of law but by the lack of competitiveness among websites with legal audiovisual content. This thesis questions the quality of legal interpretation in the matter of audiovisual piracy and focuses on its sufficiency. It analyses the responsibility of website providers, providers of the illegal content, the responsibility of illegal cont...

  11. Training Aids for Online Instruction: An Analysis.

    Science.gov (United States)

    Guy, Robin Frederick

    This paper describes a number of different types of training aids currently employed in online training: non-interactive audiovisual presentations; interactive computer-based aids; partially interactive aids based on recorded searches; print-based materials; and kits. The advantages and disadvantages of each type of aid are noted, and a table…

  12. Your Most Essential Audiovisual Aid--Yourself!

    Science.gov (United States)

    Hamp-Lyons, Elizabeth

    2012-01-01

    Acknowledging that an interested and enthusiastic teacher can create excitement for students and promote learning, the author discusses how teachers can improve their appearance, and, consequently, how their students perceive them. She offers concrete suggestions on how a teacher can be both a "visual aid" and an "audio aid" in the classroom.…

  13. CDC WONDER: AIDS Public Use Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — The AIDS Public Information Data Set (APIDS) for years 1981-2002 on CDC WONDER online database contains counts of AIDS (Acquired Immune Deficiency Syndrome) cases...

  14. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis

    2017-02-01

    It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech and language skills in children using hearing aids and/or cochlear implants compared to their peers with normal hearing. So, the aim of the study was to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss in visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample size was 92 Persian 5-7 year old children, including 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. The scores of the sentence repetition task differed significantly between the V-only, A-only, and AV presentations in the three groups; in other words, the highest to lowest scores belonged respectively to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.05). Visual-only scores were not significantly correlated with audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were found to be strongly correlated with auditory-only scores in all the 5-to-7-year-old children (r = 0.943, n = 92, P = 0.000). According to the study's findings, audiovisual integration occurs in the 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, similar to their peers with normal hearing.

  15. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)
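
    As an aside for readers curious how such a depreciation schedule is typically computed in a spreadsheet, the short sketch below implements a plain straight-line calculation in Python. The purchase cost, salvage value, and service life are invented placeholder figures, not values from the survey.

        # Straight-line depreciation schedule for a piece of audiovisual equipment.
        # Cost, salvage value, and service life are illustrative placeholders only.
        def straight_line_schedule(cost, salvage, years):
            annual = (cost - salvage) / years            # equal expense every year
            book_value = cost
            schedule = []
            for year in range(1, years + 1):
                book_value -= annual
                schedule.append((year, round(annual, 2), round(book_value, 2)))
            return schedule

        # Example: a 2400-dollar projector with a 200-dollar salvage value over 8 years.
        for year, expense, value in straight_line_schedule(2400, 200, 8):
            print(f"Year {year}: expense {expense}, book value {value}")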

  16. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how they can be used in specific (social, pedagogical, etc.) contexts and what their potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical

  17. Project Management in Development Aid Industry – Public vs. Private

    Directory of Open Access Journals (Sweden)

    Simović Dragana

    2015-02-01

    This article examines the relationship between the type of a development aid implementing organisation (public or private) and the quality of project management in development aid. The author begins with main public administration considerations: how public aid administration is different from private administration and, furthermore, how particular sectoral characteristics of organisations influence the quality of the management process. The article combines empirical findings on the differences between the public and private sector with the complex setting of development aid and the main success factors in development aid activity, in order to determine whether for-profit or public companies are more likely to achieve better project management processes. The article identifies some indices that favour private companies, and outlines further necessary steps that should be taken in order to broaden the argumentation and confirm or reject this assertion.

  18. Audiovisual aid viewing immediately before pediatric induction moderates the accompanying parents' anxiety.

    Science.gov (United States)

    Berghmans, Johan; Weber, Frank; van Akoleyen, Candyce; Utens, Elisabeth; Adriaenssens, Peter; Klein, Jan; Himpe, Dirk

    2012-04-01

    Parents accompanying their child during induction of anesthesia experience stress. The impact of audiovisual aid (AVA) on parental state anxiety and assessment of the child's anxiety at induction have been studied previously but need closer scrutiny. One hundred and twenty parents whose children were scheduled for day-care surgery entered this randomized, controlled study. The intervention group (n = 60) was exposed to an AVA in the holding area. Parental anxiety was measured with the Spielberger State-Trait Anxiety Inventory and the Amsterdam Preoperative Anxiety and Information Scale (APAIS) at three time points: (i) on admission [T1]; (ii) in the holding area just before entering the operating theater [T2]; and (iii) after leaving [T3]. Additionally, at [T3], both parent and attending anesthetist evaluated the child's anxiety using a visual analogue scale. The anesthetist also filled out the Induction Compliance Checklist. On the state anxiety subscale, APAIS parental anxiety at T2 (P = 0.015) and T3 (P = 0.009) was lower in the AVA intervention group than in the control group. After induction, the child's anxiety rating by the anesthetist was significantly lower than by the parent, in both intervention and control groups. Preoperative AVA shown to parents immediately before induction moderates the increase in anxiety associated with the anesthetic induction of their child. Present results suggest that behavioral characteristics seem better predictors of child's anxiety during induction than anxiety ratings per se and that anesthetists are better than parents in predicting child's anxiety during induction. © 2011 Blackwell Publishing Ltd.

  19. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements on the Public Interest AGENCY: U.S... infringing audiovisual components and products containing the same, imported by Funai Corporation, Inc. of...

  20. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    …the teaching of the media; following the initiative of Spain and Portugal, an analysis of the international protagonists of some university educational models was made. Owing to the expansion and focus of information technology and web communication through the Internet, audiovisual aids as technological instruments have gained utility as dynamic and conciliatory resources, with special characteristics that distinguish them from the other resources in the audiovisual ecosystem. As a result of this research, two lines of application are proposed: A. A proposal for iconic and audiovisual language as a learning objective and/or as a curriculum subject in the university syllabus, which would include workshops on the development of the audiovisual document, digital photography and audiovisual production. B. The use of audiovisual resources as teaching media, which would imply a prior training process for teachers in the activities recommended for teachers and students. As a consequence, suggestions that would allow both lines of academic action to be implemented are presented. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  1. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

    The authors hypothesized that an audiovisual slide presentation that provided treatment information regarding the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire, completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ(2) tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group was comprised of 20 men and 5 women; the written informed group was comprised of 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and potential trismus (P < 0.05). The audiovisual informed group had lower self-reported anxiety scores than the control group 1 week after surgery (P < 0.05). An audiovisual slide presentation could improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar. Copyright © 2015

  2. Public Libraries Participation In Hiv/Aids Awareness Campaign In ...

    African Journals Online (AJOL)

    The paper examines public libraries' involvement in the HIV/AIDS awareness campaign in South West Nigeria. These include the materials and services available on HIV/AIDS and the challenges to their participation in the war against the epidemic. The study revealed that public libraries in South West Nigeria are not participating ...

  3. The audiovisual communication policy of the socialist Government (2004-2009: A neoliberal turn

    Directory of Open Access Journals (Sweden)

    Ramón Zallo, Ph. D.

    2010-01-01

    The first legislature of Jose Luis Rodriguez Zapatero’s government (2004-2008) generated important initiatives for some progressive changes in the public communicative system. However, all of these initiatives have been dissolving in the second legislature to give way to a non-regulated and privatizing model that is detrimental to the public service. Three phases can be distinguished temporally: the first is characterized by interesting reforms, followed by contradictory reforms and, in the second legislature, by an accumulation of counter-reforms that lead the system towards a communicative model completely different from the one devised in the first legislature. This indicates that there have been not one but two different audiovisual policies, running the cyclical route of audiovisual policy from one end to the other. The emphasis has changed from the public service to private concentration; from decentralization to centralization; from the diffusion of knowledge to the accumulation and appropriation of cognitive capital; from the Keynesian model (combined with the Schumpeterian model and a preference for social access) to a delayed return to the neoliberal model, after having distorted the market through public decisions to the benefit of the most important audiovisual service providers. All this seems to crystallize in the striking process of concentration occurring among audiovisual service providers into two large groups, one formed by Mediaset and Sogecable and another, still under negotiation, between Antena 3 and Imagina. A combination of neo-statist restructuring of the market and neo-liberalism.

  4. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response.

  5. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response.

  6. The Notice on the Notion of State Aid and Public Procurement Law

    DEFF Research Database (Denmark)

    Ølykke, Grith Skovgaard

    2016-01-01

    The Commission Notice on the notion of State aid includes elaboration on the relationship between State aid law and public procurement law. To begin with, the article examines some of the reasons why the relationship between State aid law and public procurement law is surrounded by legal uncertainty. Then the elaborations made in the Notice on the notion of aid concerning the relation between the two areas of law are analysed and discussed, in particular, first, the question whether adhering to the procurement procedures laid down in the public procurement directives will eliminate the risk of granting State aid and, second, the issues rising from State aid control of in-house situations. It is concluded that even though the Notice on the notion of aid brings some needed clarity that fosters coherence between State aid law and public procurement law, the existing legal uncertainty is not even

  7. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of the...

  8. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    Science.gov (United States)

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants that are easily visually distracted and those who are not. To do so, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second experiment focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second one, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.

  9. Award of Public Contracts as a Means to Conferring State Aid

    DEFF Research Database (Denmark)

    Fanøe Petersen, Cecilie

    The Thesis investigates the interface between State aid law and public procurement law with an emphasis on analysing when the award of public contracts by contracting authorities constitutes State aid within the meaning of Article 107(1) TFEU. Article 107(1) TFEU prohibits any aid granted by a Me...

  10. Public Reactions to People with HIV/AIDS in the Netherlands

    NARCIS (Netherlands)

    A.E.R. Bos (Arjan); G.J. Kok (Gerjo); A.J. Dijker (Anton)

    2001-01-01

    A national telephone survey was conducted (1) to assess present-day public reactions to people with HIV/AIDS in the Netherlands, (2) to measure how knowledge about highly active antiretroviral therapy (HAART) is related to public reactions to people with HIV/AIDS, and (3) to investigate

  11. Plantilla 1: El documento audiovisual: elementos importantes

    OpenAIRE

    Alemany, Dolores

    2011-01-01

    The concept of the audiovisual document and of audiovisual documentation, going deeper into the distinction between documentation of moving images, with the possible incorporation of sound, and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works and audiovisual heritage according to Félix del Valle.

  12. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  13. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Mehrdad Seirafi; Peter De Weerd; Alan J Pegna; Beatrice de Gelder

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  14. Audio-visual aid in teaching "fatty liver".

    Science.gov (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-05-06

    Use of audiovisual tools to aid in medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various concepts of the topic, while keeping in view Mayer's and Ellaway's guidelines for multimedia presentation. A pre-post test study on subject knowledge was conducted for 100 students with the video shown as the intervention. A retrospective pre-study was conducted as a survey which inquired about students' understanding of the key concepts of the topic, and feedback on our video was taken. Students performed significantly better in the post-test (mean score 8.52 vs. 5.45 in the pre-test), responded positively in the retrospective pre-test and gave positive feedback on our video presentation. Well-designed multimedia tools can aid in cognitive processing and enhance working memory capacity, as shown in our study. In times when "smart" device penetration is high, information and communication tools in medical education, which can act as an essential aid and not as a replacement for traditional curricula, can be beneficial to the students. © 2015 by The International Union of Biochemistry and Molecular Biology, 44:241-245, 2016. © 2015 The International Union of Biochemistry and Molecular Biology.

  15. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage permitted. 2.12 Section 2.12 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the Department...

  16. State Aid as a Defence for Public Authorities?

    DEFF Research Database (Denmark)

    Ølykke, Grith Skovgaard

    2016-01-01

    In the annotated judgment a public authority uses the existence of State aid as a defence in a legal action, where its contractual partner aimed to achieve damages and fulfilment of the contracts. The public authority claimed that the contracts were not on market terms, which also was the national court’s perception. As the contracts had been declared to be in force by a declaratory judgment that was res judicata, the dispute before the CJEU concerned the national interpretation of the principle of res judicata and its application in a State aid context. The CJEU first turned to the principle of consistent interpretation, which it considered could provide various solutions for the national court to draw all the necessary consequences of the possible breach of the duty to notify State aid. In the alternative, the CJEU considered the principle of effectiveness and found that due to the fundamental

  17. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business-models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models, and requirements for extension to support audiovisual files.

  18. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users that are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reverse speech sound, and (iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  19. The Netherlands: The representativeness of trade unions and employer associations in the audiovisual sector

    NARCIS (Netherlands)

    Grunell, M.

    2013-01-01

    The relevance of the Dutch audiovisual sector in terms of the number of employees is negligible. However, in qualitative terms, the sector is influential in Dutch society. The characteristics of collective bargaining are defined by the division into public and commercial broadcasting. In public

  20. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. Also, it is argued that the combination of insights from quantitative experimental studies and qualitative audiovisual film analysis may actually be combined into a more complex understanding of how audiovisual features interact in the minds of their audiences. This is demonstrated through a review of a series of experimental studies. Yet, it is also argued that textual analysis and concepts from within film and music studies can provide insights...

  1. Characterization of the teaching aids in the teaching-learning process in Physical Education

    Directory of Open Access Journals (Sweden)

    César Perazas Zamora

    2017-04-01

    Full Text Available Teaching aids and resources are an important didactic component of the teaching-learning process; they are the material support of teaching, and their adequate use guarantees the quality of the process. With the accelerated development of science, technique and technology, audiovisual aids have come to form part of the teaching-learning process, humanizing the teacher’s work and favouring the transmission of knowledge with a truly scientific approach. The objective of this article is to highlight the main concepts, definitions and advantages of the teaching aids most used nowadays, their importance as a didactic component and their adequate use in the teaching-learning process, linked with objective, method and content, ensuring lasting learning that contributes to raising the integral general culture of the students. The article also deals with audiovisual aids as one of the components of the teaching-learning process, offering concepts and definitions from different authors and emphasizing the advantages, uses and importance of their systematic and planned use.

  2. Las aventuras de Zamba. Some notes on audiovisual communication in a TV channel for children of the argentinian Ministry of Education

    Directory of Open Access Journals (Sweden)

    Sabina Crivelli

    2015-12-01

    Full Text Available From 2009, within the frame of a process of de-monopolization of audiovisual communication, several public policies were developed in Argentina with the purpose of extending participation in the production of audiovisual contents. In this paper, the main aesthetic qualities of an audiovisual program, Las aventuras de Zamba, produced by a State-run TV channel for children, are analyzed. Some tensions arising in the relationship between state and market in producing artistic representations of otherness are also examined.

  3. Audio-Visual Aid in Teaching "Fatty Liver"

    Science.gov (United States)

    Dash, Sambit; Kamath, Ullas; Rao, Guruprasad; Prakash, Jay; Mishra, Snigdha

    2016-01-01

    Use of audio-visual tools to aid in medical education is ever on the rise. Our study intends to find the efficacy of a video prepared on "fatty liver," a topic that is often a challenge for pre-clinical teachers, in enhancing cognitive processing and ultimately learning. We prepared a video presentation of 11:36 min, incorporating various…

  4. La regulación audiovisual: argumentos a favor y en contra [Audiovisual regulation: arguments for and against]

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    Full Text Available The article analyzes the effectiveness of audiovisual regulation and assesses the various arguments for and against the existence of regulatory councils at the state level. The debate about the need for such a body in Spain still persists. Most European Union countries have established competent authorities in this field, such as OFCOM in the United Kingdom or the CSA in France. In Spain, audiovisual regulation is limited to bodies of regional scope, such as the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l’Audiovisual de Catalunya (CAC), whose model is also addressed in this article.

  5. Audiovisual perception in amblyopia: A review and synthesis.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  6. Talking about AIDS in Hong Kong: Cultural Models in Public Health Discourse.

    Science.gov (United States)

    Jones, Rodney H.

    A study explored the issues of cultural identity and interaction in public health discourse concerning Acquired Immune Deficiency Syndrome (AIDS) in Hong Kong's multilingual, multicultural social context. Twenty public service announcements (PSAs) concerning AIDS awareness televised in both English and Cantonese in Hong Kong from 1987 to 1994 were…

  7. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  8. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively

  9. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning problem...
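
    The alternating coding/learning scheme described above can be illustrated with a toy, single-modality sketch in Python. This is only a minimal analogue under stated assumptions: the published algorithm learns bimodal audio-video kernels positioned in space and time, whereas the sketch below codes a 1-D signal with shift-invariant kernels via matching pursuit and then updates the kernels on the residual; the function names (reconstruct, matching_pursuit, learn_kernels) are illustrative.

```python
# Minimal single-modality sketch (illustrative, not the authors' bimodal algorithm):
# a signal is approximated as a sparse sum of shift-invariant kernels via matching
# pursuit (the "coding" step), and the kernels are nudged along the residual
# gradient (the "learning" step), alternating between the two.
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(length, kernels, code):
    """Rebuild a signal from (kernel index, shift, gain) atoms."""
    out = np.zeros(length)
    klen = kernels.shape[1]
    for k, shift, gain in code:
        out[shift:shift + klen] += gain * kernels[k]
    return out

def matching_pursuit(signal, kernels, n_atoms):
    """Coding step: greedily pick the (kernel, shift, gain) atom that best
    matches the current residual, n_atoms times."""
    residual = signal.copy()
    klen = kernels.shape[1]
    code = []
    for _ in range(n_atoms):
        best_val, best_k, best_shift = 0.0, 0, 0
        for k, ker in enumerate(kernels):
            corr = np.correlate(residual, ker, mode="valid")
            s = int(np.argmax(np.abs(corr)))
            if abs(corr[s]) > abs(best_val):
                best_val, best_k, best_shift = corr[s], k, s
        residual[best_shift:best_shift + klen] -= best_val * kernels[best_k]
        code.append((best_k, best_shift, best_val))
    return code

def learn_kernels(signals, n_kernels=3, klen=16, n_atoms=4, n_iter=30, lr=0.05):
    """Alternate sparse coding and kernel updates over a set of training signals."""
    kernels = rng.standard_normal((n_kernels, klen))
    kernels /= np.linalg.norm(kernels, axis=1, keepdims=True)
    for _ in range(n_iter):
        for sig in signals:
            code = matching_pursuit(sig, kernels, n_atoms)
            residual = sig - reconstruct(len(sig), kernels, code)
            for k, shift, gain in code:               # gradient step on the residual
                kernels[k] += lr * gain * residual[shift:shift + klen]
                kernels[k] /= np.linalg.norm(kernels[k]) + 1e-12
    return kernels

# Toy usage: training signals built from two hidden kernels placed at random shifts.
true_kernels = rng.standard_normal((2, 16))
signals = []
for _ in range(20):
    s = np.zeros(128)
    for k in range(2):
        pos = rng.integers(0, 128 - 16)
        s[pos:pos + 16] += true_kernels[k]
    signals.append(s)
print(learn_kernels(signals).shape)   # (3, 16)
```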

  10. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we show that integrating the information from audio and video leads to an improved reliability of audiovisual approach in

  11. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  12. Accounts from the field: a public relations perspective on global AIDS/HIV.

    Science.gov (United States)

    Bardhan, Nilanjana R

    2002-01-01

    This study is a theoretical as well as empirical exploration of the power and cultural differentials that mark and construct various intersecting discourses, specifically media discourse, on global AIDS/HIV. It applies the language and concepts of public relations to understand how the press coverage of the pandemic is associated with the variables that impact the newsmaking process as well as the public and policy implications of macro news frames generated over time. Theoretical work in the areas of agenda setting and news framing also informs the conceptual framework of this analysis. Narrative analysis is used as a methodology to qualitatively analyze three pools of accounts: from people either living with AIDS/HIV, involved in AIDS/HIV work, or discursively engaged in the media construction of the pandemic; from transnational wire service journalists who cover the issue at global and regional levels; and from policy shapers and communicators who are active at the global level. These three communities of respondents represent important stakeholders in the AIDS/HIV issue. The findings are analyzed from a public relations standpoint. Perhaps the most important finding of this study is that the public relations approaches used to address AIDS/HIV related issues need to be grounded in context-specific research and communicative practices that bring out the lived realities of AIDS/HIV at grassroots levels. The findings also posit that those situated at critical junctions between various stakeholders need to cultivate a finely balanced understanding of the etic and emic intersections and subjectivities of global/local AIDS/HIV.

  13. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Directory of Open Access Journals (Sweden)

    Kirsten E Smayda

    Full Text Available Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger

  14. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  15. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  16. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  17. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    in a manner that allowed the subjective audiovisual evaluation of loudspeakers under controlled conditions. Additionally, unimodal audio and visual evaluations were used as a baseline for comparison. The same procedure was applied in the investigation of the validity of less than optimal stimuli presentations...

  18. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  19. "I'm very visible but seldom seen": consumer choice and use of mobility aids on public transport.

    Science.gov (United States)

    Unsworth, Carolyn A; Rawat, Vijay; Sullivan, John; Tay, Richard; Naweed, Anjum; Gudimetla, Prasad

    2017-11-28

    The number of mobility aid users continues to rise as the population ages. While mobility aid users rely on public transport due to its affordability, evidence suggests access can be difficult. This study aims to describe people who use mobility aids to access public transport and the role of public transport access in influencing mobility aid choice. Sixty-seven mobility aid users participated in telephone surveys which predominantly used a structured quantitative format. Data were analysed descriptively and any additional comments were simply categorized. Thirty-six participants were female (54%), with a total sample mean age of 58.15 years (SD = 14.46). Seventy-two percent lived in metropolitan areas, 48% lived alone, and the sample experienced a variety of conditions including spinal cord injury (37%) and arthritis (18%). Sixty-four percent of all respondents used two or more mobility aids including powered wheelchairs, scooters and walking frames. The most important features when choosing a mobility aid were reliability, turning ability and size. Fifty-two percent of all respondents strongly agreed that public transport is generally accessible. While work continues to ensure that public transport vehicles and stations are fully accessible, mobility aid users must manage current infrastructure and access a system which has been shown through this research to have many limitations. Mobility aid users, vendors and health professionals need to work together to identify mobility aids that fulfil needs, and are reliable and safe, so that mobility aid users are both "visible and seen" when accessing the public transport network. Implications for rehabilitation Some mobility aid users experience difficulties accessing and using public transport and further research is required to ensure the whole public transport network is fully accessible to people using mobility aids. Many people have more than one seated mobility aid, suggesting people can choose different

  20. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  1. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in areas of medical, dental, nursing and allied health, and veterinary medicine; undergraduate, and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  2. Development of user guidelines for ECAS display design. Volume 2: Tasks 9 and 10. [educating the public to the benefits of spacelab and the space transportation system]

    Science.gov (United States)

    Bathurst, D. B.

    1979-01-01

    Lay-oriented speaker aids, articles, a booklet, and a press kit were developed to provide the press and the general public with background information on the space transportation system, Spacelab, and Spacelab 1 experiments. Educational materials relating to solar-terrestrial physics and its potential benefits to mankind were also written. A basic network for distributing audiovisual and printed materials to regional secondary schools and universities was developed. Suggested scripts to be used with visual aids describing materials science and technology and astronomy and solar physics are presented.

  3. Audio-visual training-aid for speechreading

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich; Gebert, H.

    2011-01-01

    People with decreasing hearing ability are more dependent on alternative personal communication channels. To ‘read and understand’ visible articulatory movements of the conversation partner, as done in the process of speechreading, is one possible solution for understanding verbal statements … on the employment of computer‐based communication aids for hearing‐impaired, deaf and deaf‐blind people [6]. This paper presents the complete system that is composed of a 3D facial animation with synchronized speech synthesis, a natural language dialogue unit and a student‐teacher training module. Due to the very modular structure of the software package and the centralized event manager, it is possible to add or replace specific modules when needed. The present version of our teacher‐student module uses a hierarchically structured composition of important single words and short phrases, supplemented by easy…

  4. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  5. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  6. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  7. A Catalan code of best practices for the audiovisual sector

    OpenAIRE

    Teodoro, Emma; Casanovas, Pompeu

    2010-01-01

    In spite of a new general law regarding Audiovisual Communication, the regulatory framework of the audiovisual sector in Spain can still be described as huge, dispersed and obsolete. The first part of this paper provides an overview of the major challenges of the Spanish audiovisual sector as a result of the convergence of platforms, services and operators, paying special attention to the audiovisual sector in Catalonia. In the second part, we will present an example of self-regulation through...

  8. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  9. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    Science.gov (United States)

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  10. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  11. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  12. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
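
    The adaptation effect reported above, the shift in the mean of an individually fitted psychometric function after adapting to asynchrony, can be sketched as follows. The descending cumulative-Gaussian form, the example proportions, and the function names are illustrative assumptions, not the study's analysis code.

```python
# Illustrative sketch: fit a psychometric function to the proportion of
# "synchronous" responses for sound-lag pairs and take the shift of its mean
# after adapting to asynchrony as the adaptation effect.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_synchronous(lag_ms, mu, sigma):
    """Probability of judging a sound-lag pair as synchronous."""
    return 1.0 - norm.cdf(lag_ms, loc=mu, scale=sigma)

def fitted_mean(lags, proportions):
    (mu, _sigma), _ = curve_fit(p_synchronous, lags, proportions, p0=[200.0, 80.0])
    return mu

# Hypothetical proportions for one observer, measured after adapting to
# synchrony and after adapting to a 230-ms sound lag.
lags = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
after_sync = np.array([0.98, 0.90, 0.55, 0.20, 0.05, 0.01])
after_async = np.array([0.99, 0.95, 0.75, 0.40, 0.12, 0.03])

shift_ms = fitted_mean(lags, after_async) - fitted_mean(lags, after_sync)
print(round(shift_ms, 1))   # positive shift: more sound-lag pairs judged synchronous
```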

  13. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC. The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD, but audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that the abnormal audiovisual integration might be a potential early manifestation of PD.
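
    The race-model criterion mentioned in the methods can be made concrete with a short sketch. Under Miller's race-model inequality, facilitation that exceeds what two independent unimodal processes could produce shows up as P(RT_AV ≤ t) > P(RT_A ≤ t) + P(RT_V ≤ t) for some t. The response times and function names below are hypothetical, not data or code from the study.

```python
# Hypothetical illustration of the race-model test: multisensory facilitation
# beyond the bound P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) suggests genuine
# audiovisual integration rather than purely statistical facilitation.
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a time grid (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Positive values indicate the audiovisual CDF exceeds the race-model bound."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Hypothetical response times (ms) for one participant.
rng = np.random.default_rng(42)
rt_auditory = rng.normal(420, 60, 80)
rt_visual = rng.normal(440, 60, 80)
rt_audiovisual = rng.normal(370, 55, 80)

grid = np.linspace(150, 800, 66)
violation = race_model_violation(rt_auditory, rt_visual, rt_audiovisual, grid)
print(violation.max() > 0)   # True would be consistent with multisensory integration
```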

  14. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    Science.gov (United States)

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.

  15. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    Science.gov (United States)

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  16. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is whether perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity … on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less…

  17. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  18. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

    Full Text Available This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, difficulties imposed by the verbal foundation of knowledge, their reduced use within libraries, and possible paths toward a more consistent analysis of audiovisual media, supported by the semiotics of Charles Peirce.

  19. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with the subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed by the JPEG codec and sound samples compressed by MPEG-1 Layer III. The images and sounds have varied content. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e. uncompressed, signals.

  20. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole. 

  1. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is, the case for which the network is already set up, the aud...
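
    The three-module structure described above (audio and video quality modules feeding an audiovisual integration module) can be sketched generically. The sketch below only illustrates a common functional form for such integration; the coefficients and the function name audiovisual_quality are placeholders, not the fitted model presented in this volume.

```python
# Illustrative shape only: parametric models of this family typically combine the
# per-modality quality estimates (MOS scale, 1-5) linearly plus an interaction term.
# The coefficients below are placeholders, not fitted values from this model.
def audiovisual_quality(mos_audio: float, mos_video: float,
                        a: float = 1.0, b: float = 0.15,
                        c: float = 0.25, d: float = 0.12) -> float:
    """Integrate per-modality MOS estimates into an audiovisual MOS estimate."""
    mos_av = a + b * mos_audio + c * mos_video + d * mos_audio * mos_video
    return max(1.0, min(5.0, mos_av))          # clip to the MOS scale

# Example: good audio, moderately degraded video.
print(audiovisual_quality(mos_audio=4.2, mos_video=3.1))
```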

  2. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

    Full Text Available This paper presents the results of a study on the process of acquiring interpretative skills to decode audiovisual texts among adolescents and youth. Based on the conception of such competence as the ability to understand the meanings connoted beneath the literal discourses of audiovisual texts, this study compared two variables: on the one hand, the acquisition of such skills from personal and social experience in the consumption of audiovisual products (which is affected by age differences), and, on the other hand, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  3. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  4. Improving pharmacy practice through public health programs: experience from Global HIV/AIDS initiative Nigeria project.

    Science.gov (United States)

    Oqua, Dorothy; Agu, Kenneth Anene; Isah, Mohammed Alfa; Onoh, Obialunamma U; Iyaji, Paul G; Wutoh, Anthony K; King, Rosalyn C

    2013-01-01

    The use of medicines is an essential component of many public health programs (PHPs). Medicines are important not only for their capacity to treat and prevent diseases. Public confidence in the healthcare system is inevitably linked to confidence in the availability of safe and effective medicines and the measures for ensuring their rational use. However, the pharmacy services component receives little or no attention in most public health programs in developing countries. This article describes the strategies, lessons learnt, and some accomplishments of the Howard University Pharmacists and Continuing Education (HU-PACE) Centre towards improving hospital pharmacy practice through PHP in Nigeria. In a cross-sectional survey, 60 hospital pharmacies were randomly selected from 184 GHAIN-supported health facilities. The assessment was conducted at baseline and repeated after at least 12 months post-intervention using a study-specific instrument. Interventions included engagement of stakeholders; provision of standards for infrastructural upgrade; development of curricula and modules for training of pharmacy personnel; and provision of job aids and tools, amongst others. A follow-up hands-on skill enhancement based on identified gaps was conducted. Chi-square was used for inferential statistics. All reported p-values were 2-tailed at a 95% confidence interval. The mean duration of service provision at post-intervention assessment was 24.39 (95% CI, 21.70-27.08) months. About 16.7% of pharmacies reported being trained in HIV care at pre-intervention compared to 83.3% at post-intervention. The proportion of pharmacies with audio-visual privacy for patient counseling increased significantly from 30.9% at pre-intervention to 81.4% at post-intervention. Filled prescriptions were cross-checked by pharmacist (61.9%) and pharmacy technician (23.8%) before dispensing at pre-intervention compared to pharmacist (93.1%) and pharmacy technician (6.9%) at post-intervention. 40.0% of

  5. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, or even converted to another modality; in this case, quality should be considered from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model semantic quality, we apply the concept of a "conceptual graph", which consists of semantic nodes and the relations between them. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content whose video and audio channels may be strongly degraded, or whose audio may even be converted to text. In the experiments, we also consider a perceptual quality model of audiovisual content, so as to compare it with the semantic quality model.
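
    The abstract models semantic quality with a conceptual graph of nodes and relations; one simple way to score how much meaning survives adaptation is the fraction of nodes and relations of the source graph that are preserved in the adapted content. A minimal sketch under that assumption (the equal weighting and the toy graph are illustrative, not the authors' model):

    # Toy semantic-quality score: overlap between the conceptual graphs of the original
    # content and an adapted version (e.g., after audio-to-text conversion). Illustrative only.
    original_nodes = {"goal", "player", "crowd", "stadium"}
    original_edges = {("player", "scores", "goal"), ("crowd", "cheers", "player")}

    adapted_nodes = {"goal", "player"}                 # video dropped, audio converted to text
    adapted_edges = {("player", "scores", "goal")}

    def semantic_quality(nodes_a, edges_a, nodes_b, edges_b, w_nodes=0.5, w_edges=0.5):
        """Fraction of source concepts and relations preserved after adaptation."""
        node_recall = len(nodes_a & nodes_b) / len(nodes_a)
        edge_recall = len(edges_a & edges_b) / len(edges_a)
        return w_nodes * node_recall + w_edges * edge_recall

    print(semantic_quality(original_nodes, original_edges, adapted_nodes, adapted_edges))  # 0.5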

  6. Awareness of Public Library and Utilization of its HIV/AIDS ...

    African Journals Online (AJOL)

    Nekky Umera

    public library in their city; positive respondents were then implored to provide answers to .... In a study of the impact of Youth's Use of the Internet on the Public Library by .... novels of adventure, modern music, comics, games and sports, cinema and library internet .... Have been to video shows on HIV/AIDS organized by the ...

  7. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  8. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. Need-Based Educational Aid Act of 2015 (Public Law 114-44)

    Science.gov (United States)

    US Congress, 2015

    2015-01-01

    The Need-Based Educational Aid Act of 2015 (Public Law 114-44) was put in place to improve and reauthorize provisions relating to the application of the antitrust laws to the award of need-based educational aid. The contents of this Act are as follows: (1) Short Title; and (2) Extension Relating to the Application of the Antitrust Laws to the…

  10. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  11. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  12. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  13. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities in the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the features of audiovisual media texts use in a series of social sciences and humanities in the University curriculum.

  14. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.

  15. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, when a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about the processing of multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
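
    The superadditive, additive and suppressive interactions reported above are conventionally classified by comparing the audiovisual response against the sum of the unimodal responses. A minimal sketch with hypothetical mean evoked responses; the tolerance threshold and the additive criterion are simplifications of the statistics typically used in such recordings:

    # Classify a multisensory interaction from mean evoked responses (hypothetical values).
    def interaction_type(av, a, v, tolerance=0.05):
        """Compare the audiovisual response with the predicted additive response (A + V)."""
        predicted = a + v
        index = (av - predicted) / predicted      # relative enhancement or suppression
        if index > tolerance:
            return "superadditive", index
        if index < -tolerance:
            return "suppressive", index
        return "additive", index

    print(interaction_type(av=18.0, a=6.0, v=8.0))   # superadditive (~ +29%)
    print(interaction_type(av=10.0, a=6.0, v=8.0))   # suppressive  (~ -29%)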

  16. The presentation of expert testimony via live audio-visual communication.

    Science.gov (United States)

    Miller, R D

    1991-01-01

    As part of a national effort to improve efficiency in court procedures, the American Bar Association has recommended, on the basis of a number of pilot studies, increased use of current audio-visual technology, such as telephone and live video communication, to eliminate delays caused by unavailability of participants in both civil and criminal procedures. Although these recommendations were made to facilitate court proceedings, and for the convenience of attorneys and judges, they also have the potential to save significant time for clinical expert witnesses as well. The author reviews the studies of telephone testimony that were done by the American Bar Association and other legal research groups, as well as the experience in one state forensic evaluation and treatment center. He also reviewed the case law on the issue of remote testimony. He then presents data from a national survey of state attorneys general concerning the admissibility of testimony via audio-visual means, including video depositions. Finally, he concludes that the option to testify by telephone provides a significant savings in precious clinical time for forensic clinicians in public facilities, and urges that such clinicians work actively to convince courts and/or legislatures in states that do not permit such testimony (currently the majority), to consider accepting it, to improve the effective use of scarce clinical resources in public facilities.

  17. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  18. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, and especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter known to only minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  19. Long-term music training modulates the recalibration of audiovisual simultaneity.

    Science.gov (United States)

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e., the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  20. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
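
    The search benefit described above hinges on the kind of temporal modulation: abrupt (square-wave) changes support synchrony-driven binding, whereas gradual sinusoidal changes do not. A minimal sketch generating the two kinds of modulator and summarizing their transience; the 1 Hz rate and the sampling rate are arbitrary illustration values, not the study's parameters:

    # Sinusoidal vs. square-wave temporal modulators for an audiovisual stimulus.
    # The square wave contains abrupt transients; the sine wave changes gradually.
    import numpy as np

    rate_hz = 1.0                              # illustrative modulation rate
    fs = 1000                                  # samples per second
    t = np.arange(0, 2.0, 1.0 / fs)

    sine_mod = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))            # gradual change
    square_mod = (np.sin(2 * np.pi * rate_hz * t) > 0).astype(float)  # abrupt on/off transients

    # Transience can be summarized by the maximum rate of change of each modulator.
    print("max |d/dt| sine  :", np.max(np.abs(np.diff(sine_mod))) * fs)
    print("max |d/dt| square:", np.max(np.abs(np.diff(square_mod))) * fs)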

  1. Gestión documental de la información audiovisual deportiva en las televisiones generalistas Documentary management of the sport audio-visual information in the generalist televisions

    Directory of Open Access Journals (Sweden)

    Jorge Caldera Serrano

    2005-01-01

    Full Text Available The management of sport audiovisual information is analyzed within the framework of the Documentary Information Systems of national, regional and local television channels. To this end, the documentary chain through which sport audiovisual information passes is reviewed in order to analyze each of its parameters, offering a series of recommendations and standards for the preparation of the sport audiovisual record. Evidently, sport audiovisual documentation does not differ greatly from other types of television documents in its analysis, so its management and diffusion are examined in greater depth, showing the informational flow within the System.

  2. Challenges and opportunities for audiovisual diversity in the Internet

    Directory of Open Access Journals (Sweden)

    Trinidad García Leiva

    2017-06-01

    Full Text Available http://dx.doi.org/10.5007/2175-7984.2017v16n35p132 Approaching the end of the first quarter of the 21st century, nobody doubts that the value chain of the audiovisual industry has undergone important transformations. The digital era presents opportunities for cultural enrichment as well as new challenges. After presenting a general portrait of the audiovisual industries in the digital era, taking the Spanish case as a point of departure and paying attention to the players and logics in tension, this paper presents some notes on the advantages and disadvantages that exist for the diversity of audiovisual production, distribution and consumption online. It is argued here that the diversity of the audiovisual sector online is not guaranteed, because the formula that has made some players successful and powerful is based on walled-garden models for monetizing content (which, moreover, add restrictions to its reproduction and circulation by and among consumers). The final objective is to present some ideas about the elements that prevent the strengthening of the diversity of the audiovisual industry in the digital scenario. The barriers to overcome are classified as technological, financial, social, legal and political.

  3. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

    Full Text Available The use of cutting-edge technologies such as wearable devices to control reactive audiovisual systems is rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data-glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture for the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in "As sete mulheres de Jeremias Epicentro", a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.
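
    The glove communicates with the audiovisual systems over the Open Sound Control (OSC) protocol. A minimal sketch of how such gesture data could be forwarded as OSC messages using the python-osc package; the address patterns, host and port are assumptions for illustration, not the production setup of the WMTSensorGlove:

    # Forward glove sensor readings to an audiovisual engine as OSC messages.
    # Requires the python-osc package (pip install python-osc). Addresses and port are examples.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)      # audiovisual engine listening on UDP 9000

    def send_glove_frame(flex_values, orientation):
        """Send one frame of finger-flex and hand-orientation data."""
        client.send_message("/glove/flex", flex_values)          # e.g., five flex-sensor readings
        client.send_message("/glove/orientation", orientation)   # e.g., roll, pitch, yaw in degrees

    send_glove_frame([0.1, 0.4, 0.9, 0.3, 0.2], [12.0, -5.0, 90.0])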

  4. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages....

  5. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05), whereas audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
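
    Comparing response-time cumulative distribution functions (CDFs) is the standard way to quantify audiovisual integration in this kind of detection task: the audiovisual CDF is tested against a race-model bound built from the unimodal CDFs. A minimal sketch on simulated response times; it illustrates the logic only and does not reproduce the statistics used in the study:

    # Race-model style comparison of audiovisual vs. unimodal response-time CDFs (simulated data).
    import numpy as np

    rng = np.random.default_rng(0)
    rt_a = rng.normal(320, 40, 500)     # auditory-only RTs (ms), simulated
    rt_v = rng.normal(350, 45, 500)     # visual-only RTs (ms), simulated
    rt_av = rng.normal(280, 35, 500)    # audiovisual RTs (ms), simulated

    grid = np.arange(200, 501, 10)      # time points at which the CDFs are evaluated

    def cdf(rt, t):
        return np.mean(rt[:, None] <= t, axis=0)

    p_a, p_v, p_av = cdf(rt_a, grid), cdf(rt_v, grid), cdf(rt_av, grid)
    race_bound = np.minimum(p_a + p_v, 1.0)   # Miller's race-model inequality bound

    violation = p_av - race_bound             # positive values indicate multisensory integration
    print("max violation:", violation.max(), "at t =", grid[violation.argmax()], "ms")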

  6. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on the diversity of audiovisual integration between younger and older adults. However, consecutive trends in audiovisual integration throughout life are still unclear. In the present study, to clarify audiovisual integration characteristics in middle-aged adults, we instructed younger and middle-aged adults to perform an auditory/visual stimulus discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented in the left or right hemispace of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration was attenuated in middle-aged adults and further confirmed age-related decline in information processing.

  7. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues: two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Teacher’s Voice on Metacognitive Strategy Based Instruction Using Audio Visual Aids for Listening

    Directory of Open Access Journals (Sweden)

    Salasiah Salasiah

    2018-02-01

    Full Text Available This paper primarily explores the teacher's voice on the application of a metacognitive strategy with audio-visual aids to improve listening comprehension. The metacognitive strategy model applied in the study was inspired by Vandergrift and Tafaghodtari's (2010) instructional model; its procedure was modified and combined with audio-visual aids for improving listening comprehension. The study was set at SMA Negeri 2 Parepare, South Sulawesi Province, Indonesia. The population was the tenth-grade English teachers at SMAN 2, and the sample was taken using a random sampling technique. The data were collected through in-depth interviews during the research, recorded, and analyzed qualitatively. The study explored the teacher's responses, both positive and negative, toward the modified metacognitive strategy model with audio-visual aids in the listening class. The results showed that this strategy helped the teacher considerably in teaching listening comprehension, as the procedure provides systematic steps toward students' listening comprehension. It also makes teaching listening easier by drawing on audio-visual aids such as videos taken from YouTube.

  9. Establishing evidence-informed core intervention competencies in psychological first aid for public health personnel.

    Science.gov (United States)

    Parker, Cindy L; Everly, George S; Barnett, Daniel J; Links, Jonathan M

    2006-01-01

    A full-scale public health response to disasters must attend to both the physical and mental health needs of affected communities. Public health preparedness efforts can be greatly expanded to address the latter set of needs, particularly in light of the high ratio of psychological to physical casualties that often rapidly overwhelms existing mental health response resources in a large-scale emergency. Psychological first aid--the provision of basic psychological care in the short term aftermath of a traumatic event--is a mental health response skill set that public health personnel can readily acquire with proper training. The application of psychological first aid by public health workers can significantly augment front-line community-based mental health responses during the crisis phase of an event. To help achieve this augmented response, we have developed a set of psychological first aid intervention competencies for public health personnel. These competencies, empirically grounded and based on best practice models and consensus statements from leading mental health organizations, represent a necessary step for developing a public health workforce that can better respond to the psychological needs of impacted populations in disasters.

  10. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sport audiovisual documentation within the Documentary Information Systems of national, regional and local television channels is analyzed. To this end, the documentary chain through which sport audiovisual information passes is reviewed in order to analyze each of its parameters, offering a series of recommendations and standards for the preparation of the sport audiovisual record. Evidently, the sport audiovisual documentation does not differ ...

  11. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
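
    The trial-by-trial dependence reported above can be checked by splitting synchrony judgments according to the modality order of the preceding trial and comparing the resulting proportions (or fitted PSS values). A minimal sketch on hypothetical trial data; the lags and responses are made up for illustration and do not reproduce the study's analysis:

    # Split synchrony judgments by the modality order of the preceding trial (hypothetical data).
    import numpy as np

    # Per trial: audiovisual lag (ms, positive = vision leads) and the binary "synchronous" response.
    lags      = np.array([ 80, -120,  40, -60, 150, -40,  20, -100,  60, -80])
    responses = np.array([  1,    0,   1,   1,   0,   1,   1,    0,   1,   0])
    order     = np.where(lags > 0, "VA", "AV")   # which modality led on each trial

    prev_order = order[:-1]       # modality order of the preceding trial
    curr_resp = responses[1:]     # response on the current trial

    for prev in ("AV", "VA"):
        mask = prev_order == prev
        print(f"after {prev}-leading trials: P(synchronous) = {curr_resp[mask].mean():.2f}")
    # A shift in these proportions (or in PSS values fitted separately for the two subsets)
    # with preceding-trial order is the signature of rapid recalibration described above.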

  12. Narrativa audiovisual. Estrategias y recursos [Reseña

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

    Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  13. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
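
    Residual motion is quantified above as the standard deviation of the respiratory signal inside the gating window, with the window chosen to yield a given duty cycle. A minimal sketch on a synthetic breathing trace, assuming displacement-based gating around end-exhalation; the waveform and sampling rate are illustrative, not patient data:

    # Residual motion inside a displacement-based gating window for a synthetic respiratory trace.
    import numpy as np

    fs = 25                                    # samples per second
    t = np.arange(0, 60, 1.0 / fs)             # one minute of breathing
    signal = 1.0 - np.cos(2 * np.pi * t / 4)   # synthetic 4-second respiratory cycle (0 = end-exhale)

    def residual_motion(signal, duty_cycle):
        """Std of the respiratory signal within the displacement window giving this duty cycle."""
        threshold = np.quantile(signal, duty_cycle)   # gate is open while displacement is below threshold
        gated = signal[signal <= threshold]
        return gated.std()

    for duty in (0.3, 0.4, 0.5, 0.6):
        print(f"duty cycle {duty:.0%}: residual motion = {residual_motion(signal, duty):.3f}")
    # Larger duty cycles admit more of the breathing cycle into the window, consistent with the
    # sharp increase in residual motion above a 50% duty cycle reported in the abstract.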

  14. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive b......Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre...... from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change suggesting that audiovisual...... integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration...

  15. Gestión documental de la información audiovisual deportiva en las televisiones generalistas

    Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Zapico Alonso

    2005-01-01

    The management of sport audiovisual information is analyzed within the framework of the Documentary Information Systems of national, regional and local television channels. To this end, the documentary chain through which sport audiovisual information passes is reviewed in order to analyze each of its parameters, offering a series of recommendations and standards for the preparation of the sport audiovisual record. Evidently, the sport audiovisual documentation...

  16. The role of the public service broadcasting in the european countries

    Directory of Open Access Journals (Sweden)

    Budacia Elisabeta Andreea

    2008-04-01

    Full Text Available Broadcasting in particular has seen remarkable change from the days of single-channel public broadcasting systems. The audiovisual "explosion" is a cultural, social and economic phenomenon of global dimensions. The audiovisual sector forms an essential part of Europe's economic and cultural influence in the world. The fundamental principle of the Union's audiovisual policy is to provide for the free circulation and reception of trans-frontier broadcasts, so the European audiovisual industry is likely to become a stronger and more competitive player on the global scene. The future of public service broadcasting in Europe is increasingly challenged by unfavorable external factors, such as intensifying competition from commercial media, media concentrations, and political and economic interests adverse to independent media, and by internal difficulties, such as cost ineffectiveness.

  17. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  18. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background by expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
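
    The segmentation rests on measuring local audiovisual correlation with quadratic mutual information estimated by kernel density estimation. A much-simplified sketch of the underlying idea, using a histogram mutual-information estimate between an audio energy feature and the motion of a candidate image region; the actual method uses Renyi quadratic mutual information with adaptive kernel bandwidths and feeds the scores into a graph cut:

    # Simplified audiovisual correlation: histogram mutual information between an audio feature
    # (frame-wise energy) and a visual feature (frame-wise motion of a candidate region).
    import numpy as np

    def mutual_information(x, y, bins=8):
        """Histogram estimate of I(X; Y) in nats for two 1-D feature sequences."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nonzero = pxy > 0
        return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

    rng = np.random.default_rng(1)
    audio_energy = rng.normal(size=300)
    mouth_motion = 0.8 * audio_energy + 0.6 * rng.normal(size=300)    # correlated with the audio
    background_motion = rng.normal(size=300)                          # unrelated to the audio

    print("MI(audio, mouth region):     ", mutual_information(audio_energy, mouth_motion))
    print("MI(audio, background region):", mutual_information(audio_energy, background_motion))
    # Regions with high audiovisual correlation are favoured as the speaker's face region.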

  19. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  20. Audiovisual Script Writing.

    Science.gov (United States)

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  1. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e. a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  2. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  3. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  4. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    Science.gov (United States)

    Moirand, Sophie

    1974-01-01

    This article examines the usefulness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  5. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation was significantly different between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  6. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos Women and the audiovisual industry today: regression, experiment and new narrative models

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

    Full Text Available This article analyses audiovisual artistic practices in the contemporary context. It first describes the regression of audiovisual practices carried out by women artists: women are present neither as producers, nor as directors, nor as executives in the audiovisual industry, a condition that inevitably reconstitutes and reinforces traditional gender stereotypes. The article then looks at feminist audiovisual art practice in the nineteen seventies and eighties, when taking up the camera became absolutely necessary, not only to give voice to many women but also to reinscribe absent discourses and to articulate a critical discourse on cultural representation. It also analyses how, from the nineteen nineties onwards, these practices explore new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an "expanded field" of exhibition. Finally, the article points to the relationship between feminist audiovisual practices and the complex territory of globalization and the information society: the narration of local experience has found in the audiovisual medium a privileged means of addressing issues of difference, identity, race and ethnicity.

  7. Prácticas de producción audiovisual universitaria reflejadas en los trabajos presentados en la muestra audiovisual universitaria Ventanas 2005-2009

    Directory of Open Access Journals (Sweden)

    Maria Urbanczyk

    2011-01-01

    Full Text Available This article presents the results of research on university audiovisual production in Colombia, based on the works submitted to the Ventanas 2005-2009 university audiovisual showcase. The study sought to cover as completely as possible the audiovisual production process carried out by university students, from the birth of the idea to the final product, its circulation and socialization. The most recurrent themes were found to be violence and feelings, reflected through different genres, aesthetic treatments and conceptual approaches. Given the absence of research legitimizing the knowledge produced in the classroom in the audiovisual field in Colombia, this study aims to open a path for demonstrating the contribution that young people make to the consolidation of a national narrative and to the preservation of the country's memory.

  8. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction in the time period of about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction is related to: the processing of acoustic features or the classification of stimuli. To investigate this question, event-related potentials were recorded…

  9. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  10. A Self-Instructional Course in Student Financial Aid Administration. Module 16: Forms and Publications. Second Edition.

    Science.gov (United States)

    Washington Consulting Group, Inc., Washington, DC.

    Module 16 (in a 17-module self-instructional course on student financial aid administration for novice financial aid administrators and other institutional personnel) discusses forms and publications that should be developed and used by the financial aid office. The full course is an introduction to the management of federal financial aid programs…

  11. Evolving with modern technology: Impact of incorporating audiovisual aids in preanesthetic checkup clinics on patient education and anxiety.

    Science.gov (United States)

    Kaur, Haramritpal; Singh, Gurpreet; Singh, Amandeep; Sharda, Gagandeep; Aggarwal, Shobha

    2016-01-01

    Perioperative stress is an often ignored but commonly occurring phenomenon, and little or no prior knowledge of anesthesia techniques can increase it significantly. Patients awaiting surgery may experience high levels of anxiety, and the preoperative visit is an ideal time to educate patients about anesthesia and address these fears. The present study evaluates two different approaches, i.e., a standard interview versus an informative audiovisual presentation combined with a standard interview, in terms of information gain (IG) and impact on patient anxiety during the preoperative visit. This prospective, double-blind, randomized study was conducted in a tertiary care teaching hospital in rural India over 2 months among 200 American Society of Anesthesiologists Grade I and II patients aged 18-65 years scheduled to undergo elective surgery under general anesthesia. Patients were allocated to one of two equal-sized groups, Group A and Group B. Baseline anxiety and the information desire component were assessed for both groups using the Amsterdam Preoperative Anxiety and Information Scale. Group A patients received a preanesthetic interview with the anesthesiologist and were reassessed. Group B patients were shown a short audiovisual presentation about the operation theater and the anesthesia procedure, followed by the preanesthetic interview, and were also reassessed. In addition, the patient satisfaction score (PSS) and IG were assessed at the end of the preanesthetic visit using a standard questionnaire. Data were expressed as mean and standard deviation. Nonparametric tests such as the Kruskal-Wallis, Mann-Whitney, and Wilcoxon signed-rank tests, along with Student's t-test and the Chi-square test, were used for statistical analysis. Patients' IG was significantly greater in Group B (5.43 ± 0.55) than in Group A (4.41 ± 0.922) (P < 0.001). There was a significant reduction in total anxiety from baseline values in both groups. This reduction was

  12. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  13. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  15. "If you don't abstain, you will die of AIDS": AIDS education in Kenyan public schools.

    Science.gov (United States)

    Njue, Carolyne; Nzioka, Charles; Ahlberg, Beth-Maina; Pertet, Anne M; Voeten, Helene A C M

    2009-04-01

    We explored constraints of implementing AIDS education in public schools in Kenya. Sixty interviews with teachers and 60 focus group discussions with students were conducted in 21 primary and nine secondary schools. System/school-level constraints included lack of time in the curriculum, limited reach of secondary-school students (because AIDS education is embedded in biology, which is not compulsory), and disapproval of openness about sex and condoms by the Ministry of Education and parents. Alternative strategies to teach about AIDS had their own constraints. Teachers lacked training and support and felt uncomfortable with the topic. They were not used to interactive teaching methods and sometimes breached confidentiality. Teachers' negative attitudes constrained students from seeking information. Training interventions should be provided to teachers to increase their self-confidence, foster more positive attitudes, and stimulate interactive teaching methods. The Ministry of Education needs to have a clear policy toward the promotion of condoms.

  16. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  17. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or be applied to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech, but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers … that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech specific mode of audiovisual integration underlying the McGurk illusion.

  18. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    … effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine …

  19. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  20. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    Directory of Open Access Journals (Sweden)

    Sevtap GÜNAY KÖPRÜLÜ

    2016-04-01

    Full Text Available Audiovisual translation, which dates back to the silent film era, is a special translation method developed for translating the movies and programs shown on TV and in cinemas; in the beginning, the term “film translation” was therefore used for this type of translation. Owing to the growing number of audiovisual texts, it has attracted the interest of researchers and has been examined within translation studies. In our country, too, the concept of film translation was initially used for this area, but recently the concept of audiovisual translation has been adopted, since it encompasses not only films but all audiovisual communication tools, especially in the scientific field. This study analyzes the aspects that the translator should take into consideration during the audiovisual translation process, within the framework of the source text, the translated text, the film, and technical knowledge. The study shows that, apart from linguistic and paralinguistic factors, there are further factors that must be considered carefully, as they can influence the quality of the translation, and that these factors require technical knowledge on the part of the translator. In this sense, audiovisual translation is approached from a different angle than in previous research.

  1. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and the viewer, is addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real-time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  2. Public-private partnerships as a strategy against HIV/AIDS in South Africa: the influence of historical legacies.

    Science.gov (United States)

    Brunne, Viviane

    2009-09-01

    In the face of the extreme challenges posed by the South African HIV/AIDS epidemic, collaboration between public and private partners is often called for in an attempt to mobilise additional resources and generate synergies. This paper shows that the ability to successfully use public-private partnerships to address complex challenges, such as an HIV/AIDS epidemic, is influenced by the fabric of society, one important aspect being historical legacies. The first part of the article shows how South Africa's apartheid past affects the ability of public and private partners to collaborate in a response to HIV and AIDS today. It also takes into account how reconciliation and nation-building policies in the immediate post-transformation period have affected the ability to form and sustain partnerships concerning HIV/AIDS issues. The second part of the article analyses more recent developments for the information they hold about the feasibility of public-private partnerships and whether these partnerships continue to be affected by the legacies of the past. Two events with symbolic political value in South Africa, namely the 2010 FIFA World Cup soccer event and the recent changes in government, are systematically examined on the basis of three analytical queries, regarding: the impact of the event on nation-building and transcending cleavages in society; the event's impact on the ability to form public-private partnerships in general; and the role of HIV/AIDS in connection with the event. Conclusions are drawn regarding the influence of historical factors on the ability of South African society to effectively use public-private partnerships in the response to HIV and AIDS, and the continued dynamics and likely future directions of these partnerships.

  3. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  4. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    Science.gov (United States)

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  5. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    Science.gov (United States)

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  6. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    Science.gov (United States)

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Audiovisual consumption and its social logics on the web

    OpenAIRE

    Rose Marie Santini; Juan C. Calvi

    2013-01-01

    This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved some data on the Internet global traffic of audiovisual files since 2008 to identify formats, modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices which are dominant among users and its relation to what we designate as “Internet culture”.

  8. Panorama de les fonts audiovisuals internacionals en televisió : contingut, gestió i drets

    Directory of Open Access Journals (Sweden)

    López de Solís, Iris

    2014-12-01

    Full Text Available Spain's main generalist television channels (national and regional) rely on a range of audiovisual sources to report on international affairs, including news agencies, news consortia and correspondent networks. Drawing on data provided by different channels, this article examines the coverage, use and management of these sources, as well as their usage and archiving rights, and analyzes the history and online tools of the most widely used agencies. Finally, it describes the daily work of TVE's Eurovision department, which in recent months has incorporated documentalists who, in addition to cataloguing the audiovisual material, also carry out editing and production tasks.

  9. Claves para reconocer los niveles de lectura crítica audiovisual en el niño Keys to Recognizing the Levels of Critical Audiovisual Reading in Children

    Directory of Open Access Journals (Sweden)

    Jacqueline Sánchez Carrero

    2012-03-01

    Full Text Available Several studies with children and adolescents have shown that the more they know about how audiovisual messages are produced and transmitted, the better able they are to form their own judgment about what they see on screen. This article brings together three media education experiences carried out in Venezuela, Colombia and Spain from a critical reception approach. It provides the indicators used to determine the levels of critical audiovisual reading in children aged 8 to 12, built from intervention processes based on media literacy workshops. The groups were instructed about the audiovisual universe, learning how audiovisual contents are created and how to analyze, deconstruct and recreate them. The article first refers to the evolving concept of media education, then describes the experiences shared across the three countries, and finally focuses on the indicators that make it possible to measure the level of critical reading, reflecting on the need for media education in the age of multiliteracy. Studies that reveal the keys to recognizing how critical a child is when viewing content from different digital media are not common; this is a fundamental issue, because it makes it possible to know what level of comprehension a child has and which level is acquired after a media education training process.

  10. [Audio-visual communication in the history of psychiatry].

    Science.gov (United States)

    Farina, B; Remoli, V; Russo, F

    1993-12-01

    The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings to the first daguerreotype prints, and on to cinematography and modern audiovisual systems, they observe an increasing diffusion of new communication techniques in psychiatry and describe the use of the different techniques in psychiatric practice. The article ends with a brief review of the current applications of audiovisual media in therapy, training, teaching, and research.

  11. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  12. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  13. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  14. Shedding light on our audiovisual heritage: perspectives to emphasise CERN Digital Memory

    CERN Document Server

    Salvador, Mathilde Estelle

    2017-01-01

    This work aims to answer the question of how to add value to CERN’s audiovisual heritage available on the CERN Document Server; in other words, how to make more visible to the scientific community and the general public what is hidden and classified: namely CERN’s archives, and more precisely the audiovisual ones, because of their creative potential. Rather than focusing on their scientific and technical value, we analyse their artistic and attractive power. We will see that any kind of archive can be intentionally or even accidentally artistic and exciting, and that it is possible to change our view of a photo, a sound or a film. This process of enhancement is a virtuous circle, as it has educational value and makes accessible scientific content that is normally out of reach. However, the problem of how to showcase such archives remains. That is why we try to learn from other digital memories around the world and see how they have managed to highlight their own archives, in order to suggest new ways of enhancing audiovisual…

  15. Hearing aid patients in private practice and public health (Veterans Affairs) clinics: are they different?

    Science.gov (United States)

    Cox, Robyn M; Alexander, Genevieve C; Gray, Ginger A

    2005-12-01

    In hearing aid research, it is commonplace to combine data across subjects whose hearing aids were provided in different service delivery models. There is reason to question whether these types of patients are always similar enough to justify this practice. To explore this matter, this investigation evaluated similarities and differences in self-report data obtained from hearing aid patients derived from public health (Veterans Affairs, VA) and private practice (PP) settings. The study was a multisite, cross-sectional survey in which 230 hearing aid patients from VA and PP audiology clinic settings provided self-report data on a collection of questionnaires both before and after the hearing aid fitting. Subjects were all older adults with mild to moderately severe hearing loss. About half of them had previous experience wearing hearing aids. All subjects were fitted with wide-dynamic-range-compression instruments and received similar treatment protocols. Numerous statistically significant differences were observed between the VA and PP subject groups. Before the fitting, VA patients reported higher expectations from the hearing aids and more severe unaided problems compared with PP patients with similar audiograms. Three wks after the fitting, VA patients reported more satisfaction with their hearing aids. On some measures VA patients reported more benefit, but different measures of benefit did not give completely consistent results. Both groups reported using the hearing aids an average of approximately 8 hrs per day. VA patients reported age-normal physical and mental health, but PP patients tended to report better than typical health for their age group. These data indicate that hearing aid patients seen in the VA public health hearing services are systematically different in self-report domains from those seen in private practice services. It is therefore risky to casually combine data from these two types of subjects or to generalize research results from one

  16. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  17. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved with a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, which contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.
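
    For illustration only, here is a highly simplified Python sketch of the speech-to-person association step described above: per-frame sound direction-of-arrival estimates are assigned to the nearest visually tracked person. This is not the paper's Bayesian spatiotemporal model; the function, data shapes, and threshold are assumptions.

    ```python
    # Toy audio-visual association: assign each speech frame to the closest tracked person.
    # This illustrates only the speech-to-person association idea, not the full
    # Bayesian spatiotemporal diarization model described in the record.
    import numpy as np

    def associate_speech_to_persons(audio_azimuths, person_azimuths, max_gap_deg=15.0):
        """audio_azimuths: (T,) sound direction per frame, in degrees (NaN = silence).
        person_azimuths: (T, P) azimuth of each tracked person per frame.
        Returns an array of speaker indices per frame (-1 = silence or no match)."""
        T, P = person_azimuths.shape
        labels = np.full(T, -1, dtype=int)
        for t in range(T):
            if np.isnan(audio_azimuths[t]):
                continue  # no speech detected in this frame
            diffs = np.abs(person_azimuths[t] - audio_azimuths[t])
            best = int(np.argmin(diffs))
            if diffs[best] <= max_gap_deg:  # accept only reasonably close matches
                labels[t] = best
        return labels

    # Hypothetical example: two tracked persons, five frames
    audio = np.array([10.0, np.nan, -32.0, -30.0, 12.0])
    persons = np.array([[12.0, -30.0]] * 5)
    print(associate_speech_to_persons(audio, persons))  # -> [0 -1 1 1 0]
    ```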

  18. Vulnerability and risk perception in the management of HIV/AIDS: Public priorities in a global pandemic

    Directory of Open Access Journals (Sweden)

    Peter Tsasis

    2008-11-01

    Full Text Available Peter Tsasis (School of Health Policy and Management and School of Administrative Studies, York University, Toronto, Ontario, Canada) and N. Nirupama (School of Administrative Studies, York University, Toronto, Ontario, Canada). Abstract: Understanding the way perception of risk is shaped and constructed is crucial in understanding why it has been so difficult to mitigate the spread of HIV/AIDS. This paper uses the Pressure and Release (PAR) model, used to predict the onset of natural disasters, as the conceptual framework. It substitutes vulnerability and risk perception as the trigger factors in the model, in making the case that HIV/AIDS can be characterized as a slow-onset disaster. The implications are that vulnerability must be managed and reduced by addressing root causes, dynamic pressures, and unsafe conditions that contribute to the HIV/AIDS pandemic. HIV/AIDS programs must be culturally appropriate and work toward influencing risk perception, while addressing social norms and values that negatively impact vulnerable populations. By impacting cultural and social expectations, individuals will be able to more readily adopt safer sex behaviors. The development of policies and programs addressing the issues in context, as opposed to individual behaviors alone, allows for effective public health intervention. This may have implications for public health measures implemented for combating the spread of HIV/AIDS. Keywords: vulnerability, risk perception, HIV/AIDS, public health intervention

  19. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Full Text Available Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It reviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of a synchrony measure for biometric identity verification based on talking faces is evaluated on the BANCA database.
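
    As a rough, illustrative companion to the correspondence measures reviewed in this record, the following Python sketch computes a toy synchrony score as the windowed correlation between an audio energy envelope and a mouth-motion signal. This is not one of the paper's measures; the signals and window length are assumed placeholders, and real systems typically use richer features (e.g., spectral coefficients and lip-shape parameters) and measures such as mutual information or canonical correlation.

    ```python
    # Toy audiovisual synchrony score: windowed Pearson correlation between an audio
    # energy envelope and a mouth-region motion signal sampled at the video frame rate.
    import numpy as np

    def synchrony_score(audio_envelope, mouth_motion, win=25):
        """Both inputs are 1-D arrays aligned at the video frame rate.
        Returns the mean windowed correlation (higher = more synchronous)."""
        n = min(len(audio_envelope), len(mouth_motion))
        scores = []
        for start in range(0, n - win + 1, win):
            a = audio_envelope[start:start + win]
            v = mouth_motion[start:start + win]
            if a.std() > 0 and v.std() > 0:   # skip flat windows
                scores.append(np.corrcoef(a, v)[0, 1])
        return float(np.mean(scores)) if scores else 0.0

    # Hypothetical example: roughly synchronous vs. shuffled (dubbed-like) signals
    rng = np.random.default_rng(1)
    audio = np.abs(rng.normal(size=500)).cumsum() % 3.0    # placeholder envelope
    mouth = audio + rng.normal(scale=0.2, size=500)         # roughly synchronous motion
    print(synchrony_score(audio, mouth))                    # close to 1
    print(synchrony_score(audio, rng.permutation(mouth)))   # near 0
    ```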

  20. Audiovisual consumption and its social logics on the web

    Directory of Open Access Journals (Sweden)

    Rose Marie Santini

    2013-06-01

    Full Text Available This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved some data on the Internet global traffic of audiovisual files since 2008 to identify formats, modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices which are dominant among users and its relation to what we designate as “Internet culture”.

  1. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    … Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical …, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing…

  2. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
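
    The capacity measure referred to above (Townsend and Nozawa, 1995) compares the audiovisual response-time distribution with the unisensory ones via cumulative hazard functions, C(t) = H_AV(t) / [H_A(t) + H_V(t)], where H(t) = -log S(t) is the cumulative hazard of the survivor function S(t), and C(t) > 1 indicates efficient (super-capacity) integration. Below is a minimal Python sketch of an empirical estimate from raw reaction times; the RT samples are hypothetical placeholders and the estimator is deliberately simplified.

    ```python
    # Sketch: empirical capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)),
    # where H(t) = -log S(t) is the cumulative hazard and S(t) the survivor function.
    # C(t) > 1 suggests efficient (super-capacity) audiovisual integration.
    # The RT samples below are hypothetical placeholders.
    import numpy as np

    def cumulative_hazard(rts, t_grid):
        """Empirical cumulative hazard H(t) = -log S(t) estimated from RT samples."""
        rts = np.asarray(rts, dtype=float)
        survivor = np.array([(rts > t).mean() for t in t_grid])
        survivor = np.clip(survivor, 1e-6, 1.0)   # avoid log(0) in the upper tail
        return -np.log(survivor)

    def capacity_coefficient(rt_av, rt_a, rt_v, t_grid):
        h_av = cumulative_hazard(rt_av, t_grid)
        h_a = cumulative_hazard(rt_a, t_grid)
        h_v = cumulative_hazard(rt_v, t_grid)
        return h_av / np.clip(h_a + h_v, 1e-6, None)

    # Hypothetical RTs (seconds): audiovisual responses faster than unisensory ones
    rng = np.random.default_rng(2)
    rt_a = rng.normal(0.60, 0.08, 200)
    rt_v = rng.normal(0.62, 0.08, 200)
    rt_av = rng.normal(0.50, 0.07, 200)
    t_grid = np.linspace(0.4, 0.8, 9)
    print(np.round(capacity_coefficient(rt_av, rt_a, rt_v, t_grid), 2))
    ```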

  3. The use of audiovisual techniques in participative diagnosis: the experience of the Polvo Fields; O uso do audiovisual no diagnostico participativo: a experiencia do projeto de educacao ambiental no Campo de Polvo

    Energy Technology Data Exchange (ETDEWEB)

    Loureiro, Juliana; Pitanga, Luisa [Abaete Estudos Socioambientais Ltda., Rio de Janeiro, RJ (Brazil); Borensztein, Fernando [Devon Energy do Brasil Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Brazilian environmental law requires oil companies to implement environmental programs, among them environmental education projects. This type of project should be understood by the companies as an opportunity to develop socio-environmental responsibility policies towards the affected populations. For an environmental education project to be effective as a means of awareness-raising and social transformation, public participation must be increased, from the process of creating knowledge about the communities' environmental problems to the dissemination of the content produced. This work discusses the use of audiovisual media as an instrument of mobilization and awareness-raising in the construction of participative diagnostics, drawing on the experience of the Environmental Education Project of the Polvo field, carried out in ten municipal districts of the Campos Basin region. Based on an original methodology, the project promoted environmental cinema workshops that resulted in 30 documentaries directed by the local population and 10 environmental forums in which local audiovisual environmental agendas were developed. (author)

  4. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the

  5. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information, which was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance and a dedicated operational team dealt with the logistics of the project including: ethics; finance; scriptwriting; filming; editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  6. 'Public enemy no. 1': Tobacco industry funding for the AIDS response

    African Journals Online (AJOL)

    2016-03-29

    Mar 29, 2016 … SAHARA-J: Journal of Social Aspects of HIV/AIDS … how they have used various charitable causes to subvert tobacco control efforts and influence public health policy. This … health goals while drawing on extensive resources and networks, … reputation as corporate citizens and to indirectly promote.

  7. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate … of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase … visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli showed a McGurk effect only when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding…

  8. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez

    2012-07-01

    Full Text Available Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today’s digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass media omnipresence. Neither the education system, nor civic associations, nor the media themselves have promoted the audiovisual skills needed to make people critically competent viewers. This study aims to provide an updated conceptualization of the “audiovisual skill” in this digital environment and to transpose it onto a specific intervention setting, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments, and devise training programmes for the various sectors involved.

  9. Situación actual de la traducción audiovisual en Colombia

    Directory of Open Access Journals (Sweden)

    Jeffersson David Orrego Carmona

    2010-05-01

    Full Text Available Objectives: this article has two objectives: to present an overview of the current audiovisual translation market in Colombia and to highlight the importance of developing studies in this area. Method: the methodology included research into and review of literature related to the topic, surveys administered to different groups involved in audiovisual translation, and subsequent analysis. Results: the results showed the general lack of awareness of this work and the preferences of the surveyed groups regarding audiovisual translation modalities; a marked preference for subtitling was observed, for reasons particular to each group. Conclusions: Colombian translators need training in audiovisual translation to meet market demands, and the importance of developing more in-depth studies focused on the development of audiovisual translation in Colombia is highlighted.

  10. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  11. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

    In today's history museums, the past is communicated through many means other than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre … and museum studies, existing case studies, and real-life observations, the suggested framework instead stresses particular characteristics of the contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication…

  12. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

    In the developed world, the cultural value of the audiovisual media gained legitimacy and widening acceptance after World War II, and this is what Africa still requires. There are a lot of problems in Africa, and because of this, activities such as the preservation of a historical record, especially in the audiovisual media, are seen as ...

  13. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  14. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (direct after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  15. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (direct after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  16. Selected Bibliography of Egyptian Educational Materials, Vol. 1, No. 4, 1975.

    Science.gov (United States)

    Al-Ahram Center for Scientific Translations, Cairo (Egypt).

    This annotated bibliography of Egyptian publications on education contains 108 entries. Publications include journal articles, books, and government documents. The following educational topics are covered: adult education, teaching Arabic language, art education, audiovisual aids, teaching civics, formation of committees, secondary school courses…

  17. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

    Full Text Available http://dx.doi.org/10.5007/2175-7968.2012v1n29p145   Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  18. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

    Full Text Available In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools necessary for the translator to perform his/her task satisfactorily as well as pointing to future perspectives, we make a list of sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  19. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with

  20. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

    Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by one single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between two stimuli pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations are based secondarily on the outcome of perceptual grouping processes.

  1. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit improved visual detection in the blind field when audiovisual stimuli are presented in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets presented in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced the "online" effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with an oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, audiovisual training effectively strengthens the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of spared V1 tissue, regardless of eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual

  2. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contribute to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory stimulus by 20-40 ms. Ultimately, given
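
    The JND values quoted above are conventionally derived by fitting a psychometric function to the temporal order judgments across SOAs. The record does not include the authors' fitting procedure; the sketch below is only a generic Python illustration with hypothetical data, using a cumulative Gaussian whose spread (scaled to the 75% point) serves as the JND.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: SOAs in ms (negative = auditory first) and the
# proportion of "visual first" responses at each SOA.
soas = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)
p_visual_first = np.array([0.05, 0.20, 0.35, 0.52, 0.70, 0.85, 0.97])

def cumulative_gaussian(soa, pss, sigma):
    """Psychometric function: P('visual first') as a function of SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first, p0=[0.0, 100.0])

# One common convention: the JND is the SOA change needed to move from the
# 50% to the 75% point of the fitted function, i.e. sigma * z(0.75).
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```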

  3. A pilot study of audiovisual family meetings in the intensive care unit.

    Science.gov (United States)

    de Havenon, Adam; Petersen, Casey; Tanana, Michael; Wold, Jana; Hoesch, Robert

    2015-10-01

    We hypothesized that virtual family meetings in the intensive care unit with conference calling or Skype videoconferencing would result in increased family member satisfaction and more efficient decision making. This is a prospective, nonblinded, nonrandomized pilot study. A 6-question survey was completed by family members after family meetings, some of which used conference calling or Skype by choice. Overall, 29 (33%) of the completed surveys came from audiovisual family meetings vs 59 (67%) from control meetings. The survey data were analyzed using hierarchical linear modeling, which did not find any significant group differences between satisfaction with the audiovisual meetings vs controls. There was no association between the audiovisual intervention and withdrawal of care (P = .682) or overall hospital length of stay (z = 0.885, P = .376). Although we do not report benefit from an audiovisual intervention, these results are preliminary and heavily influenced by notable limitations to the study. Given that the intervention was feasible in this pilot study, audiovisual and social media intervention strategies warrant additional investigation given their unique ability to facilitate communication among family members in the intensive care unit. Copyright © 2015 Elsevier Inc. All rights reserved.
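
    Hierarchical linear modeling of clustered survey responses of this kind can be expressed compactly as a mixed-effects model. The sketch below is only a generic illustration with invented data and variable names (satisfaction ratings nested within meetings), not the authors' analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per returned questionnaire; several
# family members may rate the same meeting, so meetings form the clusters.
df = pd.DataFrame({
    "satisfaction": [5, 4, 5, 3, 4, 4, 2, 5, 3, 4, 5, 4, 3, 5, 4, 4],
    "audiovisual":  [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0],
    "meeting_id":   [1, 1, 2, 3, 3, 4, 5, 6, 7, 8, 8, 9, 9, 10, 10, 11],
})

# Random intercept per meeting; the fixed effect tests the audiovisual condition.
model = smf.mixedlm("satisfaction ~ audiovisual", df, groups=df["meeting_id"])
print(model.fit().summary())
```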

  4. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  5. La comunicación corporativa audiovisual: propuesta metodológica de estudio

    OpenAIRE

    Lorán Herrero, María Dolores

    2016-01-01

    This research revolves around two concepts, Audiovisual Communication and Corporate Communication, disciplines that affect organizations and that are articulated in such a way that they give rise to Audiovisual Corporate Communication, the concept proposed in this thesis. The formats that organizations use for their communication are classified and defined, the aim being to be able to analyze any corporate audiovisual document to determine whether the...

  6. 75 FR 53968 - Reverb Communications, Inc.; Analysis of Proposed Consent Order To Aid Public Comment

    Science.gov (United States)

    2010-09-02

    ... final the agreement's proposed order. This matter involves the public relations, marketing, and sales... FEDERAL TRADE COMMISSION [File No. 092 3199] Reverb Communications, Inc.; Analysis of Proposed Consent Order To Aid Public Comment AGENCY: Federal Trade Commission. ACTION: Proposed Consent Agreement...

  7. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  8. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  9. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
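
    Perception sensitivity (d') in signal detection theory is the difference between the z-transformed hit and false-alarm rates. The snippet below is only a generic illustration with made-up response counts, not the study's analysis; it includes a standard log-linear correction so that extreme rates do not yield infinite z-scores.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction: add 0.5 to each cell to avoid rates of exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one observer in one audiovisual condition.
print(f"d' = {d_prime(hits=78, misses=22, false_alarms=14, correct_rejections=86):.2f}")
```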

  10. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    Science.gov (United States)

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

    Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercise performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise lasted 10 min (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rate of perceived exertion and heart rate variability were monitored to deepen understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree when compared to PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation were real. This mirroring mechanism has the potential to up-/down-regulate cardiac work as if the exercise intensities were in fact different in each condition.

  11. Constructing publics, preventing diseases and medicalizing bodies: HIV, AIDS, and its visual cultures

    Directory of Open Access Journals (Sweden)

    Fabrizzio Mc Manus

    Full Text Available Abstract: In this paper we analyze the visual cultures surrounding HIV and AIDS; we are especially interested in tracking the actors, discourses and visual cultures involved in AIDS prevention in Mexico for a period of twenty years: from 1985 to 2005. We use media studies to better comprehend how HIV and AIDS further medicalized human bodies by mobilizing specific discourses, metaphors and visual resources that, though promoting a better understanding of how HIV could be acquired and how it could be prevented, also generated new representations of sexuality, bodies and persons living with HIV or AIDS often biased in favor of different systems of value. Moreover, we try to offer a general characterization of the different publics that were targeted and preconceptions involving ethnicity, gender, sexual orientation, geography and membership in different sociocultural groups.

  12. KAMAN PELAYANAN MEDIA AUDIOVISUAL: STUDI KASUS DI THE BRITISH COUNCIL JAKARTA

    Directory of Open Access Journals (Sweden)

    Hindar Purnomo

    2015-12-01

    Full Text Available The aim of this study was to determine how the audiovisual (AV) media service is provided, how effective the service is, and how satisfied users are with its various aspects. The research was conducted at The British Council Jakarta as an evaluation study, since this approach reveals the various phenomena that occur. The British Council library provides three types of media: video cassettes, audio cassettes, and BBC television broadcasts. The research subjects were users of the audiovisual media service who were registered as members. Subjects were grouped by age and by purpose of AV media use. Questionnaire data were collected from 157 respondents (75.48%) and analyzed statistically with the one-way Kruskal-Wallis analysis of variance test. The results show that all three media are popular with many users, especially in the younger age groups. Most users preferred fiction to nonfiction and used the audiovisual media to seek knowledge and information. The audiovisual media service proved highly effective, as shown by collection-use figures and by user satisfaction. Hypothesis testing showed no significant differences between age groups or purposes of use in their responses to the various aspects of the audiovisual media service. Keywords: audiovisual media, library services.

  13. The use of audiovisual techniques in participative diagnosis: the experience of the Polvo Fields; O uso do audiovisual no diagnostico participativo: a experiencia do projeto de educacao ambiental no Campo de Polvo

    Energy Technology Data Exchange (ETDEWEB)

    Loureiro, Juliana; Pitanga, Luisa [Abaete Estudos Socioambientais Ltda., Rio de Janeiro, RJ (Brazil); Borensztein, Fernando [Devon Energy do Brasil Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Brazilian environmental law requires oil companies to commit to implementing environmental programs, among them environmental education projects. Companies should understand this type of project as an opportunity to develop socio-environmental responsibility policies toward the affected populations. For an environmental education project to be effective as a means of awareness-raising and social transformation, public participation must be increased, from the process of creating knowledge about the communities' environmental problems to the dissemination of the contents produced. This work addresses the use of audiovisual media as an instrument of mobilization and awareness-raising in the construction of participative diagnostics, based on the experience of the Environmental Education Project of the Polvo field, carried out in ten municipal districts of the Campos Basin region. Following an original methodology, the project promoted environmental cinema workshops that resulted in 30 documentaries directed by the local population and 10 environmental forums in which local audiovisual environmental agendas were developed. (author)

  14. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration at late latency (300-340 ms), with fronto-central ERP topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  15. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    Science.gov (United States)

    Karipidis, Iliana I; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it has remained largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. In correspondence, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a brief training session appears sufficient to initialize audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

  16. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Science.gov (United States)

    2010-07-01

    ... standards for transfer apply to audiovisual records, cartographic, and related records? 1235.42 Section 1235... Standards § 1235.42 What specifications and standards for transfer apply to audiovisual records... elements that are needed for future preservation, duplication, and reference for audiovisual records...

  17. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  18. The Effects of Audiovisual Stimulation on the Acceptance of Background Noise.

    Science.gov (United States)

    Plyler, Patrick N; Lang, Rowan; Monroe, Amy L; Gaudiano, Paul

    2015-05-01

    Previous examinations of noise acceptance have been conducted using an auditory stimulus only; however, the effect of supplementing the auditory stimulus with visual speech on noise acceptance has received little attention. The purpose of the present study was to determine the effect of audiovisual stimulation on the acceptance of noise in listeners with normal and impaired hearing. A repeated-measures design was utilized. A total of 92 adult participants were recruited for this experiment. Of these participants, 54 were listeners with normal hearing and 38 were listeners with sensorineural hearing impairment. Most comfortable levels (MCLs) and acceptable noise levels (ANLs) were obtained using auditory and auditory-visual stimulation modes in the unaided listening condition for each participant and in the aided listening condition for the 35 participants with impaired hearing who owned hearing aids. Speech-reading ability was assessed using the Utley test for each participant. The addition of visual input did not affect the most comfortable level values for listeners in either group; however, visual input improved unaided ANL values for listeners with normal hearing and aided ANL values in listeners with impaired hearing. The ANL benefit received from visual speech input was related to the auditory ANL in each group; however, it was not related to speech-reading ability for either listener group in any experimental condition. Visual speech input can significantly affect measures of noise acceptance. The current ANL measure may not accurately reflect acceptance of noise in more realistic environments, where the signal of interest is both audible and visible to the listener. American Academy of Audiology.
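
    The acceptable noise level is conventionally defined as the most comfortable listening level (MCL) for speech minus the highest background noise level (BNL) the listener will accept while following that speech; smaller values indicate greater noise acceptance. Purely as an illustration (the study's data are not reproduced, and the numbers below are invented):

```python
# Illustrative ANL computation with hypothetical levels in dB HL.
mcl_db = 55.0   # most comfortable listening level for running speech
bnl_db = 47.0   # highest accepted background noise level at that MCL
anl_db = mcl_db - bnl_db
print(f"ANL = {anl_db:.1f} dB")  # smaller ANL -> greater acceptance of noise
```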

  19. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  20. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  1. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

    Our (co-written with Thomas van den Berg) media-rich, open access Scalar e-book on the Audiovisual Essay practice is available online: http://scalar.usc.edu/works/film-studies-in-motion Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere

  2. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  3. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  4. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced and a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  5. Catching Audiovisual Interactions With a First-Person Fisherman Video Game.

    Science.gov (United States)

    Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2017-07-01

    The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.

  6. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was comparable to the suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for the two types of stimuli.

  7. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    Science.gov (United States)

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  8. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  9. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alterative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  10. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  11. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for young Japanese adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Moreover, the 120 ms delay corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  12. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  13. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
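
    Two of the connectivity metrics contrasted in such analyses, spectral coherence and band-limited amplitude (power) correlation, are straightforward to state. The sketch below is only a schematic Python illustration on synthetic signals with invented parameters; it is not the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, coherence, filtfilt, hilbert

fs = 600.0                              # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 20 * t)     # common 20 Hz (beta-band) component
sig_visual = shared + 0.5 * rng.standard_normal(t.size)
sig_auditory = shared + 0.5 * rng.standard_normal(t.size)

# Spectral coherence, averaged over the beta band (13-30 Hz).
f, cxy = coherence(sig_visual, sig_auditory, fs=fs, nperseg=1024)
beta = (f >= 13) & (f <= 30)
print(f"mean beta coherence: {cxy[beta].mean():.2f}")

# Amplitude-envelope (power) correlation in the beta band.
b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
env_v = np.abs(hilbert(filtfilt(b, a, sig_visual)))
env_a = np.abs(hilbert(filtfilt(b, a, sig_auditory)))
print(f"beta power correlation: {np.corrcoef(env_v, env_a)[0, 1]:.2f}")
```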

  14. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Ensenyar amb casos audiovisuals en l'entorn virtual: metodologia i resultats

    OpenAIRE

    Triadó i Ivern, Xavier Ma.; Aparicio Chueca, Ma. del Pilar (María del Pilar); Jaría Chacón, Natalia; Gallardo-Gallardo, Eva; Elasri Ejjaberi, Amal

    2010-01-01

    This guide aims to establish and present the foundations of a methodology for launching learning experiences with audiovisual cases in the virtual campus environment. To this end, a methodological protocol has been defined for using audiovisual cases within the virtual campus environment in different courses.

  16. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    Science.gov (United States)

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1 seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2 human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real-time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants results point toward their optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
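
    The Maximum Likelihood Estimator account referred to above predicts that the combined audiovisual estimate is a reliability-weighted average of the unimodal estimates, with lower variance than either cue alone. The sketch below illustrates that prediction with invented numbers; it is not the authors' model-fitting code.

```python
def mle_combine(x_a, var_a, x_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of two cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    x_av = w_a * x_a + (1 - w_a) * x_v        # combined estimate
    var_av = 1 / (1 / var_a + 1 / var_v)      # combined variance (always smaller)
    return x_av, var_av

# Hypothetical unimodal estimates of the partner's step time (ms) and their variances.
estimate, variance = mle_combine(x_a=510.0, var_a=400.0, x_v=530.0, var_v=900.0)
print(f"combined estimate = {estimate:.1f} ms, variance = {variance:.1f} ms^2")
```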

  17. The future of school nursing: banishing band-AIDS to improve public health outcomes.

    Science.gov (United States)

    Fleming, Robin

    2012-08-01

    This article provides analysis and commentary on the cultural roots that promote the provision of minor first aid in schools by school nurses. Using the Institute of Medicine's Future of Nursing report as a lens, this article illustrates how the focus on provision of first aid by school nurses dilutes larger public health contributions that school nurses could make if they were able to work to the full extent of their education, training and licensure. The article concludes with recommendations designed to support fuller use of nurses' scope of practice in schools.

  18. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    Science.gov (United States)

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  19. Survey on Public Awareness On AIDS- Role Of Government And Non Government Agencies In A Rural South Indian Community

    Directory of Open Access Journals (Sweden)

    Balagnesh G

    1996-01-01

    Full Text Available Research Question: What is the level of public awareness of AIDS in a rural community, and to what extent have government and non-government agencies played their role in creating that awareness? Objectives: (i) To study public awareness of AIDS in a rural community; (ii) to study the role of government and non-government agencies in creating awareness of AIDS. Design: Cross-sectional study. Setting: Rural area under S. V. Medical College, Tirupati (AP). Participants: 100 males (15-45 yrs) and 100 females (15-45 yrs). Study variables: Awareness of AIDS, government and non-government agencies. Statistical Analysis: Percentages. Results: Most of the persons interviewed had minimal knowledge of AIDS. Quite a large section of the study population was ignorant of the protection offered by condoms in preventing AIDS. Doordarshan and newspaper agencies played a major role in creating awareness of AIDS, while non-government agencies such as the Lions Club, Rotary Club and Indian Junior Chamber played no role in creating awareness of AIDS in the study area. Recommendations: The government health sector should take more responsibility for educating people and creating adequate awareness of AIDS. Non-government agencies should involve themselves in creating awareness of AIDS.

  20. Policing Fish at Boston's Museum of Science: Studying Audiovisual Interaction in the Wild.

    Science.gov (United States)

    Goldberg, Hannah; Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2015-08-01

    Boston's Museum of Science supports researchers whose projects advance science and provide educational opportunities to the Museum's visitors. For our project, 60 visitors to the Museum played "Fish Police!!," a video game that examines audiovisual integration, including the ability to ignore irrelevant sensory information. Players, who ranged in age from 6 to 82 years, made speeded responses to computer-generated fish that swam rapidly across a tablet display. Responses were to be based solely on the rate (6 or 8 Hz) at which a fish's size modulated, sinusoidally growing and shrinking. Accompanying each fish was a task-irrelevant broadband sound, amplitude modulated at either 6 or 8 Hz. The rates of visual and auditory modulation were either Congruent (both 6 Hz or 8 Hz) or Incongruent (6 and 8 or 8 and 6 Hz). Despite being instructed to ignore the sound, players of all ages responded more accurately and faster when a fish's auditory and visual signatures were Congruent. In a controlled laboratory setting, a related task produced comparable results, demonstrating the robustness of the audiovisual interaction reported here. Some suggestions are made for conducting research in public settings.

  1. Plan empresa productora de audiovisuales : La Central Audiovisual y Publicidad

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    This document presents the business plan for La Central Publicidad y Audiovisual, a company dedicated to the pre-production, production, and post-production of audiovisual material. The company will be located in the city of Cali, and its target market comprises the city's different types of businesses, including small, medium-sized, and large companies.

  2. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding....... Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled...... sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst...
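
    The record does not include the training details, but the core update of a self-organizing map on a rectangular grid is compact: find the best-matching unit for each feature vector, then pull that unit and its grid neighbours toward the input with a decaying learning rate and neighbourhood radius. The sketch below is a generic NumPy illustration with invented grid size, feature dimensionality and schedules, not the author's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, n_features = 10, 10, 8          # rectangular SOM, toy feature size
weights = rng.random((grid_h, grid_w, n_features))
grid_y, grid_x = np.mgrid[0:grid_h, 0:grid_w]

def train_som(data, epochs=20, lr0=0.5, radius0=5.0):
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)
        radius = radius0 * np.exp(-epoch / epochs)
        for x in data:
            # Best-matching unit: grid cell whose weight vector is closest to x.
            dist = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(dist.argmin(), dist.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            grid_dist2 = (grid_y - by) ** 2 + (grid_x - bx) ** 2
            h = np.exp(-grid_dist2 / (2 * radius ** 2))
            weights[:] += lr * h[..., None] * (x - weights)

# Hypothetical per-frame audiovisual feature vectors.
train_som(rng.random((500, n_features)))
```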

  3. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  4. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  5. Exploring Audiologists' Language and Hearing Aid Uptake in Initial Rehabilitation Appointments.

    Science.gov (United States)

    Sciacca, Anna; Meyer, Carly; Ekberg, Katie; Barr, Caitlin; Hickson, Louise

    2017-06-13

    The study aimed (a) to profile audiologists' language during the diagnosis and management planning phase of hearing assessment appointments and (b) to explore associations between audiologists' language and patients' decisions to obtain hearing aids. Sixty-two audiologist-patient dyads participated. Patient participants were aged 55 years or older. Hearing assessment appointments were audiovisually recorded and transcribed for analysis. Audiologists' language was profiled using two measures: general language complexity and use of jargon. A binomial, multivariate logistic regression analysis was conducted to investigate the associations between these language measures and hearing aid uptake. The logistic regression model revealed that the Flesch-Kincaid reading grade level of audiologists' language was significantly associated with hearing aid uptake. Patients were less likely to obtain hearing aids when audiologists' language was at a higher reading grade level. No associations were found between audiologists' use of jargon and hearing aid uptake. Audiologists' use of complex language may present a barrier for patients to understand hearing rehabilitation recommendations. Reduced understanding may limit patient participation in the decision-making process and result in patients being less willing to trial hearing aids. Clear, concise language is recommended to facilitate shared decision making.
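
    A minimal sketch of the kind of analysis reported above: a binomial logistic regression relating the Flesch-Kincaid grade level of the audiologist's language (plus a jargon count) to hearing aid uptake. The data are simulated and the effect sizes are assumptions; only the general modelling approach mirrors the study.

```python
# Sketch of a binomial logistic regression of hearing aid uptake on the
# reading grade level of the audiologist's language. Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 62                                          # dyads, as in the study
grade_level = rng.normal(9.0, 1.5, n)           # Flesch-Kincaid reading grade
jargon_count = rng.poisson(5, n)                # jargon terms per appointment

# Simulate lower uptake at higher grade levels (assumed effect direction).
logit_p = 2.5 - 0.35 * grade_level
uptake = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([grade_level, jargon_count]))
model = sm.Logit(uptake, X).fit(disp=False)
print(model.summary(xname=["const", "grade_level", "jargon_count"]))
print("odds ratios:", np.exp(model.params))
```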

  6. Academic e-learning experience in the enhancement of open access audiovisual and media education

    OpenAIRE

    Pacholak, Anna; Sidor, Dorota

    2015-01-01

    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance the open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM). It is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal of a comprehensive and open programme for the audiovisual (me...

  7. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson’s disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using a repeated-measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
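
    The race model analysis mentioned above has a standard form (Miller's race-model inequality): the cumulative distribution of bimodal response times is compared with the sum of the two unimodal cumulative distributions, and violations of that bound indicate multisensory integration. The sketch below illustrates the computation on synthetic response times; it is not the study's code.

```python
# Race-model (Miller inequality) check on reaction times, on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
rt_a = rng.normal(420, 50, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(440, 55, 200)    # visual-only RTs (ms)
rt_av = rng.normal(390, 45, 200)   # audiovisual RTs (ms)

def ecdf(sample, t):
    """Empirical cumulative distribution evaluated at times t."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

t = np.linspace(250, 600, 50)
bound = np.clip(ecdf(rt_a, t) + ecdf(rt_v, t), 0, 1)   # race-model bound
violation = ecdf(rt_av, t) - bound

# Positive values mean the bimodal CDF exceeds the race-model bound,
# i.e. evidence for integration; their absence (as reported for the PD
# group) is consistent with no audiovisual integration.
print("max violation:", violation.max())
print("violated at any latency:", bool((violation > 0).any()))
```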

  8. Public perceptions about HIV/AIDS and discriminatory attitudes toward people living with acquired immunodeficiency syndrome in Iran.

    Science.gov (United States)

    Masoudnia, Ebrahim

    2015-01-01

    Negative and discriminatory attitudes towards people living with HIV/AIDS (PLWHA) are one of the biggest challenges experienced by people living with HIV, and these attitudes have been regarded as a serious threat to the fundamental rights of all people infected with, affected by, or associated with this disease in Iran. This study aimed to determine the relationship between public perception about HIV/AIDS and discriminatory attitudes toward PLWHA. The present study was conducted using a descriptive survey design. Data were collected from 450 participants (236 male and 214 female) in the cities of Tehran and Yazd. The research instruments were modified HIV-related knowledge/attitude and perception questions about PLWHA, and a measure of discriminatory attitudes toward PLWHA. The results showed that the prevalence of discriminatory attitudes toward PLWHA in the studied population was 60.0%. There was a significant negative correlation between citizens' awareness about HIV/AIDS, HIV-related attitudes, and negative perception toward people with HIV/AIDS symptoms, on the one hand, and their discriminatory attitudes toward PLWHA on the other (p < 0.05). Perceptions about HIV/AIDS explained 23.7% of the variance of discriminatory attitudes toward PLWHA. Negative public perceptions about HIV/AIDS in Iran are associated with discriminatory attitudes toward PLWHA, and cultural beliefs in Iran tend to stigmatize and discriminate against PLWHA.

  9. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  11. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    Science.gov (United States)

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  13. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose: The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide

  14. Vicarious audiovisual learning in perfusion education.

    Science.gov (United States)

    Rath, Thomas E; Holt, David W

    2010-12-01

    Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high-fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low-fidelity form of simulation instruction: vicarious audiovisual learning. Two low-fidelity modes of instruction were compared: description with text and a vicarious, first-person audiovisual production depicting the same content. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10-question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%) (p < 0.05), suggesting that vicarious audiovisual learning modules may be an efficacious, low-cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we teach perfusion in the future, as simulation technology becomes more prevalent.

  15. Stigmatization and discrimination towards people living with or affected by HIV/AIDS by the general public in Malaysia.

    Science.gov (United States)

    Wong, L P; Syuhada, A R Nur

    2011-09-01

    Globally, HIV/AIDS-related stigma and discriminatory attitudes deter the effectiveness of HIV prevention and care programs. This study investigated the general public's perceptions about HIV/AIDS-related stigma and discrimination towards people living with or affected by HIV/AIDS in order to understand the root of HIV/AIDS-related stigma and discriminatory attitudes. The study was carried out using qualitative focus group discussions (FGDs). An interview guide with semi-structured questions was used. Participants were members of the public in Malaysia. Purposive sampling was adopted for recruitment of participants. A total of 14 focus group discussions (n = 74) were carried out between March and July 2008. HIV/AIDS-related stigma and discrimination towards people living with HIV/AIDS (PLWHA) were profound. Key factors affecting discriminatory attitudes included high-risk-taking behavior, association with stigmatized identities, the source of HIV infection, the stage of the disease, and the relationship with the infected person. Other factors that influence attitudes toward PLWHA include ethnicity and urban-rural locality. Malay participants were less likely than other ethnic groups to perceive no stigmatization if their spouses were HIV positive. HIV/AIDS-related stigma and discrimination were stronger among participants in rural settings. The differences indicate attitudes toward PLWHA are influenced by cultural differences.

  16. Lousa Digital Interativa: avaliação da interação didática e proposta de aplicação de narrativa audiovisual / Interactive White Board – IWB: assessment in interaction didactic and audiovisual narrative proposal

    Directory of Open Access Journals (Sweden)

    Francisco García García

    2011-04-01

    The use of audiovisual material in the classroom does not guarantee effective learning, but for students it remains an interesting and attractive element. This work brings together two studies: the first shows the importance of didactic interaction with the interactive whiteboard, and the second provides a list of audiovisual narrative elements that can be applied in the classroom. On that basis it proposes mastery of audiovisual narrative elements as a theoretical resource for teachers who want to produce audiovisual content for digital platforms such as the interactive whiteboard (Lousa Digital Interativa, LDI). The text is divided into three parts: the first presents the theoretical concepts of the two studies, the second discusses their results, and the third proposes a pedagogical practice of didactic interaction with audiovisual narrative elements for use with the LDI.

  17. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability of collaborative online creation, paying particular attention to the use of different forms of advertising. This article is an excerpt of a larger investigation whose unit of analysis is Online Creation Communities that take the Catalan territory as their central node of activity. From 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis, through face-to-face interviews conducted in 8 of the cases studied. The research, whose conclusions are summarized in this article, leads us to conclude that the sustainability of the projects depends largely on relationships of trust and interdependence between different voluntary agents, on non-monetary contributions and rewards, and on resources and infrastructure that are free to use. Taken together, this leads us to understand that this is and will be a very important area for the future of audiovisual content and its sustainability, which will imply changes in the policies that govern it.

  18. Student performance and their perception of a patient-oriented problem-solving approach with audiovisual aids in teaching pathology: a comparison with traditional lectures.

    Science.gov (United States)

    Singh, Arjun

    2011-01-01

    We use different methods to train our undergraduates. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that imparts knowledge, enhances intrinsic motivation, promotes self-learning, encourages clinical reasoning, and develops long-lasting memory. The aim of this study was to develop POPS in teaching pathology, assess its effectiveness, and assess students' preference for POPS over didactic lectures. One hundred fifty second-year MBBS students were divided into two groups: A and B. Group A was taught by POPS while group B was taught by traditional lectures. Pre- and post-test numerical scores of both groups were evaluated and compared. Students then completed a self-structured feedback questionnaire for analysis. The mean (SD) difference in pre- and post-test scores of groups A and B was 15.98 (3.18) and 7.79 (2.52), respectively. The test statistic for the difference between the group A and group B teaching methods was 16.62 (P < 0.001), demonstrating the effectiveness of POPS. Students responded that POPS facilitates self-learning, helps in understanding topics, creates interest, and is a scientific approach to teaching. Feedback response on POPS was strong in 57.52% of students, moderate in 35.67%, and negative in only 6.81%, showing that 93.19% of students favored POPS over simple lectures. It is not feasible to enforce the PBL method of teaching throughout the entire curriculum; however, POPS can be incorporated along with audiovisual aids to break the monotony of didactic lectures and serve as an alternative to PBL.

  19. AWARENESS REGARDING MODES OF TRANSMISSION AND RELATED MISCONCEPTION ABOUT HIV/AIDS AMONG SECONDARY SCHOOL GOING FEMALES OF PUBLIC AND GOVT SCHOOLS

    Directory of Open Access Journals (Sweden)

    Chhabi Mohan

    2010-06-01

    Research Question: What is the level of awareness about the different modes of transmission of, and related misconceptions about, HIV/AIDS among secondary-school-going females of public and government schools of Kanpur city? Study Area: Public and government schools of Kanpur city. Participants: 120 government and 120 public secondary school female students. Results: 100% of public school female students knew about the heterosexual mode of transmission of HIV/AIDS, as compared to 80% of government school students. Among public school students, transmission of HIV/AIDS through contaminated needles and syringes, intravenous drug abuse, blood transfusion, and mother-to-child transmission was each known to almost 80% of students. Among government school students, except for knowledge about transmission through contaminated needles and syringes (60%) and mother-to-child transmission (55%), the other modes were poorly known (<50%).

  20. Record Desktop Activity as Streaming Videos for Asynchronous, Video-Based Collaborative Learning.

    Science.gov (United States)

    Chang, Chih-Kai

    As Web-based courses using videos have become popular in recent years, the issue of managing audiovisual aids has become noteworthy. The contents of audiovisual aids may include a lecture, an interview, a featurette, an experiment, etc. The audiovisual aids of Web-based courses are transformed into the streaming format that can make the quality of…

  1. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on the compression artifacts. However, compression is only one of the numerous factors influencing the perception...... addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications...
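
    One commonly used way to capture the co-impact of audio and video qualities mentioned above is to model overall audiovisual quality as a function of the two unimodal qualities and their interaction. The sketch below fits such a model by least squares on synthetic mean opinion scores; the functional form and coefficients are illustrative assumptions, not a metric proposed in the paper.

```python
# Fit overall audiovisual quality from audio-only and video-only quality
# ratings plus their product. All ratings below are synthetic.
import numpy as np

rng = np.random.default_rng(5)
mos_a = rng.uniform(1, 5, 60)                  # audio-only quality ratings
mos_v = rng.uniform(1, 5, 60)                  # video-only quality ratings
# Synthetic "ground truth" with a dominant multiplicative term plus noise.
mos_av = (0.6 + 0.12 * mos_a + 0.18 * mos_v
          + 0.10 * mos_a * mos_v + rng.normal(0, 0.2, 60))

X = np.column_stack([np.ones_like(mos_a), mos_a, mos_v, mos_a * mos_v])
coef, *_ = np.linalg.lstsq(X, mos_av, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - mos_av) ** 2))
print("coefficients [const, audio, video, audio*video]:", np.round(coef, 3))
print(f"RMSE of the fitted model: {rmse:.3f}")
```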

  2. Alfasecuencialización: la enseñanza del cine en la era del audiovisual Sequential literacy: the teaching of cinema in the age of audio-visual speech

    Directory of Open Access Journals (Sweden)

    José Antonio Palao Errando

    2007-10-01

    In the so-called «information society», film studies have been diluted into a pragmatic and technological approach to audiovisual discourse, just as the enjoyment of cinema itself has been caught in the net of the DVD and hypertext. Cinema itself reacts to this through complex narrative structures that distance it from standard audiovisual discourse. The role of film studies, and of the university teaching of cinema, should be to reintroduce the subject rejected by informative knowledge by means of the interpretation of the film text.

  3. A economia do audiovisual no contexto contemporâneo das Cidades Criativas

    Directory of Open Access Journals (Sweden)

    Paulo Celso da Silva

    2012-12-01

    This paper addresses the audiovisual economy in cities that hold creative city status. More than an adjective, it is within the set of activities linked to communication (the audiovisual sector among them), culture, fashion, architecture, and handicrafts or local craftwork that such cities have renewed their mode of accumulation, reorganizing public and private spaces. The cities of Barcelona, Berlin, New York, Milan and São Paulo are representative cases for the objective of analyzing cities in relation to the development of the audiovisual sector, drawing on official data that support a more realistic understanding of each of them.

  4. Venezuela: Nueva Experiencia Audiovisual

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    In 1986, the Universidad Simón Bolívar (USB) created the Fundación para el Desarrollo del Arte Audiovisual, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design and photography of high artistic and technical quality, without neglecting the theoretical and academic aspects of these disciplines.

  5. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  6. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.
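
    A shift in the point of subjective simultaneity (PSS) of the kind reported above is typically estimated by fitting the proportion of "simultaneous" responses across stimulus onset asynchronies and comparing the fitted centre before and after adaptation. The sketch below does this with a Gaussian fit on made-up data; the response proportions and the size of the shift are assumptions used only to illustrate the computation.

```python
# Estimate a PSS shift by fitting Gaussians to simultaneity-judgment data.
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, amp, pss, sigma):
    return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

soa = np.array([-300, -200, -100, 0, 100, 200, 300])   # ms, audio-lead negative
p_baseline = np.array([0.10, 0.35, 0.80, 0.95, 0.75, 0.30, 0.08])
p_adapted  = np.array([0.08, 0.25, 0.60, 0.90, 0.90, 0.45, 0.12])

popt_base, _ = curve_fit(gauss, soa, p_baseline, p0=[1.0, 0.0, 100.0])
popt_adapt, _ = curve_fit(gauss, soa, p_adapted, p0=[1.0, 0.0, 100.0])
pss0, pss1 = popt_base[1], popt_adapt[1]

print(f"PSS baseline: {pss0:.1f} ms, after adaptation: {pss1:.1f} ms")
print(f"recalibration shift: {pss1 - pss0:+.1f} ms")
```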

  7. Selective attention modulates the direction of audio-visual temporal recalibration.

    Directory of Open Access Journals (Sweden)

    Nara Ikumi

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore attention to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  8. UNDERSTANDING PROSE THROUGH TASK ORIENTED AUDIO-VISUAL ACTIVITY: AN AMERICAN MODERN PROSE COURSE AT THE FACULTY OF LETTERS, PETRA CHRISTIAN UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Sarah Prasasti

    2001-01-01

    The method presented here provides the basis for a course in American prose for EFL students. Understanding and appreciation of American prose is a difficult task for the students because they come into contact with works that are full of cultural baggage and far apart from their own world. The audio-visual aid is one of the alternatives to sensitize the students to the topic and the cultural background. Instead of providing ready-made audio-visual aids, teachers can involve students actively in a more task-oriented audiovisual project. Here, the teachers encourage their students to create their own audio-visual aids using colors, pictures, sound, and gestures as a point of initiation for further discussion. The students can use color, which has become a strong element of fiction, to help them call up a forceful visual representation. Pictures can also stimulate the students to build their mental image. Sound and silence, which are a part of the fabric of literature, may also help them to increase the emotional impact.

  9. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  10. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients, between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance... training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between the data sets (2) and (3) suggested superior postural stability...

  11. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a... section 337 as to certain audiovisual components and products containing the same with respect to claims 1...

  13. Remote hearing aid fitting: Tele-audiology in the context of Brazilian Public Policy

    Science.gov (United States)

    Penteado, Silvio Pires; Ramos, Sueli de Lima; Battistella, Linamara Rizzo; Marone, Silvio Antonio Monteiro; Bento, Ricardo Ferreira

    2012-01-01

    Summary: Introduction: Currently, the Brazilian government has certified nearly 140 specialized centers in hearing aid fitting through the Brazilian National Health System (SUS). Remote fitting through the Internet can allow broader and more efficient coverage with a higher likelihood of success for patients covered by the SUS, as they can receive fittings in their own homes instead of going to the few and distant specialized centers. Aim: To describe a case of remote fitting between 2 cities, with a review of the literature. Method: Computer equipment, a universal interface, and hearing aids were used. Case study: An audiologist located in a specialized center introduced a new hearing aid and its fitting procedure to a remote center (200 km away). The specialized center helped the remote center in fitting a hearing aid in 2 patients, and performed fitting in one of its own patients. The whole process was done through the Internet with audio and video in real time. Results: Three patients were fitted remotely. Three audiologists were remotely trained on how to fit the hearing aids. Conclusions: Remote fitting of hearing aids is possible through the Internet, as is supplying technical training to a remote center about the fitting procedures. Such a technological approach can help the government advance public policies on hearing rehabilitation, as patients can be motivated to maintain their use of hearing aids with the option to ask for help in the comfort of their own homes. PMID:25991960

  14. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation

  15. Search in audiovisual broadcast archives : doctoral abstract

    NARCIS (Netherlands)

    Huurnink, B.

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage shot by overseas services for the evening news, or a documentary maker might require

  16. Correlates of Job Placement Practice: Public Rehabilitation Counselors and Consumers Living with AIDS

    Science.gov (United States)

    Hergenrather, Kenneth C.; Rhodes, Scott D.; McDaniel, Randall S.

    2005-01-01

    The Theory of Planned Behavior (TPB) was used to study the factors that influence the intention of public rehabilitation counselors to place consumers living with AIDS into jobs. Participants completed the Rehabilitation Counselor Intention to Place Survey, which was based on 2,089 elicited salient job placement beliefs of 155 public…

  17. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    Science.gov (United States)

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations or both faces and vocalization had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  18. Audiovisual distraction for pain relief in paediatric inpatients: A crossover study.

    Science.gov (United States)

    Oliveira, N C A C; Santos, J L F; Linhares, M B M

    2017-01-01

    Pain is a stressful experience that can have a negative impact on child development. The aim of this crossover study was to examine the efficacy of audiovisual distraction for acute pain relief in paediatric inpatients. The sample comprised 40 inpatients (6-11 years) who underwent painful puncture procedures. The participants were randomized into two groups, and all children received the intervention and served as their own controls. Stress and pain-catastrophizing assessments were initially performed using the Child Stress Scale and Pain Catastrophizing Scale for Children, with the aim of controlling these variables. The pain assessment was performed using a Visual Analog Scale and the Faces Pain Scale-Revised after the painful procedures. Group 1 received audiovisual distraction before and during the puncture procedure, which was performed again without intervention on another day. The procedure was reversed in Group 2. Audiovisual distraction used animated short films. A 2 × 2 × 2 analysis of variance for a 2 × 2 crossover study was performed, with a 5% level of statistical significance. The two groups had similar baseline measures of stress and pain catastrophizing. A significant difference was found between periods with and without distraction in both groups, in which scores on both pain scales were lower during distraction compared with no intervention. The sequence of exposure to the distraction intervention in both groups and first versus second painful procedure during which the distraction was performed also significantly influenced the efficacy of the distraction intervention. Audiovisual distraction effectively reduced the intensity of pain perception in paediatric inpatients. The crossover study design provides a better understanding of the power effects of distraction for acute pain management. Audiovisual distraction was a powerful and effective non-pharmacological intervention for pain relief in paediatric inpatients. The effects were

  19. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
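
    Two of the statistics reported above, the correlation between mouth-opening area and the acoustic envelope and the lag between them, can be illustrated with a short computation on synthetic signals, as in the sketch below. The 100 Hz frame rate, noise levels and 150 ms lag are assumptions, not values from the study.

```python
# Correlation and lag between a synthetic mouth-area signal and a synthetic
# acoustic envelope sharing a 2-7 Hz "syllabic" modulation.
import numpy as np

fs = 100.0                        # Hz, frame rate of both signals (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)

drive = np.sin(2 * np.pi * 4.0 * t) + 0.5 * np.sin(2 * np.pi * 6.0 * t)
mouth_area = drive + 0.3 * rng.standard_normal(t.size)
lag_samples = int(0.15 * fs)      # envelope trails the mouth by 150 ms
envelope = np.roll(drive, lag_samples) + 0.3 * rng.standard_normal(t.size)

r = np.corrcoef(mouth_area, envelope)[0, 1]
print(f"zero-lag correlation: {r:.2f}")

# Cross-correlation to recover the mouth-to-voice lag.
m = mouth_area - mouth_area.mean()
e = envelope - envelope.mean()
xcorr = np.correlate(e, m, mode="full")
lags = np.arange(-t.size + 1, t.size)
best_lag_ms = lags[np.argmax(xcorr)] / fs * 1000
print(f"estimated lag of envelope behind mouth: {best_lag_ms:.0f} ms")
```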

  20. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method to obtain appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skill training when using audio features and audiovisual features. Results showed that the visual features were effective in improving users' social skills.
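
    The audiovisual features named above (ratio of smiling, yaw and pitch) reduce to simple per-session summaries of per-frame tracker output. The sketch below computes such summaries from synthetic per-frame values; in practice the smile flags and head-pose angles would come from a face tracker, which is not shown here.

```python
# Per-session audiovisual feature summaries from synthetic per-frame data.
import numpy as np

rng = np.random.default_rng(6)
n_frames = 900                               # e.g. 30 s at 30 fps (assumed)
smiling = rng.random(n_frames) < 0.35        # per-frame smile detection (bool)
yaw = rng.normal(0, 8, n_frames)             # head yaw in degrees
pitch = rng.normal(-2, 5, n_frames)          # head pitch in degrees

features = {
    "smile_ratio": smiling.mean(),           # fraction of frames smiling
    "yaw_mean_abs": np.abs(yaw).mean(),      # how much the head turns
    "pitch_mean_abs": np.abs(pitch).mean(),  # how much the head nods
}
print(features)
```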

  1. Regional Climate Change and Development of Public Health Decision Aids

    Science.gov (United States)

    Hegedus, A. M.; Darmenova, K.; Grant, F.; Kiley, H.; Higgins, G. J.; Apling, D.

    2011-12-01

    According to the World Health Organization (WHO), climate change is a significant and emerging threat to public health, and changes the way we must look at protecting vulnerable populations. Worldwide, the occurrence of some diseases and other threats to human health depends predominantly on local climate patterns. Rising average temperatures, in combination with changing rainfall patterns and humidity levels, alter the lifecycle and regional distribution of certain disease-carrying vectors, such as mosquitoes, ticks and rodents. In addition, higher surface temperatures will bring heat waves and heat stress to urban regions worldwide and will likely increase heat-related health risks. A growing body of scientific evidence also suggests an increase in extreme weather events such as floods, droughts and hurricanes that can be destructive to human health and well-being. Therefore, climate adaptation and health decision aids are urgently needed by city planners and health officials to determine high-risk areas, evaluate vulnerable populations and develop public health infrastructure and surveillance systems. To address current deficiencies in local planning and decision making with respect to regional climate change and its effect on human health, our research is focused on performing a dynamical downscaling with the Weather Research and Forecasting (WRF) model to develop decision aids that translate the regional climate data into actionable information for users. The WRF model is initialized with the Max Planck Institute European Center/Hamburg Model version 5 (ECHAM5) General Circulation Model simulations forced with the Special Report on Emissions Scenarios (SRES) A1B scenario. Our methodology involves development of climatological indices of extreme weather, quantifying the risk of occurrence of water/rodent/vector-borne diseases as well as developing various heat stress related decision aids. Our results indicate that the downscaled simulations provide the necessary
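
    As a rough illustration of the climatological indices of extreme weather mentioned above, the sketch below turns a series of downscaled daily maximum temperatures into two simple heat indices: the number of days above a local percentile threshold and the longest consecutive hot spell. The threshold choice and the synthetic temperature series are assumptions, not the project's actual decision-aid definitions.

```python
# Simple extreme-heat indices from downscaled daily Tmax (synthetic data).
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for one grid cell of downscaled daily Tmax (deg C) for one summer.
tmax = 28 + 6 * np.sin(np.linspace(0, np.pi, 92)) + rng.normal(0, 2.5, 92)

threshold = np.percentile(tmax, 90)          # local 90th-percentile threshold
hot = tmax > threshold
hot_days = int(hot.sum())

# Longest run of consecutive hot days (a crude heat-wave duration index).
longest = run = 0
for flag in hot:
    run = run + 1 if flag else 0
    longest = max(longest, run)

print(f"90th-percentile Tmax threshold: {threshold:.1f} C")
print(f"days above threshold: {hot_days}, longest hot spell: {longest} days")
```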

  2. Presentación: Narrativas de no ficción audiovisual, interactiva y transmedia

    Directory of Open Access Journals (Sweden)

    Arnau Gifreu Castells

    2015-03-01

    Number 8 of the journal explores audiovisual, interactive and transmedia forms of non-fiction narrative expression. Throughout the history of communication, the field of non-fiction has always been regarded as lesser than its fictional counterpart. The same is true in research, where studies of audiovisual, interactive and transmedia fiction narratives have always been one step ahead of studies of non-fiction. This monograph proposes a theoretical and practical approach to non-fiction narrative forms such as the documentary, reportage, the essay, educational formats and institutional films, in order to offer a picture of their current position in the media ecosystem. Keywords: Non-fiction, Audiovisual Narrative, Interactive Narrative, Transmedia Narrative.

  3. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    This paper presents a literature review and experimental results from the pilot doctoral research project on an audiovisual montage narrative language for the interactive documentary film, which defends the thesis that interactive features exist in the audio and video editing of a film, with the editing itself acting as an agent of interactivity. The search for interactive audiovisual formats is present in international research, but mostly from a technological point of view. The paper proposes possible formats for interactive audiovisual production in film, video, television, computers and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  4. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    Science.gov (United States)

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.
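
    The primary outcome measure discussed above, percentage of syllables stuttered (%SS), is a simple ratio, and the reported 18% difference between modes is a relative change in that ratio. The sketch below shows both computations with made-up counts.

```python
# %SS from syllable counts, and the relative change between audio-only and
# audiovisual scoring of the same sample. All counts below are made up.
def percent_ss(stuttered_syllables: int, total_syllables: int) -> float:
    """%SS = stuttered syllables / total syllables x 100."""
    return 100.0 * stuttered_syllables / total_syllables

audio_only = percent_ss(stuttered_syllables=42, total_syllables=1400)
audiovisual = percent_ss(stuttered_syllables=50, total_syllables=1400)

relative_increase = (audiovisual - audio_only) / audio_only * 100
print(f"audio-only %SS: {audio_only:.2f}, audiovisual %SS: {audiovisual:.2f}")
print(f"relative increase from adding video: {relative_increase:.0f}%")
```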

  5. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi eKafaligonul

    2015-03-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing has some potential role.

  6. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

    The digital information society poses new challenges to researchers. As audiovisual communication has consolidated as a discipline, cultural studies has emerged as an advantageous analytical perspective from which to approach new creative and consumption practices in audiovisual media. This article argues for the study of the audiovisual cultural products that this digital society produces, since they are a testimony of the social changes taking place within it. Specifically, it proposes an approach to social advertising and to objects of collective creation on the Internet as a means of understanding the circumstances of our society.

  7. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  8. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  9. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...
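
    The detector described above fuses the outcomes of unimodal audio and visual detectors after each has produced a per-frame estimate. The sketch below illustrates one simple post-decision fusion scheme, a weighted average of per-frame speech probabilities, on synthetic data; the weights, probabilities and threshold are assumptions, and the HMM-based unimodal detectors themselves are not implemented here.

```python
# Late fusion of per-frame speech probabilities from two modalities.
import numpy as np

rng = np.random.default_rng(7)
frames = 200
truth = (np.sin(np.linspace(0, 6 * np.pi, frames)) > 0).astype(float)

# Stand-ins for the per-frame posteriors an audio detector and a visual
# (e.g. mouth-region) detector would output.
p_audio = np.clip(truth + rng.normal(0, 0.30, frames), 0, 1)
p_visual = np.clip(truth + rng.normal(0, 0.45, frames), 0, 1)

w_audio = 0.7                                     # assumed reliability weight
p_fused = w_audio * p_audio + (1 - w_audio) * p_visual
decision = p_fused > 0.5

accuracy = (decision == truth.astype(bool)).mean()
print(f"frame-level accuracy of the fused detector: {accuracy:.2f}")
```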

  10. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Pollock, Sean; Tse, Regina; Martin, Darren

    2015-01-01

    This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed.

  11. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift. (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira

    2008-06-01

    Full Text Available This work shows the development of audiovisual webjournalism on the Brazilian Internet. The paper, based on the analysis of UOL News on UOL TV (a pioneering format on commercial web television) and of UERJ Online TV (the first online university television in Brazil), investigates the changes in the gathering, production and dissemination processes of audiovisual news when it starts to be transmitted through the web. Reflections of authors such as Herreros (2003), Manovich (2001) and Gosciola (2003) are used to discuss the construction of audiovisual narrative on the web. To comprehend the current changes in today's webjournalism, we draw on the concepts developed by Fidler (1997); Bolter and Grusin (1998); Machado (2000); Mattos (2002) and Palacios (2003). We may conclude that the organization of narrative elements in cyberspace makes for the efficiency of journalistic messages, while establishing the basis of a particular language for audiovisual news on the Internet.

  13. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

    Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise...

  14. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  15. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time use and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected after synchronization when speech and video are fused. The experimental results demonstrate that the system performs well in real time and achieves a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend in emotion recognition.
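
    As a loose sketch of the feature-level fusion idea described in the record (concatenating selected speech and facial features into one audiovisual vector), the snippet below trains a generic classifier on synthetic placeholder data; a random forest stands in here for the paper's rough set-based pipeline and is not the authors' method.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200                                   # number of labelled utterances (synthetic stand-in)
    speech_feats = rng.normal(size=(n, 13))   # e.g. selected prosodic/spectral features
    facial_feats = rng.normal(size=(n, 10))   # e.g. selected tracked facial-point features
    labels = rng.integers(0, 4, size=n)       # four emotion classes (placeholder)

    # Feature-level (early) fusion: concatenate the selected audio and visual features.
    fused = np.hstack([speech_feats, facial_feats])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("fused  :", cross_val_score(clf, fused, labels, cv=5).mean())
    print("audio  :", cross_val_score(clf, speech_feats, labels, cv=5).mean())
    print("visual :", cross_val_score(clf, facial_feats, labels, cv=5).mean())
    ```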

  16. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to the region of interest, is more efficient than coding all regions in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between the audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored and the gain in compression efficiency is analyzed.
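
    A minimal sketch of the core idea, correlating the audio envelope with per-region visual motion energy and selecting the best-matching region as the audiovisual focus of attention; the region layout, signal lengths and features are illustrative assumptions rather than the paper's algorithm.

    ```python
    import numpy as np

    def audiovisual_focus(audio_env, motion_energy):
        """Pick the spatial region whose motion dynamics best track the audio envelope.

        audio_env     : (T,) per-frame audio loudness/envelope
        motion_energy : (R, T) per-region, per-frame visual motion energy
        Returns the index of the region with the highest Pearson correlation,
        plus all region correlations.
        """
        a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
        corrs = []
        for region in motion_energy:
            v = (region - region.mean()) / (region.std() + 1e-9)
            corrs.append(float(np.mean(a * v)))   # mean of z-score products = Pearson r
        return int(np.argmax(corrs)), corrs

    # Toy example: region 2 moves in sync with the audio envelope.
    T = 300
    t = np.arange(T)
    audio = np.abs(np.sin(2 * np.pi * t / 50))
    regions = np.random.default_rng(1).normal(size=(4, T)) * 0.3
    regions[2] += audio
    best, corrs = audiovisual_focus(audio, regions)
    print(best, np.round(corrs, 2))
    ```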

  17. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  18. 45 CFR 707.10 - Auxiliary aids.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Auxiliary aids. 707.10 Section 707.10 Public Welfare Regulations Relating to Public Welfare (Continued) COMMISSION ON CIVIL RIGHTS ENFORCEMENT OF... § 707.10 Auxiliary aids. (a) The Agency shall furnish appropriate auxiliary aids where necessary to...

  19. Selected Bibliography of Egyptian Educational Materials, Vol. 2, No. 3, 1976.

    Science.gov (United States)

    Al-Ahram Center for Scientific Translations, Cairo (Egypt).

    The selective annotated bibliography of Egyptian educational publications contains 109 entries on 42 topics. Included are journal articles, books, and government documents published during 1976. Content includes the following topics: adult education, art education, audiovisual aids, care for distinguished students, educational planning,…

  20. A Four-State Comparison of Expenditures and Income Sources of Financial Aid Recipients in Public Colleges and Universities.

    Science.gov (United States)

    Stampen, Jacob O.; Fenske, Robert H.

    The way public college students finance college was studied, based on student resource and expenditure surveys from four states: Arizona, California, New York, and Wisconsin. Comparisons were made of demographic and academic variables, as well as expenditure patterns of students receiving different kinds of aid. The following four aid recipient…

  1. Mental health first aid responses of the public: results from an Australian national survey

    Directory of Open Access Journals (Sweden)

    Kitchener Betty A

    2005-02-01

    Full Text Available Background: The prevalence of mental disorders is so high that members of the public will commonly have contact with someone affected. How they respond to that person (the mental health first aid response) may affect outcomes. However, there is no information on what members of the public might do in such circumstances. Methods: In a national survey of 3998 Australian adults, respondents were presented with one of four case vignettes and asked what they would do if that person was someone they had known for a long time and cared about. There were four types of vignette: depression, depression with suicidal thoughts, early schizophrenia, and chronic schizophrenia. Verbatim responses to the open-ended question were coded into categories. Results: The most common responses to all vignettes were to encourage professional help-seeking and to listen to and support the person. However, a significant minority did not give these responses. Much less common responses were to assess the problem or risk of harm, to give or seek information, to encourage self-help, or to support the family. Few respondents mentioned contacting a professional on the person's behalf or accompanying them to a professional. First aid responses were generally more appropriate in women, those with less stigmatizing attitudes, and those who correctly identified the disorder in the vignette. Conclusions: There is room for improving the range of mental health first aid responses in the community. Lack of knowledge of mental disorders and stigmatizing attitudes are important barriers to effective first aid.

  2. A linguagem audiovisual da lousa digital interativa no contexto educacional/Audiovisual language of the digital interactive whiteboard in the educational environment

    Directory of Open Access Journals (Sweden)

    Rosária Helena Ruiz Nakashima

    2006-01-01

    Full Text Available This article presents the digital interactive whiteboard as a tool for bringing audiovisual language into the school context. To work, the interactive whiteboard must be connected to a computer, and the computer to a multimedia projector; through Digital Vision Touch (DViT) technology, the surface of the board becomes touch-sensitive. In this way, using a finger, teachers and pupils carry out functions that increase interactivity with the activities proposed on the board. Two possibilities for pedagogical activities are presented, in the knowledge areas of Science and Portuguese Language, which can be applied in early childhood education with five- and six-year-old pupils. This technology reflects the evolution of a type of language that is no longer based only on orality and writing, but is also audiovisual and dynamic, since it allows the subject to be a producer of information as well as a receiver. The school should therefore take advantage of these technological resources, which facilitate work with audiovisual language in the classroom and allow the preparation of more meaningful and innovative lessons.

  3. Humanizing HIV/AIDS and its (re)stigmatizing effects: HIV public 'positive' speaking in India.

    Science.gov (United States)

    Finn, Mark; Sarangi, Srikant

    2009-01-01

    Social stigma has been inextricably linked with HIV and AIDS since the epidemic erupted in the early 1980s. The stigma that has built up around HIV and AIDS is generally regarded as having a negative impact on the quality of life of HIV-positive people and on general prevention efforts. Current attempts to combat HIV-related stigma focus on increasing the acceptance of HIV among the stigmatizing public and stigmatized individuals alike. In this, the global HIV-positive community is being increasingly called upon to 'humanize' the virus, not least through public displays of HIV 'positive' health and public 'positive' speaking. This article critically explores the constitutive effects and inherent power relations of HIV Positive Speakers' Bureaus (PSBs) as a platform for such a display. Adopting a post-structuralist discourse analytic approach, we explore accounts of positive-speaking and HIV health from HIV-related non-government organizations in India and in PSB training manuals. In particular, we highlight ways in which positive-speaking in India can be seen to have significant (re)stigmatizing effects by way of ambivalent and hyper-real configurations of HIV 'positive' identity and life.

  4. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.

  5. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
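
    As a hedged sketch of the two measures used in the record, the snippet below computes a reproducibility index as the mean pairwise correlation of within-category trial patterns and a cross-validated decoding accuracy on synthetic data; the paper's exact definitions and classifier may differ.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def reproducibility_index(patterns):
        """Mean pairwise Pearson correlation among trial patterns of one category.

        patterns : (n_trials, n_voxels) array; higher values mean more similar
        within-category patterns. A plausible stand-in for the paper's index,
        not necessarily its exact definition.
        """
        c = np.corrcoef(patterns)           # trial-by-trial correlation matrix
        iu = np.triu_indices_from(c, k=1)   # upper triangle, excluding the diagonal
        return float(c[iu].mean())

    # Synthetic patterns: each category has a shared template plus trial noise.
    rng = np.random.default_rng(0)
    template_old, template_young = rng.normal(size=500), rng.normal(size=500)
    old_trials = template_old + rng.normal(scale=1.0, size=(40, 500))
    young_trials = template_young + rng.normal(scale=1.0, size=(40, 500))

    print("reproducibility (old):  ", round(reproducibility_index(old_trials), 3))
    print("reproducibility (young):", round(reproducibility_index(young_trials), 3))

    # Decoding accuracy: cross-validated classification of the two categories.
    X = np.vstack([old_trials, young_trials])
    y = np.array([0] * 40 + [1] * 40)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print("decoding accuracy:", round(acc, 3))
    ```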

  6. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  7. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  8. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  9. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have copyright...

  10. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    NARCIS (Netherlands)

    Talsma, D.; Doty, Tracy J.; Woldorff, Marty G.

    2007-01-01

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and

  12. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate...... vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did...... not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...

  13. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available Abstract We propose a novel approach for video classification that is based on the analysis of the temporal relationships between the basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that may characterize its content and structure. The aim of this work is to use this representation to compute a similarity measure between two documents. Approaches for audiovisual document classification are presented and discussed. Experiments were conducted on a set of 242 video documents and the results show the efficiency of our proposals.
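
    A hedged sketch of a Temporal Relation Matrix built from two lists of basic segments, using a simplified Allen-style relation set; the paper's actual relation taxonomy, normalization and segment types may differ.

    ```python
    from collections import Counter

    def temporal_relation(a, b):
        """Classify the temporal relation between two segments given as (start, end).

        A simplified Allen-style set; the paper's exact relation taxonomy may differ.
        """
        a_start, a_end = a
        b_start, b_end = b
        if a_end <= b_start:
            return "before"
        if b_end <= a_start:
            return "after"
        if a_start >= b_start and a_end <= b_end:
            return "during"
        if b_start >= a_start and b_end <= a_end:
            return "contains"
        return "overlaps"

    def temporal_relation_matrix(segments_a, segments_b):
        """Normalized histogram of relations between every pair of segments of two event types."""
        counts = Counter(temporal_relation(a, b) for a in segments_a for b in segments_b)
        total = sum(counts.values()) or 1
        return {rel: n / total for rel, n in counts.items()}

    # Toy example: speech segments vs. music segments (times in seconds)
    speech = [(0, 4), (10, 15), (20, 30)]
    music = [(2, 12), (16, 19), (25, 40)]
    print(temporal_relation_matrix(speech, music))
    ```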

  14. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs monetary conditioning) and multisensory paradigm (2AFC visual detection vs redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (= conditioned stimulus, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, –50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 – AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in detection of visual targets.
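
    For reference, the facilitation measure quoted in the record and the race-model inequality commonly tested in redundant-target paradigms can be written as follows (standard formulations; the study's exact test statistics may differ).

    ```latex
    % Redundant-target facilitation and Miller's race-model inequality
    % (standard formulations; notation is illustrative).
    \[
    \text{Facilitation} \;=\; \frac{\overline{RT}_A + \overline{RT}_V}{2} \;-\; \overline{RT}_{AV}
    \]
    \[
    \text{Race model:}\qquad
    P\!\left(RT_{AV} \le t\right) \;\le\; P\!\left(RT_{A} \le t\right) + P\!\left(RT_{V} \le t\right)
    \quad \text{for all } t.
    \]
    % A violation (the left side exceeding the right for some t) is commonly taken
    % as evidence of multisensory integration rather than purely statistical facilitation.
    ```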

  15. Democracy as a meaning. Regional participatory forums of public consultation in Argentina

    Directory of Open Access Journals (Sweden)

    Víctor Humberto Guzmán

    2017-06-01

    Full Text Available This paper presents a study of part of the dispute process around the Audiovisual Communication Services law in the Argentine public space during 2009. Specifically, it shows how the signification of democracy was configured in the Regional Participatory Forums of Public Consultation (FPCP) organized by the Federal Broadcasting Committee (COMFER), which were held during 2009 as a stage prior to the presentation of the Audiovisual Communication Services Bill. Thus, from the analysis of the interventions in the FPCP, the paper presents the emergence of democracy as democratic gradualness, configured in three analytical dimensions: what democracy is not, democracy as plurality, and democracy as participation.

  16. 6th International Workshop on Computer-Aided Scheduling of Public Transport

    CERN Document Server

    Branco, Isabel; Paixão, José

    1995-01-01

    This proceedings volume consists of papers presented at the Sixth International Workshop on Computer-Aided Scheduling of Public Transport, which was held at the Fundação Calouste Gulbenkian in Lisbon from July 6th to 9th, 1993. In the tradition of alternating Workshops between North America and Europe - Chicago (1975), Leeds (1980), Montreal (1983), Hamburg (1987) and again Montreal (1990) - the European city of Lisbon was selected as the venue for the Workshop in 1993. As in earlier Workshops, the central theme dealt with vehicle and duty scheduling problems and the employment of operations-research-based software systems for operational planning in public transport. However, as was initiated in Hamburg in 1987, the scope of this Workshop was broadened to include topics in related fields. This fundamental alteration was an inevitable consequence of the growing demand over the last decade for solutions to the complete planning process in public transport through integrated systems. Therefore, the program of thi...

  17. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions, with larger N1 suppression for spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    Science.gov (United States)

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was subsequently shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in the performance of both groups over the duration of the three MCQs. The intervention group significantly outperformed their control group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: This study appraises an intervention in medical education. Kirkpatrick Level 2b (modification of knowledge). Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling eLee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.
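
    One common way to quantify such a temporal integration window is to fit a Gaussian to the proportion of "synchronous" judgments across SOAs and take its width; the sketch below does this for one hypothetical observer. The response proportions are invented and the paper may have used a different fitting procedure, so this is an assumption-laden illustration only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(soa, amp, mu, sigma):
        """Proportion of 'synchronous' responses as a function of SOA (ms)."""
        return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

    # SOAs used in the record (ms) and made-up response proportions for one observer
    soas = np.array([-360, -300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300, 360])
    p_sync = np.array([0.05, 0.1, 0.2, 0.45, 0.7, 0.9, 0.95, 0.9, 0.75, 0.5, 0.25, 0.1, 0.05])

    (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 100.0])
    fwhm = 2.355 * sigma  # one common definition of the integration-window width
    print(f"peak={amp:.2f}, centre={mu:.1f} ms, width (FWHM)={fwhm:.0f} ms")
    ```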

  1. On-line repository of audiovisual material feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  2. Revelation of shrunken or stretched binomial dispersion and public perception of situations which might spread AIDS or HIV

    OpenAIRE

    Ramalingam Shanmugam

    2014-01-01

    Background: In 1985, the Centers for Disease Control coined the name "Acquired Immune Deficiency Syndrome (AIDS)" to refer to a deadly illness. The World Health Organization (WHO) estimated that about 33.4 million people were suffering from AIDS and two million people (including 330,000 children) died in 2009 alone in many parts of the world. A scary fact is that the public worry about situations which might spread AIDS according to reported survey result in Meulders et al. (...

  3. Selected Bibliography of Egyptian Educational Materials, Vol. 2, No. 2, 1976.

    Science.gov (United States)

    Al-Ahram Center for Scientific Translations, Cairo (Egypt).

    One hundred fourteen entries on 58 topics are contained in the selective annotated bibliography of Egyptian publications on education. Included are journal articles, books, and government documents published during 1976. Content includes the following topics: adult education, Arabic language, audiovisual aids, child upbringing, civics, economics…

  4. Net neutrality and audiovisual services

    OpenAIRE

    van Eijk, N.; Nikoltchev, S.

    2011-01-01

    Net neutrality is high on the European agenda. New regulations for the communication sector provide a legal framework for net neutrality and need to be implemented on both a European and a national level. The key element is not just about blocking or slowing down traffic across communication networks: the control over the distribution of audiovisual services constitutes a vital part of the problem. In this contribution, the phenomenon of net neutrality is described first. Next, the European a...

  5. Shopping on the Public and Private Health Insurance Marketplaces: Consumer Decision Aids and Plan Presentation.

    Science.gov (United States)

    Wong, Charlene A; Kulhari, Sajal; McGeoch, Ellen J; Jones, Arthur T; Weiner, Janet; Polsky, Daniel; Baker, Tom

    2018-05-29

    The design of the Affordable Care Act's (ACA) health insurance marketplaces influences complex health plan choices. To compare the choice environments of the public health insurance exchanges in the fourth (OEP4) versus third (OEP3) open enrollment period and to examine online marketplaces run by private companies, including a comparison of total cost estimates. In November-December 2016, we examined the public and private online health insurance exchanges. We navigated each site for "real-shopping" (personal information required) and "window-shopping" (no required personal information). Public (n = 13; 12 state-based marketplaces and HealthCare.gov) and private (n = 23) online health insurance exchanges. Features included consumer decision aids (e.g., total cost estimators, provider lookups) and plan display (e.g., order of plans). We examined private health insurance exchanges for notable features (i.e., those not found on public exchanges) and compared the total cost estimates on public versus private exchanges for a standardized consumer. Nearly all studied consumer decision aids saw increased deployment in the public marketplaces in OEP4 compared to OEP3. Over half of the public exchanges (n = 7 of 13) had total cost estimators (versus 5 of 14 in OEP3) in window-shopping and integrated provider lookups (window-shopping: 7; real-shopping: 8). The most common default plan orders were by premium or total cost estimate. Notable features on private health insurance exchanges were unique data presentation (e.g., infographics) and further personalized shopping (e.g., recommended plan flags). Health plan total cost estimates varied substantially between the public and private exchanges (average difference $1526). The ACA's public health insurance exchanges offered more tools in OEP4 to help consumers select a plan. While private health insurance exchanges presented notable features, the total cost estimates for a standardized consumer varied widely on public

  6. Audiovisual integration of speech in a patient with Broca's Aphasia

    Science.gov (United States)

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  7. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  8. School Building Design and Audio-Visual Resources.

    Science.gov (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  9. Iniciativas e ações feministas no audiovisual brasileiro contemporâneo

    Directory of Open Access Journals (Sweden)

    Marina Cavalcanti Tedesco

    2017-10-01

    Full Text Available It is fair to say that in the last two years the word feminism has acquired new weight, gaining significant space on social networks, in the media and on the streets. The audiovisual sector is one of the areas that accompanied this recent rise of feminism, which materialized in a series of initiatives focused on claiming rights and discussing sexism in the labour market. In this article we intend, with no pretension of exhausting the subject, to present and reflect on eight initiatives that we consider emblematic of this contemporary intersection between feminism and cinema: Mulher no Cinema, Mulheres do Audiovisual Brasil, Mulheres Negras no Audiovisual Brasileiro, Cabíria Prêmio de Roteiro, Eparrêi Filmes, Academia das Musas, Cineclube Delas and FINCAR – Festival Internacional de Cinema de Realizadoras.

  10. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    Science.gov (United States)

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue

  11. Text-to-audiovisual speech synthesizer for children with learning disabilities.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun

    2013-01-01

    Learning disabilities affect the ability of children to learn, despite their having normal intelligence. Assistive tools can highly increase functional capabilities of children with learning disorders such as writing, reading, or listening. In this article, we describe a text-to-audiovisual synthesizer that can serve as an assistive tool for such children. The system automatically converts an input text to audiovisual speech, providing synchronization of the head, eye, and lip movements of the three-dimensional face model with appropriate facial expressions and word flow of the text. The proposed system can enhance speech perception and help children having learning deficits to improve their chances of success.

  12. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  13. Sincronía entre formas sonoras y formas visuales en la narrativa audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. José Alfredo Sánchez Ríos

    1999-01-01

    Full Text Available Where should the researcher stand in order to carry out work that yields deeper knowledge for understanding a phenomenon as close and as complex as audiovisual communication, which uses sound and image at the same time? What is the role of the researcher in audiovisual communication in contributing new approaches to this object of study? From this perspective, we believe that the new task of the researcher in audiovisual communication will be to build a theory that is less interpretative and subjective, and to direct observations towards segmented knowledge that can be demonstrated, repeated and self-questioned; that is, to study, elaborate and construct a theory with a new and greater methodological rigour.

  14. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  15. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
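
    For context, the MLE predictions against which the observed bimodal precision and bias are typically compared take the standard cue-combination form (textbook notation, shown here as a hedged reference rather than the study's exact equations).

    ```latex
    % Standard maximum-likelihood (MLE) cue-combination predictions used to test
    % optimal audiovisual integration (textbook form; notation is illustrative).
    \[
    \hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V,
    \qquad
    w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}},
    \quad
    w_V = 1 - w_A,
    \]
    \[
    \sigma_{AV}^{2} \;=\; \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}
    \;\le\; \min\!\left(\sigma_A^{2}, \sigma_V^{2}\right).
    \]
    % Observed bimodal localization precision and bias are compared with these
    % predictions; agreement is taken as evidence of statistically optimal integration.
    ```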

  16. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
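
    A minimal sketch of how a frequency-tagged steady-state response amplitude can be extracted at the two stimulation rates mentioned in the record (3.14 and 3.63 Hz), using a single-frequency DFT on a synthetic EEG-like signal; the sampling rate, duration and amplitudes are illustrative assumptions, not the study's analysis pipeline.

    ```python
    import numpy as np

    def amplitude_at(eeg, freq_hz, fs):
        """Single-frequency DFT amplitude of one EEG channel (steady-state response).

        eeg : (n_samples,) signal; fs : sampling rate in Hz.
        Evaluating the DFT at the exact tagging frequency avoids binning issues
        when the frequency does not fall on an FFT bin.
        """
        n = len(eeg)
        t = np.arange(n) / fs
        basis = np.exp(-2j * np.pi * freq_hz * t)
        return 2.0 * np.abs(np.dot(eeg, basis)) / n

    # Toy signal: 20 s of EEG-like noise with a small 3.14 Hz component added
    fs, dur = 250, 20
    t = np.arange(fs * dur) / fs
    rng = np.random.default_rng(0)
    eeg = 0.5 * np.sin(2 * np.pi * 3.14 * t) + rng.normal(scale=1.0, size=t.size)

    for f in (3.14, 3.63):  # tagging rates of the two visual stimuli in the record
        print(f"{f} Hz amplitude:", round(amplitude_at(eeg, f, fs), 3))
    ```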

  17. Audiovisual en línea en la universidad española: bibliotecas y servicios especializados (una panorámica

    Directory of Open Access Journals (Sweden)

    Alfonso López Yepes

    2014-08-01

    Full Text Available An overview of the state of online audiovisual information in Spanish university libraries and audiovisual services, with examples of specific applications and developments. The presence of audiovisual content is highlighted mainly in blogs, IPTV channels, the libraries' own portals and specific initiatives such as "La Universidad Responde", run by the audiovisual services of Spanish universities, which constitutes a leading frame of reference and information dissemination for the library field as well; and in social networks, including a proposed model of a university library social network. Reference is also made to the participation of libraries and services in collaborative research and social development projects, already effective within the project "Red iberoamericana de patrimonio sonoro y audiovisual", which is committed to the social construction of audiovisual knowledge based on the interaction between different multidisciplinary groups of professionals and different communities of users and institutions.

  18. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without a frame-independency assumption. The experimental results on Tibetan speech data from real-world environments show that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  19. THE USE OF VISUAL AIDS IN A SOCIO-INTERACTIVE ENVIRONMENT O USO DO VISUAL AIDS EM UM CONTEXTO SÓCIO-AMBIENTE INTERATIVO

    Directory of Open Access Journals (Sweden)

    TÂNIA REGINA VIEIRA

    2002-01-01

    Full Text Available The need for audiovisual materials in the EFL classroom arises from the fact that the association of visual aids with the new language makes meaning more direct and quicker to understand than verbal explanation alone, attracts the students' attention and aids concentration. Learning a language through visual aids in collaboration with peers makes the experience more productive and profitable. Therefore, this work discusses how the use of visual aids in a socio-interactive environment can improve students' ability to learn a language.

  20. Handicrafts production: documentation and audiovisual dissemination as sociocultural appreciation technology

    Directory of Open Access Journals (Sweden)

    Luciana Alvarenga

    2016-01-01

    Full Text Available The paper presents the results of a scientific research, technology and innovation project in the creative economy sector, conducted from January 2014 to January 2015, which aimed to document and disseminate the artisans and handicraft production of Vila de Itaúnas, ES, Brazil. The process was developed from initial conversations, followed by the planning and holding of participatory workshops for documentation and audiovisual dissemination around the production of handicrafts and its relation to biodiversity and local culture. The initial objective was to promote spaces for the expression and diffusion of knowledge among and for the local population, also reaching a regional, state and national public. Throughout the process, it was found that the participatory workshops and the collective production of a website for disseminating practices and products contributed to the development and socio-cultural recognition of artisans and craftwork in the region.

  1. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  2. Teleconferences and Audiovisual Materials in Earth Science Education

    Science.gov (United States)

    Cortina, L. M.

    2007-05-01

    Unidad de Educacion Continua y a Distancia, Universidad Nacional Autonoma de Mexico, Coyoacan 04510 Mexico, MEXICO. As stated in the special session description, 21st-century undergraduate education has access to resources and experiences that go beyond university classrooms. In some cases, however, these resources may go largely unused, for reasons such as logistic problems, restricted internet and telecommunication access, and misinformation. We present and comment on our efforts and experiences at the National University of Mexico in a new unit dedicated to teleconferences and audio-visual materials. The unit forms part of the geosciences institutes, located on the central UNAM campus and on campuses in other states. The use of teleconferencing in formal graduate and undergraduate education allows teachers and lecturers to distribute course material as in classrooms. Courses by teleconference require learning and effort from students and teachers without physical contact, but participants have access to multimedia to support their presentations. Well-selected multimedia material allows students to identify and recognize digital information that aids understanding of natural phenomena integral to the Earth sciences. Cooperation with international partnerships providing access to new materials, experiences and field practices will greatly add to our efforts. We will present specific examples of the experiences we have had at the Earth Sciences Postgraduate Program of UNAM with the use of technology in geosciences education.

  3. Users Requirements in Audiovisual Search: A Quantitative Approach

    NARCIS (Netherlands)

    Nadeem, Danish; Ordelman, Roeland J.F.; Aly, Robin; Verbruggen, Erwin; Aalberg, Trond; Papatheodorou, Christos; Dobreva, Milena; Tsakonas, Giannis; Farrugia, Charles J.

    2013-01-01

    This paper reports on the results of a quantitative analysis of user requirements for audiovisual search that allows requirements to be categorised and compared across user groups. The categorisation provides clear directions with respect to the prioritisation of system features

  4. When Library and Archival Science Methods Converge and Diverge: KAUST’s Multi-Disciplinary Approach to the Management of its Audiovisual Heritage

    KAUST Repository

    Kenosi, Lekoko

    2015-07-16

    Libraries and Archives have long recognized the important role played by audiovisual records in the development of an informed global citizen and the King Abdullah University of Science and Technology (KAUST) is no exception. Lying on the banks of the Red Sea, KAUST has a state of the art library housing professional library and archives teams committed to the processing of digital audiovisual records created within and outside the University. This commitment, however, sometimes obscures the fundamental divergences unique to the two disciplines on the acquisition, cataloguing, access and long-term preservation of audiovisual records. This dichotomy is not isolated to KAUST but replicates itself in many settings that have employed Librarians and Archivists to manage their audiovisual collections. Using the KAUST audiovisual collections as a case study the authors of this paper will take the reader through the journey of managing KAUST’s digital audiovisual collection. Several theoretical and methodological areas of convergence and divergence will be highlighted as well as suggestions on the way forward for the IFLA and ICA working committees on the management of audiovisual records.

  5. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  6. Youth Suicide Prevention: Mental Health and Public Health Perspectives. A Presentation and Training Aid.

    Science.gov (United States)

    California Univ., Los Angeles. Center for Mental Health in Schools.

    This presentation and training aid provides a brief overview and discussion of the nature and scope of youth suicide, what prevention programs try to do, a framework for a public health approach, guides to programs and more. This material can be used for both handouts and as overheads for use with presentations. (GCP)

  7. What public school teachers teach about preventing pregnancy, AIDS and sexually transmitted diseases.

    Science.gov (United States)

    Forrest, J D; Silverman, J

    1989-01-01

    Ninety-three percent of public school teachers in five specialties-biology, health education, home economics, physical education and school nursing--who teach grades 7-12 report that their schools offer sex education or AIDS education in some form. Almost all the teachers believe that a wide range of topics related to the prevention of pregnancy, AIDS and other sexually transmitted diseases (STDs) should be taught in the public schools, and most believe these topics should be covered by grades 7-8 at the latest. In practice, however, sex education tends not to occur until the ninth or 10th grades. Moreover, there is often a gap between what teachers think should be taught and what actually is taught. For example, virtually all the teachers say that school sex education should cover sexual decision-making, abstinence and birth control methods, but only 82-84 percent of the teachers are in schools that provide instruction in those topics. The largest gap occurs in connection with sources of birth control methods: Ninety-seven percent of teachers say that sex education classes should address where students can go to obtain a method, but only 48 percent are in schools where this is done. Forty-five percent of teachers in the five specialties currently provide sex education in some form. The messages they most want to give to their students are responsibility regarding sexual relationships and parenthood, the importance of abstinence and ways of resisting pressures to become sexually active, and information about AIDS and other STDs.(ABSTRACT TRUNCATED AT 250 WORDS)

  8. Does audiovisual distraction reduce dental anxiety in children under local anesthesia? A systematic review and meta-analysis.

    Science.gov (United States)

    Zhang, Cai; Qin, Dan; Shen, Lu; Ji, Ping; Wang, Jinhua

    2018-03-02

    To perform a systematic review and meta-analysis on the effects of audiovisual distraction on reducing dental anxiety in children during dental treatment under local anesthesia. The authors identified eligible reports published through August 2017 by searching PubMed, EMBASE, and Cochrane Central Register of Controlled Trials. Clinical trials that reported the effects of audiovisual distraction on children's physiological measures, self-reports and behavior rating scales during dental treatment met the minimum inclusion requirements. The authors extracted data and performed a meta-analysis of appropriate articles. Nine eligible trials were included and qualitatively analyzed; some of these trials were also quantitatively analyzed. Among the physiological measures, heart rate or pulse rate was significantly lower (p=0.01) in children subjected to audiovisual distraction during dental treatment under local anesthesia than in those who were not; a significant difference in oxygen saturation was not observed. The majority of the studies using self-reports and behavior rating scales suggested that audiovisual distraction was beneficial in reducing anxiety perception and improving children's cooperation during dental treatment. The audiovisual distraction approach effectively reduces dental anxiety among children. Therefore, we suggest the use of audiovisual distraction when children need dental treatment under local anesthesia. This article is protected by copyright. All rights reserved.
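
    The pooled heart-rate comparison reported above rests on standard meta-analytic pooling of per-study effects. The sketch below shows a generic fixed-effect, inverse-variance pooling of mean differences; the per-study numbers are invented for illustration, and the review itself may have used a different (e.g. random-effects) model:

        import numpy as np
        from scipy import stats

        def pool_mean_differences(md, se):
            # Fixed-effect inverse-variance pooling of study-level mean differences.
            md, se = np.asarray(md, float), np.asarray(se, float)
            w = 1.0 / se**2                       # study weights
            pooled = np.sum(w * md) / np.sum(w)   # pooled mean difference
            pooled_se = np.sqrt(1.0 / np.sum(w))
            z = pooled / pooled_se
            p = 2.0 * stats.norm.sf(abs(z))       # two-sided p-value
            return pooled, pooled_se, p

        # Hypothetical per-study differences in heart rate (beats/min), distraction minus control.
        print(pool_mean_differences([-4.2, -6.1, -2.8], [1.9, 2.4, 1.5]))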

  9. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Science.gov (United States)

    2010-07-01

    ... obtain copies of USIA audiovisual records transferred to the National Archives of the United States? 1256... United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.98 Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

  10. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality, with special application to audiovisual English-Arabic translation. Ample and thoroughly analysed examples are provided to support decision-making in audiovisual English-Arabic translation. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT). The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when the text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text is achieved when all aspects of cohesive devices are well accounted for pragmatically. Combined with a good amount of psycholinguistic elements, this provides a text with optimal communicative value. A non-text is devoid of such components and is ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product.

  11. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  12. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  13. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  14. Summarizing Audiovisual Contents of a Video Program

    Science.gov (United States)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge, such as news, documentaries and seminars, and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech, while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A bipartite graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that (1) provides a natural visual and audio content overview, and (2) maximizes the coverage of both audio and visual contents of the original video without having to sacrifice either of them.
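
    The bipartite alignment step described above can be sketched as a maximum-weight matching between spoken sentences and visual segments. The score matrix and threshold below are illustrative assumptions; the paper's actual graph construction and edge costs are not specified here:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def align_summaries(score):
            # One-to-one alignment of audio-summary sentences (rows) with visual
            # segments (columns); score[i, j] is the benefit of pairing them,
            # e.g. temporal overlap plus a speaker-face match term.
            rows, cols = linear_sum_assignment(-score)   # SciPy minimises, so negate
            return [(i, j) for i, j in zip(rows, cols) if score[i, j] > 0.0]

        # Toy example: 3 spoken sentences, 4 candidate visual segments.
        score = np.array([[0.9, 0.1, 0.0, 0.2],
                          [0.2, 0.8, 0.3, 0.0],
                          [0.0, 0.4, 0.1, 0.7]])
        print(align_summaries(score))   # -> [(0, 0), (1, 1), (2, 3)]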

  15. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Science.gov (United States)

    2010-07-01

    ... transfer of USIA audiovisual records to the National Archives of the United States? 1256.96 Section 1256.96... Information Agency Audiovisual Materials in the National Archives of the United States § 1256.96 What provisions apply to the transfer of USIA audiovisual records to the National Archives of the United States...

  16. Student performance and their perception of a patient-oriented problem-solving approach with audiovisual aids in teaching pathology: a comparison with traditional lectures

    Directory of Open Access Journals (Sweden)

    Arjun Singh

    2010-12-01

    Full Text Available Arjun Singh, Department of Pathology, Sri Venkateshwara Medical College Hospital and Research Centre, Pondicherry, India. Purpose: We use different methods to train our undergraduates. The patient-oriented problem-solving (POPS) system is an innovative teaching-learning method that imparts knowledge, enhances intrinsic motivation, promotes self-learning, encourages clinical reasoning, and develops long-lasting memory. The aim of this study was to develop POPS in teaching pathology, assess its effectiveness, and assess students' preference for POPS over didactic lectures. Method: One hundred fifty second-year MBBS students were divided into two groups: A and B. Group A was taught by POPS while group B was taught by traditional lectures. Pre- and post-test numerical scores of both groups were evaluated and compared. Students then completed a self-structured feedback questionnaire for analysis. Results: The mean (SD) difference in pre- and post-test scores of groups A and B was 15.98 (3.18) and 7.79 (2.52), respectively. The significance of the difference between the scores of the group A and group B teaching methods was 16.62 (P < 0.0001), as determined by the z-test. Improvement in post-test performance of group A was significantly greater than that of group B, demonstrating the effectiveness of POPS. Students responded that POPS facilitates self-learning, helps in understanding topics, creates interest, and is a scientific approach to teaching. Feedback response on POPS was strong in 57.52% of students, moderate in 35.67%, and negative in only 6.81%, showing that 93.19% of students favored POPS over simple lectures. Conclusion: It is not feasible to enforce the PBL method of teaching throughout the entire curriculum; however, POPS can be incorporated along with audiovisual aids to break the monotony of didactic lectures and as an alternative to PBL. Keywords: medical education, problem-solving exercise, problem-based learning
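
    The z statistic above can be checked approximately from the reported group means and standard deviations. The group sizes are an assumption (the abstract gives 150 students in total but not the split); with n = 75 per group the two-sample z is of the same magnitude as the reported 16.62, and the exact value depends on the true group sizes:

        import math

        m1, sd1, n1 = 15.98, 3.18, 75   # POPS group: mean (SD) of pre/post score gain
        m2, sd2, n2 = 7.79, 2.52, 75    # lecture group (n = 75 per group is assumed)
        z = (m1 - m2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)
        print(round(z, 2))              # ~17.5 under these assumptions; the abstract reports 16.62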

  17. Exposure to audiovisual programs as sources of authentic language ...

    African Journals Online (AJOL)

    Exposure to audiovisual programs as sources of authentic language input and second ... Southern African Linguistics and Applied Language Studies ... The findings of the present research contribute more insights on the type and amount of ...

  18. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native-language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  19. Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability

    NARCIS (Netherlands)

    Francisco, A.A.; Groen, M.A.; Jesse, A.; McQueen, J.M.

    2017-01-01

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a

  20. ETNOGRAFÍA Y COMUNICACIÓN: EL PROYECTO ARCHIVO ETNOGRÁFICO AUDIOVISUAL DE LA UNIVERSIDAD DE CHILE

    Directory of Open Access Journals (Sweden)

    Mauricio Pineda Pertier

    2012-06-01

    This article considers audiovisual ethnography as a communication process, and takes the Audiovisual Ethnographic Archive of Universidad de Chile and its experience in the development of audiovisual ethnographies during the past eight years as a case of analysis. Beyond its use as a data recording technique, the construction and dissemination of messages with social content based on the aforementioned data records constitute a complex praxis of communication production that leads us to critically review the traditional conceptualization of the concept of communication. This work discusses these models, setting forth alternatives from an applied ethno-political perspective in local development contexts.

  1. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    Science.gov (United States)

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implanted users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction) a degraded visual signal was presented, aimed at preventing lipreading of good quality. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in visual-only and in congruent audiovisual modalities was decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions and between

  2. Plan de empresa de una productora audiovisual de nueva creación en la ciudad de Valencia

    OpenAIRE

    BARBA MUÑOZ, SARA

    2013-01-01

    [ES] This work traces the preparation of a business plan for an audiovisual production company located in Valencia. We have devised an audiovisual company aimed especially at offering its products to medium-sized enterprises. We have analysed the audiovisual sector as an entity in constant growth given its relationship with new technologies, which makes it a sector that generates direct employment, especially among young people, who at present are a...

  3. Energy consumption of audiovisual devices in the residential sector: Economic impact of harmonic losses

    International Nuclear Information System (INIS)

    Santiago, I.; López-Rodríguez, M.A.; Gil-de-Castro, A.; Moreno-Munoz, A.; Luna-Rodríguez, J.J.

    2013-01-01

    In this work, energy losses and the economic consequences of the use of small appliances containing power electronics (PE) in the Spanish residential sector were estimated. Audiovisual devices emit harmonics, causing in the distribution system an increase in wiring losses and a greater total apparent power demand. Time Use Surveys (2009–10) conducted by the National Statistical Institute in Spain were used to obtain information about the activities occurring in Spanish homes regarding the use of audiovisual equipment. Moreover, measurements of different types of household appliances available in the PANDA database were also utilized, and the active and non-active annual power demand of these residential-sector devices was determined. Although a single audiovisual device has an almost negligible contribution, the aggregated effect of this type of appliance, whose total annual energy demand is greater than 4000 GWh, can be significant enough to be taken into account in any energy efficiency program. It was shown that a reduction in the total harmonic distortion in the distribution systems from 50% to 5% can reduce energy losses significantly, with economic savings of around several million Euros. - Highlights: • Time Use Surveys provide information about Spanish household electricity consumption. • The annual aggregated energy demand of audiovisual appliances is very significant. • TV use accounts for more than 80% of household audiovisual electricity consumption. • A reduction from 50% to 5% in total harmonic distortion would bring economic savings of around several million Euros. • Stricter regulations regarding harmonic emissions are called for
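
    The direction of the loss reduction claimed above follows from the fact that conductor (I²R) losses grow with the square of the rms current, and harmonic currents inflate the rms value. A rough sketch, assuming a fixed fundamental current and ignoring frequency-dependent resistance and neutral-conductor effects (so not the paper's full model):

        # Relative conductor losses as a function of current THD,
        # using I_rms^2 = I_1^2 * (1 + THD^2) with a fixed fundamental current I_1.
        def relative_losses(thd):
            return 1.0 + thd ** 2

        loss_high = relative_losses(0.50)        # THD = 50 %
        loss_low = relative_losses(0.05)         # THD =  5 %
        print(loss_high, loss_low)               # 1.25 vs 1.0025
        print(1.0 - loss_low / loss_high)        # ~0.20, i.e. roughly 20 % lower wiring losses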

  4. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  5. Spatio-temporal patterns of event-related potentials related to audiovisual synchrony judgments in older adults.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael Julian; Bode, Stefan; McKendrick, Allison Maree

    2017-07-01

    Older adults have altered perception of the relative timing between auditory and visual stimuli, even when stimuli are scaled to equate detectability. To help understand why, this study investigated the neural correlates of audiovisual synchrony judgments in older adults using electroencephalography (EEG). Fourteen younger (18-32 year old) and 16 older (61-74 year old) adults performed an audiovisual synchrony judgment task on flash-pip stimuli while EEG was recorded. All participants were assessed to have healthy vision and hearing for their age. Observers responded to whether audiovisual pairs were perceived as synchronous or asynchronous via a button press. The results showed that the onset of predictive sensory information for synchrony judgments was not different between groups. Channels over auditory areas contributed more to this predictive sensory information than visual areas. The spatial-temporal profile of the EEG activity also indicates that older adults used different resources to maintain a similar level of performance in audiovisual synchrony judgments compared with younger adults. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. INSTRUCTIONAL MATERIALS CATALOG.

    Science.gov (United States)

    Ohio Vocational Agriculture Instructional Materials Service, Columbus.

    THE TITLE, IDENTIFICATION NUMBER, DATE OF PUBLICATION, PAGINATION, A BRIEF DESCRIPTION, AND PRICE ARE GIVEN FOR EACH OF THE INSTRUCTIONAL MATERIALS AND AUDIOVISUAL AIDS INCLUDED IN THIS CATALOG. TOPICS COVERED ARE FIELD CORPS, HORTICULTURE, ANIMAL SCIENCE, SOILS, AGRICULTURAL ENGINEERING, AND FARMING PROGRAMS. AN ORDER FORM IS INCLUDED. (JM)

  7. La direction generale des relations culturelles et l'enseignement du francais sur les ondes (The Office of Cultural Relations and the Teaching of French by Radio-Television).

    Science.gov (United States)

    Francais dans le Monde, 1980

    1980-01-01

    Outlines the means employed by the French Cultural Relations Office to support the teaching of the French language by foreign radio and television networks. Support includes: the assistance of French professionals; the production and publication of audiovisual aids, language courses, and teachers' guides; and equipment and training for its…

  8. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex.

    Science.gov (United States)

    van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer

    2010-02-02

    Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network, adapted stronger to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and

  9. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

    Directory of Open Access Journals (Sweden)

    Blomert Leo

    2010-02-01

    Full Text Available Abstract Background Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. Results The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network, adapted stronger to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. Conclusions These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for

  10. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8–10, and 12–14 month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8–10 month-old infants exhibited audio-visual matching in that neither group exhibited greater looking at the matching monologue. In contrast, the 12–14 month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12–14 month olds did not depend on audio-visual synchrony whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  11. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    Science.gov (United States)

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.

  12. Portraits, publics and politics: Gisele Wulfsohn's photographs of HIV/AIDS, 1987-2007

    Directory of Open Access Journals (Sweden)

    Annabelle Wienand

    2012-01-01

    Full Text Available Contemporary South African documentary photography is often framed in relation to the history of apartheid and the resistance movement. A number of well-known South African photographers came of age in the 1980s and many of them went on to receive critical acclaim locally and abroad. In comparison, Gisele Wulfsohn (1957-2011) has remained relatively unknown despite her involvement in the Afrapix collective and her important contribution to HIV/AIDS awareness and education. In focusing on Wulfsohn's extended engagement with the issue of HIV/AIDS in South Africa, this article aims to highlight the distinctive nature of Wulfsohn's visualisation of the epidemic. Wulfsohn photographed the epidemic long before there was major public interest in the issue and continued to do so for twenty years. Her approach is unique in a number of ways, most notably in her use of portraiture and her documentation of subjects from varied racial, cultural and socio-economic backgrounds in South Africa. The essay tracks the development of the different projects Wulfsohn embarked on and situates her photographs of HIV/AIDS in relation to her politically informed work of the late 1980s, her personal projects and the relationships she developed with non-governmental organisations.

  13. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched NH listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
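
    The redundant-signals analysis mentioned above is commonly checked against Miller's race-model inequality: genuine audio-visual integration is suggested wherever the cumulative RT distribution for AV stimuli exceeds the sum of the unimodal distributions. The sketch below is a generic version of that test on synthetic reaction times, not necessarily the exact analysis pipeline of the study:

        import numpy as np

        def race_model_violation(rt_av, rt_a, rt_v, t_grid=None):
            # Positive values indicate violations of the race-model bound
            # F_AV(t) <= min(F_A(t) + F_V(t), 1), i.e. evidence for integration.
            if t_grid is None:
                t_grid = np.linspace(150.0, 800.0, 66)        # ms, illustrative range
            cdf = lambda rt, t: np.mean(np.asarray(rt)[:, None] <= t, axis=0)
            bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
            return t_grid, cdf(rt_av, t_grid) - bound

        rng = np.random.default_rng(0)                         # synthetic RTs (ms)
        t, violation = race_model_violation(rng.normal(330, 40, 200),
                                            rng.normal(400, 50, 200),
                                            rng.normal(420, 60, 200))
        print(violation.max())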

  14. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes of neuronal response according to a four time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe from 20 subjects at the 8th grade level. Concludes that the habituation of neuronal response shows up in repetitive audio-visual learning and brain hemisphericity can be changed by…

  15. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Full Text Available Michel Foucault teaches that all systematic speech - including speech that claims to be "neutral" or "a disinterested, objective view of what happens" - is in fact a mechanism for articulating knowledge and, in turn, for forming power. The appearance of new technologies, especially digital ones, in the field of audiovisual production provokes an avalanche of statements by filmmakers, essays by academics and predictions by media demiurges.

  16. The protection of minors in the new audiovisual regulation in Spain

    Directory of Open Access Journals (Sweden)

    José A. Ruiz-San Román, Ph.D.

    2011-01-01

    Full Text Available In 2010 the Spanish Parliament approved the General Law on Audiovisual Communication (GLAC), a new regulation which implements the European Audiovisual Media Services Directive (AVMSD). This research analyses how the regulations focused on the protection of children evolved throughout the legislative process, from the first text drafted by the Government to the text finally approved by Parliament. The research deals with the debates and amendments on harmful content which is prohibited or limited. The main objective of the research is to establish the extent to which the new regulation approved in Spain meets the requirements fixed by the AVMSD and the Spanish Government to guarantee child protection.

  17. PUBLIC POLICIES TO R&D IN ROMANIA IN THE CONTEXT OF THE EU STATE AID POLICY

    Directory of Open Access Journals (Sweden)

    Bacila Nicolae

    2015-07-01

    Full Text Available From an economic perspective, the importance of EU state aid policy refers to correcting “market failure” situations that may occur in the economy, aiming at maintaining an undistorted competition in the economic environment. In the context of the Commission focusing its efforts towards promoting R&D investment through Europe 2020 strategy, Romania is a modest innovator and is facing a relatively low level of economic competitiveness. The present paper aims at providing a contribution to the literature on public policies to R&D in the EU, developing both a quantitative and a qualitative analysis of public policies to R&D in Romania in the context of the EU state aid policy. Our research hypothesis considers that public policies to R&D in Romania, as in other Central and Eastern European countries, are following a convergence process with the practices from the EU level. Based on data provided by Eurostat, we have stressed that the existing gap between the national level and the EU level tends to maintain in the state aid field even in the future, in spite of Romanian government sector R&D expenditure tending to converge with the EU level, which highlights the potential of catching up with the European model. We believe that the success of the convergence process will depend in the future, to a large extent, on the implementation of the modernised legal and institutional framework of state aid policy, as well as on the capacity to build consensus by policy makers around the necessity to structure future economic development around R&D investment. In order to successfully address these structural R&D problems, the National Strategy for Research, Development and Innovation aims to establish R&D as engine for increasing economic competitiveness, while at the same time strengthening strategic areas with comparative advantages, supporting public-private partnerships, funding clusters in areas of smart specialisation, developing intellectual

  18. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  19. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  20. Narrativa audiovisual i cinema d'animació per ordinador

    OpenAIRE

    Duran Castells, Jaume

    2009-01-01

    FROM THE THESIS: This doctoral thesis studies the relations between audiovisual narrative and computer-animated cinema, and in that light analyses the feature films produced by Pixar Animation Studios between 1995 and 2006.

  1. [From oral history to the research film: the audiovisual as a tool of the historian].

    Science.gov (United States)

    Mattos, Hebe; Abreu, Martha; Castro, Isabel

    2017-01-01

    An analytical essay on the process of image production, audiovisual archive formation, analysis of sources, and creation of the filmic narrative for the four historiographic films that make up the DVD set Passados presentes (Present pasts) from the Oral History and Image Laboratory of Universidade Federal Fluminense (Labhoi/UFF). Drawing on excerpts from the Labhoi audiovisual archive and the finished films, the article analyzes: how the research problem (the memory of slavery and the legacy of the slave song in the agrofluminense region) led us to the production of images in a research situation; the analytical shift in relation to the cinematographic documentary and the ethnographic film; and the specificities of revisiting the audiovisual collection prompted by the formulation of new research problems.

  2. Source book of educational materials for radiation therapy. Final report

    International Nuclear Information System (INIS)

    Pijar, M.L.

    1979-08-01

    The Source Book is a listing of educational materials in radiation therapy technology. The first 17 sections correspond to the subjects identified in the ASRT Curriculum Guide for schools of radiation therapy. Each section is divided into publications and, in some sections, audiovisuals and training aids. Entries are listed without endorsement.

  3. Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration.

    Science.gov (United States)

    Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong

    2018-02-01

    Recent research has shown that the magnitudes of responses to multisensory information are highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant of cross-modal integration. Here, we investigated the influence that temporal asynchrony has on audiovisual integration in both younger and older adults using event-related potentials (ERP). Our results showed that in the simultaneous audiovisual condition, early integration was similar for the younger and older groups, except that the earliest integration (80-110 ms), which occurred in the occipital region for older adults, was absent in younger adults. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration only occurred in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggested that the audiovisual temporal integration pattern differed between the audition-leading and audition-lagging vision conditions and further revealed the varying effect of temporal asynchrony on audiovisual integration in younger and older adults. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. The effects of semantic congruency: a research of audiovisual P300-speller.

    Science.gov (United States)

    Cao, Yong; An, Xingwei; Ke, Yufeng; Jiang, Jin; Yang, Hanjun; Chen, Yuqian; Jiao, Xuejun; Qi, Hongzhi; Ming, Dong

    2017-07-25

    Over the past few decades, there have been many studies of aspects of brain-computer interfaces (BCIs). Of particular interest are event-related potential (ERP)-based BCI spellers that aim to assist mental typewriting. Nowadays, audiovisual stimuli-based BCI systems have attracted much attention from researchers, and most of the existing studies of audiovisual BCIs were based on a semantically incongruent stimulus paradigm. However, no related study had reported whether there is a difference in system performance or participant comfort between a BCI based on a semantically congruent paradigm and one based on a semantically incongruent paradigm. The goal of this study was to investigate the effects of semantic congruency on system performance and participant comfort in an audiovisual BCI. Two audiovisual paradigms (semantically congruent and incongruent) were adopted, and 11 healthy subjects participated in the experiment. High-density electrical mapping of ERPs and behavioral data were measured for the two stimulus paradigms. The behavioral data indicated no significant difference between the congruent and incongruent paradigms in offline classification accuracy. Nevertheless, eight of the 11 participants reported a preference for the semantically congruent experiment, two reported no difference between the two conditions, and only one preferred the semantically incongruent paradigm. In addition, the results indicated that a higher ERP amplitude was found in the incongruent stimuli-based paradigm. In short, the semantically congruent paradigm offered better participant comfort while maintaining the same recognition rate as the incongruent paradigm. Furthermore, our study suggests that speller paradigm design must take both system performance and user experience into consideration rather than merely pursuing a larger ERP response.

  5. The contribution of perceptual factors and training on varying audiovisual integration capacity.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2018-06-01

    The suggestion that the capacity of audiovisual integration has an upper limit of 1 was challenged in 4 experiments using perceptual factors and training to enhance the binding of auditory and visual information. Participants were required to note a number of specific visual dot locations that changed in polarity when a critical auditory stimulus was presented, under relatively fast (200-ms stimulus onset asynchrony [SOA]) and slow (700-ms SOA) rates of presentation. In Experiment 1, transient cross-modal congruency between the brightness of polarity change and pitch of the auditory tone was manipulated. In Experiment 2, sustained chunking was enabled on certain trials by connecting varying dot locations with vertices. In Experiment 3, training was employed to determine if capacity would increase through repeated experience with an intermediate presentation rate (450 ms). Estimates of audiovisual integration capacity (K) were larger than 1 during cross-modal congruency at slow presentation rates (Experiment 1), during perceptual chunking at slow and fast presentation rates (Experiment 2), and during an intermediate presentation rate post-training (Experiment 3). Finally, Experiment 4 showed a linear increase in K using SOAs ranging from 100 to 600 ms, suggestive of quantitative rather than qualitative changes in the mechanisms of audiovisual integration as a function of presentation rate. The data compromise the suggestion that the capacity of audiovisual integration is limited to 1 and suggest that the ability to bind sounds to sights is contingent on individual and environmental factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
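
    The capacity estimate K is not defined in the record; a common formulation for change-detection tasks of this kind is a Cowan-style estimate, sketched below with hypothetical hit and false-alarm rates. Whether this is the exact estimator used in the study is an assumption.

      import numpy as np

      def capacity_k(hit_rate, fa_rate, n_items):
          # Cowan-style estimate: K = N * (hit rate - false-alarm rate).
          # The study's exact estimator may differ; this is illustrative only.
          return n_items * (hit_rate - fa_rate)

      set_sizes = np.array([2, 4, 6])          # hypothetical numbers of dot locations
      hits = np.array([0.95, 0.80, 0.65])      # hypothetical hit rates
      fas = np.array([0.05, 0.10, 0.15])       # hypothetical false-alarm rates
      print(capacity_k(hits, fas, set_sizes))  # K per set size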

  6. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
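
    A minimal sketch of the kind of Gaussian mixture model described above, assuming each speech token is summarised by one auditory and one visual cue value; the cue names, distributions and numbers are invented for illustration.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Two hypothetical phonological categories, each generating a joint
      # (auditory cue, visual cue) distribution, e.g. VOT and lip aperture.
      cat_a = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(500, 2))
      cat_b = rng.normal(loc=[3.0, 2.0], scale=1.0, size=(500, 2))
      tokens = np.vstack([cat_a, cat_b])

      # Unsupervised learning of the categories from the joint cue statistics.
      gmm = GaussianMixture(n_components=2, covariance_type='full').fit(tokens)

      # Graded category membership for a new, possibly mismatched, AV token.
      print(gmm.predict_proba([[2.8, 0.1]]))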

  7. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  8. La música en la narrativa publicitaria audiovisual. El caso de Coca-Cola

    OpenAIRE

    Sánchez Porras, María José

    2015-01-01

    This research presents an in-depth study of music in audiovisual advertising and its relationship with other sound and visual aspects of advertising. To carry it out, a specific brand, Coca-Cola, was selected because of its globalization and recognition. A new perspective on musical analysis in audiovisual advertising is adopted, addressing the different elements of musical structure through the screening of the advertisements. ...

  9. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that included both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  10. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip-movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech, and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  12. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

    Full Text Available This study aims to further examine the cross-cultural differences in multisensory emotion perception between Western and East Asian people. In this study, we recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions. We then conducted a validation experiment on the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged what kind of emotion the actor was expressing by choosing among 6 options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio track of the same videos without the images, while the task remained the same. We analyzed their categorization responses based on accuracy and confusion matrices and created a controlled audiovisual stimulus set.
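
    A brief sketch of the kind of accuracy and confusion-matrix analysis mentioned above, using invented intended and perceived emotion labels rather than the study's data:

      from sklearn.metrics import accuracy_score, confusion_matrix

      emotions = ['happiness', 'anger', 'disgust', 'sadness', 'surprise', 'fear']
      # Hypothetical labels: what each actor intended vs. what a participant chose.
      intended  = ['happiness', 'anger', 'fear', 'sadness', 'surprise', 'disgust']
      perceived = ['happiness', 'anger', 'surprise', 'sadness', 'surprise', 'disgust']

      print(accuracy_score(intended, perceived))
      print(confusion_matrix(intended, perceived, labels=emotions))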

  13. Erotized, AIDS-HIV information on public-access television: a study of obscenity, state censorship and cultural resistance.

    Science.gov (United States)

    Lukenbill, W B

    1998-06-01

    This study analyzes court records from the county-level obscenity trial in Austin, Texas, of two individuals who broadcast erotized AIDS and HIV safer-sex information on public-access cable television, and from the appeal of the guilty verdict from a Texas appellate court up to the U.S. Supreme Court. The trial and appellate court decisions are reviewed in terms of argument themes, and the nature of the sexual value controversy is outlined. Erotic materials often conflict with broad-based sexual and community values, and providing erotized HIV and AIDS information products can be a form of radical political action designed to force societal change. This study raises questions as to how this trial and this type of informational product might affect the programs and activities of information resource centers, community-based organizations, libraries, and the overall mission of public health education.

  14. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  15. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    Directory of Open Access Journals (Sweden)

    Tobias Søren Andersen

    2015-04-01

    Full Text Available Lesions to Broca’s area cause aphasia characterised by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca’s area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca’s area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca’s aphasia did not experience the McGurk illusion suggesting that an intact Broca’s area is necessary for audiovisual integration of speech. Here we describe a patient with Broca’s aphasia who experienced the McGurk illusion. This indicates that an intact Broca’s area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca’s area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke’s aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca’s aphasia.

  16. Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.

    Science.gov (United States)

    Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc

    2017-09-01

    Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
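
    The additive-model comparison described above (AV versus A + V) can be sketched roughly as follows, assuming baseline-corrected single-trial epochs are available as NumPy arrays; the function and variable names and the analysis window are placeholders, not the study's actual pipeline.

      import numpy as np

      def av_interaction(epochs_av, epochs_a, epochs_v, fs, onset, window=(0.15, 0.25)):
          """Mean AV - (A + V) amplitude in a post-stimulus window (e.g., around P2).
          epochs_*: arrays of shape (n_trials, n_samples) for one electrode;
          fs: sampling rate in Hz; onset: stimulus onset in samples."""
          av = epochs_av.mean(axis=0)
          a_plus_v = epochs_a.mean(axis=0) + epochs_v.mean(axis=0)
          i0 = onset + int(window[0] * fs)
          i1 = onset + int(window[1] * fs)
          return (av - a_plus_v)[i0:i1].mean()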

  17. Cognitive control during audiovisual working memory engages frontotemporal theta-band interactions.

    Science.gov (United States)

    Daume, Jonathan; Graetz, Sebastian; Gruber, Thomas; Engel, Andreas K; Friese, Uwe

    2017-10-03

    Working memory (WM) maintenance of sensory information has been associated with enhanced cross-frequency coupling between the phase of low frequencies and the amplitude of high frequencies, particularly in medial temporal lobe (MTL) regions. It has been suggested that these WM maintenance processes are controlled by areas of the prefrontal cortex (PFC) via frontotemporal phase synchronisation in low frequency bands. Here, we investigated whether enhanced cognitive control during audiovisual WM as compared to visual WM alone is associated with increased low-frequency phase synchronisation between sensory areas maintaining WM content and areas from PFC. Using magnetoencephalography, we recorded neural oscillatory activity from healthy human participants engaged in an audiovisual delayed-match-to-sample task. We observed that regions from MTL, which showed enhanced theta-beta phase-amplitude coupling (PAC) during the WM delay window, exhibited stronger phase synchronisation within the theta-band (4-7 Hz) to areas from lateral PFC during audiovisual WM as compared to visual WM alone. Moreover, MTL areas also showed enhanced phase synchronisation to temporooccipital areas in the beta-band (20-32 Hz). Our results provide further evidence that a combination of long-range phase synchronisation and local PAC might constitute a mechanism for neuronal communication between distant brain regions and across frequencies during WM maintenance.
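
    As a rough illustration of the theta-beta phase-amplitude coupling referred to above, here is a mean-vector-length (Canolty-style) PAC sketch; the study's exact PAC metric, filter design and frequency settings are not given in the record, so these are assumptions.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def bandpass(x, lo, hi, fs, order=4):
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
          return filtfilt(b, a, x)

      def pac_mvl(x, fs, phase_band=(4, 7), amp_band=(20, 32)):
          """Mean-vector-length coupling between theta phase and beta amplitude."""
          phase = np.angle(hilbert(bandpass(x, phase_band[0], phase_band[1], fs)))
          amp = np.abs(hilbert(bandpass(x, amp_band[0], amp_band[1], fs)))
          return np.abs(np.mean(amp * np.exp(1j * phase)))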

  18. Today's and tomorrow's retrieval practice in the audiovisual archive

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2010-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video

  19. Here's What Our Schools Are Doing to Combat Drunk Driving.

    Science.gov (United States)

    Gersten, Leon

    1984-01-01

    A 2-year-old drunk-driving program developed in the Deer Park, New York, public schools has included a week-long campaign addressing Deer Park high school students and an "awareness week" meeting for parents. Both events made extensive use of experts and audiovisual aids. Information for obtaining the latter is provided. (JBM)

  20. Audiovisual materials are effective for enhancing the correction of articulation disorders in children with cleft palate.

    Science.gov (United States)

    Pamplona, María Del Carmen; Ysunza, Pablo Antonio; Morales, Santiago

    2017-02-01

    Children with cleft palate frequently show speech disorders known as compensatory articulation. Compensatory articulation requires a prolonged period of speech intervention that should include reinforcement at home. However, relatives frequently do not know how to work with their children at home. To study whether the use of audiovisual materials especially designed to complement speech pathology treatment in children with compensatory articulation can be effective for stimulating articulation practice at home and consequently enhancing speech normalization in children with cleft palate. Eighty-two patients with compensatory articulation were studied. Patients were randomly divided into two groups. Both groups received speech pathology treatment aimed at correcting articulation placement. In addition, patients from the active group received a set of audiovisual materials to be used at home. Parents were instructed in strategies and ideas for using the materials with their children. Severity of compensatory articulation was compared at the onset and at the end of the speech intervention. After the speech therapy period, the group of patients using audiovisual materials at home demonstrated significantly greater improvement in articulation, as compared with the patients receiving speech pathology treatment on-site without supporting audiovisual materials. The results of this study suggest that audiovisual materials especially designed for practicing adequate articulation placement at home can be effective for reinforcing and enhancing speech pathology treatment of patients with cleft palate and compensatory articulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators.

    Science.gov (United States)

    Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P

    2017-04-01

    A large portion of road traffic crashes occur at intersections because drivers lack the necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with the audio-visual display off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.
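
    The record states only that the blink and beep rates were a function of the approaching car's speed; a linear mapping such as the following conveys the idea, with all parameter values invented for illustration.

      def warning_rate(speed_kmh, min_hz=1.0, max_hz=6.0, max_speed_kmh=80.0):
          """Map the approaching car's speed to a blink/beep repetition rate (Hz).
          The linear form and the limits are assumptions, not the study's values."""
          frac = min(max(speed_kmh / max_speed_kmh, 0.0), 1.0)
          return min_hz + frac * (max_hz - min_hz)

      # A car approaching at 50 km/h from the left would drive the left light/ear:
      print('left', round(warning_rate(50.0), 1), 'Hz')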

  2. Héroes, machos o, simplemente, hombres: una mirada a la representación audiovisual de las (nuevas) masculinidades / Heroes, Machomen or, Just Men: A Look at the Audiovisual Representation of the (New) Masculinities

    Directory of Open Access Journals (Sweden)

    Francisco A. Zurian Hernández

    2016-09-01

    Full Text Available This text examines the evolution of the representation of men in audiovisual media (cinema and television) and how that representation has moved from the patriarchal macho to new masculinities, outside the influence of patriarchal ideology: plural, non-universalist, and offering new models of men. Keywords: gender, men, masculinities, audiovisual, cinema, television.

  3. Alterations in audiovisual simultaneity perception in amblyopia

    OpenAIRE

    Richards, Michael D.; Goltz, Herbert C.; Wong, Agnes M. F.

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged...

  4. Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect.

    Science.gov (United States)

    Van Engen, Kristin J; Xie, Zilong; Chandrasekaran, Bharath

    2017-02-01

    In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants' susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners' McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

  5. HIV/AIDS prevention: knowledge, attitudes and education practices of secondary school health personnel in 14 cities of China.

    Science.gov (United States)

    Chen, J Q; Dunne, M P; Zhao, D C

    2004-01-01

    This study assessed the preparedness of school health personnel to develop and deliver HIV/AIDS prevention education programmes for young people in China. A survey of 653 personnel working in secondary schools in 14 cities was conducted. More than 90% had basic knowledge of ways in which HIV can be transmitted, but knowledge of ways in which the virus is not transmitted needs improvement. Substantial numbers of teachers were not sure whether there was an effective preventive vaccine (42%) or did not know whether AIDS was a curable illness or not (32%). The great majority approved of AIDS prevention programmes in universities (98%) and secondary schools (91%), although fewer (58%) agreed that the topic was appropriate for primary schools. Currently, most classroom activity focuses on teaching facts about HIV/AIDS transmission, while less than half are taught about HIV/AIDS-related discrimination and life skills to reduce peer pressure. Personnel with some prior training in HIV/AIDS education (53%) had better factual knowledge, more tolerant attitudes and more confidence in teaching about HIV/AIDS than those without training. The majority of teachers indicated a need for more resource books, audiovisual products, expert guidance, school principal support and dissemination of national AIDS prevention education guidelines to schools.

  6. Audiovisual biofeedback breathing guidance for lung cancer patients receiving radiotherapy: a multi-institutional phase II randomised clinical trial.

    Science.gov (United States)

    Pollock, Sean; O'Brien, Ricky; Makhija, Kuldeep; Hegi-Johnson, Fiona; Ludbrook, Jane; Rezo, Angela; Tse, Regina; Eade, Thomas; Yeghiaian-Alvandi, Roland; Gebski, Val; Keall, Paul J

    2015-07-18

    There is a clear link between irregular breathing and errors in medical imaging and radiation treatment. The audiovisual biofeedback system is an advanced form of respiratory guidance that has previously been demonstrated to facilitate regular patient breathing. The clinical benefits of audiovisual biofeedback will be investigated in an upcoming multi-institutional, randomised, and stratified clinical trial recruiting a total of 75 lung cancer patients undergoing radiation therapy. To comprehensively perform a clinical evaluation of the audiovisual biofeedback system, a multi-institutional study will be performed. Our methodological framework will be based on the widely used Technology Acceptance Model, which gives qualitative scales for two specific variables, perceived usefulness and perceived ease of use, which are fundamental determinants of user acceptance. A total of 75 lung cancer patients will be recruited across seven radiation oncology departments across Australia. Patients will be randomised in a 2:1 ratio, with 2/3 of the patients being recruited into the intervention arm and 1/3 into the control arm. 2:1 randomisation is appropriate as within the intervention arm there is a screening procedure in which only patients whose breathing is more regular with audiovisual biofeedback will continue to use this system for their imaging and treatment procedures. Patients within the intervention arm whose free breathing is more regular than their breathing with audiovisual biofeedback in the screening procedure will remain in the intervention arm of the study, but their imaging and treatment procedures will be performed without audiovisual biofeedback. Patients will also be stratified by treating institution and by treatment intent (palliative vs. radical) to ensure similar balance in the arms across the sites. Patients and hospital staff operating the audiovisual biofeedback system will complete questionnaires to assess their experience with audiovisual biofeedback. The objectives of this

  7. Audiovisual biofeedback breathing guidance for lung cancer patients receiving radiotherapy: a multi-institutional phase II randomised clinical trial

    International Nuclear Information System (INIS)

    Pollock, Sean; O’Brien, Ricky; Makhija, Kuldeep; Hegi-Johnson, Fiona; Ludbrook, Jane; Rezo, Angela; Tse, Regina; Eade, Thomas; Yeghiaian-Alvandi, Roland; Gebski, Val; Keall, Paul J

    2015-01-01

    There is a clear link between irregular breathing and errors in medical imaging and radiation treatment. The audiovisual biofeedback system is an advanced form of respiratory guidance that has previously been demonstrated to facilitate regular patient breathing. The clinical benefits of audiovisual biofeedback will be investigated in an upcoming multi-institutional, randomised, and stratified clinical trial recruiting a total of 75 lung cancer patients undergoing radiation therapy. To comprehensively perform a clinical evaluation of the audiovisual biofeedback system, a multi-institutional study will be performed. Our methodological framework will be based on the widely used Technology Acceptance Model, which gives qualitative scales for two specific variables, perceived usefulness and perceived ease of use, which are fundamental determinants of user acceptance. A total of 75 lung cancer patients will be recruited across seven radiation oncology departments across Australia. Patients will be randomised in a 2:1 ratio, with 2/3 of the patients being recruited into the intervention arm and 1/3 into the control arm. 2:1 randomisation is appropriate as within the intervention arm there is a screening procedure in which only patients whose breathing is more regular with audiovisual biofeedback will continue to use this system for their imaging and treatment procedures. Patients within the intervention arm whose free breathing is more regular than their breathing with audiovisual biofeedback in the screening procedure will remain in the intervention arm of the study, but their imaging and treatment procedures will be performed without audiovisual biofeedback. Patients will also be stratified by treating institution and by treatment intent (palliative vs. radical) to ensure similar balance in the arms across the sites. Patients and hospital staff operating the audiovisual biofeedback system will complete questionnaires to assess their experience with audiovisual biofeedback. The objectives of this
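
    The 2:1 allocation stratified by institution and treatment intent could be implemented along the following lines; this is a sketch only, and the block size and procedure are assumptions rather than the trial's actual randomisation method.

      import random

      def randomise_2to1(patients, strata=('institution', 'intent'), seed=1):
          """Permuted-block randomisation within strata: two intervention slots
          and one control slot per block of three."""
          rng = random.Random(seed)
          groups, allocation = {}, {}
          for p in patients:
              groups.setdefault(tuple(p[k] for k in strata), []).append(p['id'])
          for ids in groups.values():
              for start in range(0, len(ids), 3):
                  block = ['intervention', 'intervention', 'control']
                  rng.shuffle(block)
                  for pid, arm in zip(ids[start:start + 3], block):
                      allocation[pid] = arm
          return allocation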

  8. Aid is dead. Long live aid!

    Directory of Open Access Journals (Sweden)

    Jean-Michel Severino

    2012-06-01

    Full Text Available The concepts, targets, tools, institutions and modes of operation of official development assistance have been overtaken by the pace of change in a world marked by the combined momentum of demography, technology and economic growth. Aid can, however, recover, as the social consequences of globalization call for new forms of regulation. It will then be necessary to modify and diversify our target-setting processes, to update operating procedures, and to find better ways of measuring policy implementation. Aid volumes will certainly continue to grow, and we must transform the way aid is financed. Public and private aid stakeholders must recognize the importance of these transformations and be ready to support them, by questioning the methods as well as the objectives of the policies they are implementing. Otherwise, they will severely impede the emergence of the policies we need if we are to build a better world.

  9. Educar em comunicação audiovisual: um desafio para a Cuba “atualizada”

    Directory of Open Access Journals (Sweden)

    Liudmila Morales Alfonso

    2017-09-01

    Full Text Available The article analyzes the relevance of education in audiovisual communication in Cuba at a time when updating the economic and social model has become a priority for the Government. The "selective isolation" that for decades favoured an exclusive audiovisual offering concentrated in the state media has been shaken since 2008 by the rise of the "paquete", an informal alternative for distributing content. Audiences thus consume the foreign audiovisual products they prefer, at the times they choose. Yet, faced with this change in patterns of audiovisual consumption, acknowledged in official and press discourse, the government strategy favours protectionist alternatives to the "banal" rather than assuming formal responsibility for empowering citizens.

  10. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    Full Text Available In this article we show how it is possible to use Channel Theory (Barwise and Seligman, 1997) for modeling the process of information extraction realized by audiences of audio-visual contents. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the content as a whole; however, we discuss the advantages of extracting it from the content's constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process is carried out for each channel, we propose a method of representing all the informative values an agent can obtain from a content using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation allows us to reflect the evolution of the informative items through the evolution of the audio-visual content.
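
    The channel matrix proposed above can be pictured as a simple binary table whose rows are source classifications extracted from the content and whose columns are the destination classifications the agent can discriminate; the classifications and values below are invented examples, not the authors' own.

      import numpy as np

      sources = ['dialogue', 'music', 'facial expression', 'camera movement']
      destinations = ['character emotion', 'narrative tension', 'scene location']

      # 1 = the agent can establish a channel from that constituent to that classification.
      channels = np.array([[1, 0, 1],
                           [1, 1, 0],
                           [1, 1, 0],
                           [0, 1, 1]])

      # Total informational items this agent can extract from this content.
      print(int(channels.sum()))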

  11. Use of Audiovisual Media and Equipment by Medical Educationists ...

    African Journals Online (AJOL)

    The most frequently used audiovisual medium and equipment is the transparency on an overhead projector (OHP), while the medium and equipment that is barely used for teaching is computer graphics on a multimedia projector. This study also suggests ways of improving teaching-learning processes in medical education, ...

  12. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    Full Text Available After repeated exposures to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high- or low-tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in a randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. The results clearly show that Bayesian calibration is pitch-specific and is at work behind pitch-insensitive lag adaptation during temporal order judgment of audiovisual stimuli.
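
    Shifts of the point of subjective simultaneity like those described above are usually quantified by fitting a curve to the proportion of "simultaneous" responses across SOAs; a minimal sketch with invented data follows (negative SOA = sound first), and the Gaussian form is only one common choice, not necessarily the authors' model.

      import numpy as np
      from scipy.optimize import curve_fit

      soas = np.array([-300., -200., -100., 0., 100., 200., 300.])      # ms
      p_simult = np.array([0.10, 0.35, 0.75, 0.95, 0.70, 0.30, 0.08])   # hypothetical

      def gauss(soa, pss, sigma, amp):
          return amp * np.exp(-(soa - pss) ** 2 / (2 * sigma ** 2))

      (pss, sigma, amp), _ = curve_fit(gauss, soas, p_simult, p0=(0., 150., 1.))
      print(round(pss, 1), 'ms')   # estimated point of subjective simultaneity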

  13. [Factors associated with condom use and knowledge about STD/AIDS among teenagers in public and private schools in São Paulo, Brazil].

    Science.gov (United States)

    Martins, Laura B Motta; da Costa-Paiva, Lúcia Helena S; Osis, Maria José D; de Sousa, Maria Helena; Pinto-Neto, Aarão M; Tadini, Valdir

    2006-02-01

    This study aimed to compare knowledge about STD/AIDS and identify the factors associated with adequate knowledge and consistent use of male condoms in teenagers from public and private schools in the city of São Paulo, Brazil. We selected 1,594 adolescents ranging from 12 to 19 years of age in 13 public schools and 5 private schools to complete a questionnaire on knowledge of STD/AIDS and use of male condoms. Prevalence ratios were computed with a 95% confidence interval. The score on STD knowledge used a cutoff point corresponding to 50% of correct answers. Statistical tests were chi-square and Poisson multiple regression. Consistent use of male condoms was 60% in private and 57.1% in public schools (p > 0.05) and was associated with male gender and lower socioeconomic status. Female gender, higher schooling, enrollment in private school, Caucasian race, and being single were associated with higher knowledge of STDs. Teenagers from public and private schools have adequate knowledge of STD prevention; however, this knowledge does not translate into the adoption of effective preventive practices. Educational programs and STD/AIDS awareness-raising should be expanded in order to minimize vulnerability.

  14. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

    The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although the brain network is supposed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audio-visual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white-noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and of the right premotor-visual components were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, under white noise, visual speech is perceived through tight negative coupling between the left inferior frontal region and the right anterior cingulate, left anterior insula, and bilateral visual regions, including the right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, and can reflect efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
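
    The single-linkage filtration mentioned above amounts to tracking the connected components of the functional network as the distance threshold grows. A small sketch with synthetic data follows; the region count, threshold values and the correlation-to-distance mapping are assumptions, not the study's settings.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(3)
      ts = rng.standard_normal((120, 10))      # 120 volumes x 10 speech-related regions
      dist = 1.0 - np.corrcoef(ts.T)           # correlation-based distance matrix
      np.fill_diagonal(dist, 0.0)

      # The single-linkage dendrogram encodes the persistence of connected components.
      Z = linkage(squareform(dist, checks=False), method='single')
      for thr in (0.6, 0.8, 1.0):
          labels = fcluster(Z, t=thr, criterion='distance')
          print(thr, len(set(labels)), 'connected components')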

  15. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  16. El Archivo de la Palabra : contexto y proyecto del repositorio audiovisual del Ateneu Barcelonès

    Directory of Open Access Journals (Sweden)

    Alcaraz Martínez, Rubén

    2014-12-01

    Full Text Available This paper reports on the project to digitize the audiovisual archives of the Ateneu Barcelonès, which was launched by that institution's Library and Archive department in 2011. The paper explains the methodology used to create the repository, focusing on the management of analogue files and born-digital materials and the question of author's rights. Finally, it presents the new repository L'Arxiu de la Paraula (the Word Archive) and the new website, @teneu hub, which are designed to disseminate the Ateneu's audiovisual heritage and provide centralized access to the different contents generated by the institution.

  17. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    Science.gov (United States)

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight channel noise-vocoder which shifted the spectral envelope of the speech signal to simulate the properties…

  18. Selected Audio-Visual Materials for Consumer Education. [New Version.

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  19. Elementos diferenciales en la forma audiovisual de los videojuegos. Vinculación, presencia e inmersión. Differential elements in the audiovisual form of the video games. Bonding, presence and immersion.

    Directory of Open Access Journals (Sweden)

    María Gabino Campos

    2012-01-01

    Full Text Available In just over two decades, video games have reached the top positions in the audiovisual sector. A range of technical, economic and social factors has made video games the main entertainment reference for many millions of users. This phenomenon is also due to their creators developing stories with interactive elements designed to elicit a high investment of time from users. We investigate the concepts of bonding, presence and immersion for their implications in the sensory universe of video games, and we review the state of audiovisual research in this field in the first decade of the century.

  20. 77 FR 71593 - Robert Bosch GmbH; Analysis of Agreement Containing Consent Orders To Aid Public Comment

    Science.gov (United States)

    2012-12-03

    ..., Inc., Dkt. No. 9302, 2006 FTC LEXIS 101 (Aug. 20, 2006), rev'd, Rambus Inc. v. F.T.C., 522 F.3d 456 (D... Order to Aid Public Comment Sec. V.B. \\12\\ See, e.g., In re Rambus, Inc., Dkt. No. 9302 (FTC Aug. 2...

  1. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    Science.gov (United States)

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  2. A imagem-ritmo e o videoclipe no audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

    Full Text Available Television can be a meeting place for sound and image, a device that makes the rhythm-image possible, extending Gilles Deleuze's theory of the image, originally proposed for cinema. The rhythm-image would simultaneously combine characteristics of the movement-image and the time-image, embodied in the construction of postmodern images in audiovisual products that are not necessarily narrative yet are popular. Films, video games, music videos and vignettes in which the music drives the images allow a more sensorial reading. The audiovisual as music-image thus opens onto a new form of perception beyond the traditional textual one, the result of the interaction between rhythm, text and device. The time of moving images in the audiovisual is inevitably and primarily tied to sound. These images offer non-narrative possibilities that are realized, most of the time, according to the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Easy Rider (1969), Natural Born Killers (1994) and Run Lola Run (1998).

  3. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    Science.gov (United States)

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low-vision individuals when the auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study investigates the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.
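
    Detection performance in a yes/no task of this kind is often summarised with d'; a sketch with invented hit and false-alarm counts for visual-only versus synchronous audiovisual trials follows (the correction used and the numbers are assumptions, not the study's data).

      from scipy.stats import norm

      def d_prime(hit_rate, fa_rate, n_signal, n_noise):
          # log-linear correction keeps z-scores finite at rates of 0 or 1
          h = (hit_rate * n_signal + 0.5) / (n_signal + 1)
          f = (fa_rate * n_noise + 0.5) / (n_noise + 1)
          return norm.ppf(h) - norm.ppf(f)

      print(d_prime(0.55, 0.10, 80, 80))   # visual stimulus alone
      print(d_prime(0.72, 0.10, 80, 80))   # visual + sound, SOA = 0 ms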

  4. La protección de los menores en la política audiovisual de la Unión Europea : un objetivo prioritario

    Directory of Open Access Journals (Sweden)

    Juan María Martínez Otero

    2012-05-01

    harmful content in the audiovisual environment. Illegal or harmful content can violate children's rights: to the proper development of personality, to honour, to privacy, to self-image, to data protection, to sexual indemnity, etc. Therefore, in many legislations, child protection against abuses of freedom of speech and information has already been recognized as a limit on these liberties. In addition, both countries and supranational institutions have been approving measures to protect minors in the field of media. Since its very beginning, EU audiovisual policy has paid special attention to this issue, in particular in the Television Without Frontiers Directive, renamed after its recent reform as the Audiovisual Media Services Directive. The paper describes the different measures adopted by the EU to protect children in the field of the media, reviewing the legally binding instruments (the Charter of Fundamental Rights and the Directives), the working papers and recommendations (Green Papers and Recommendations), and the specific EU action plans. The author stresses the leadership role that the EU has played in building a public audiovisual space respectful of the rights of everyone, including children and adolescents.

  5. El documento audiovisual en las emisoras de televisión: selección, conservación y tratamiento

    OpenAIRE

    Rodríguez-Bravo, Blanca

    2004-01-01

    Analysis of the audiovisual material's peculiarities and its management in television information units. In accordance with the aims of television information centres (conservation and treatment), the main approaches to the selection of audiovisual messages are considered, and some reflections are offered on their content analysis with a view to retrieval.

  6. Prediction-based Audiovisual Fusion for Classification of Non-Linguistic Vocalisations

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Prediction plays a key role in recent computational models of the brain and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train

  7. Audiovisual integration in depth: multisensory binding and gain as a function of distance.

    Science.gov (United States)

    Noel, Jean-Paul; Modi, Kahan; Wallace, Mark T; Van der Stoep, Nathan

    2018-07-01

    The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction time to asynchronously presented audiovisual targets suggested a temporal window for fast detection, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at the individual subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specify that this relationship is specific to temporally synchronous audiovisual stimulus presentations.

  8. THE AIDS HANDBOOK

    Directory of Open Access Journals (Sweden)

    Z Khan

    1997-12-01

    Full Text Available HIV infection and AIDS is increasingly becoming a major public health problem in our country. Currently, the reported cases represent only the 'tip of the iceberg' of the problem. In view of the fact that no cure or vaccine for the disease has yet been found, spreading knowledge and removing misconceptions is about the only way that AIDS can be effectively tackled. This handbook, developed by Prof. Shankar Chowdhury and associates, seeks to address all levels of medical and non-medical AIDS workers, as well as the layman. It deals with topics ranging from biology of the virus, symptoms and transmission of disease, to prevention, counselling for infected persons and action plan for AIDS education. The biology of the virus and the immune system is described in simple terms, as well as methods of testing for HIV, and what these test results mean. The progression of disease in adults and children, development of symptoms, diagnostic criteria for AIDS, treatment and outcome of disease is dealt with. How AIDS spreads between people, and the health risk for health workers and families is examined. The various ways in which transmission of HIV can be prevented is looked at in detail, including public health measures, national and international action, and ethical and human rights issues involved.

  9. Communication 2.0, visibility and interactivity: fundaments of corporate image of Public Universities in Madrid on YouTube

    OpenAIRE

    Carlos OLIVA MARAÑÓN

    2014-01-01

    In recent years, marketing in Higher Education has attracted growing interest, not only for its economic and commercial value but also for its strategic role in promoting, training and strengthening the "University" brand. The YouTube platform has positioned itself as an audiovisual medium of reference in which users decide what content they want to see, where and when. The objectives of this research are to analyze the audiovisual institutional advertising of the Public Universities of Mad...

  10. Congruent and Incongruent Cues in Highly Familiar Audiovisual Action Sequences: An ERP Study

    Directory of Open Access Journals (Sweden)

    SM Wuerger

    2012-07-01

    Full Text Available In a previous fMRI study we found significant differences in BOLD responses for congruent and incongruent semantic audio-visual action sequences (whole-body actions and speech actions) in bilateral pSTS, left SMA, left IFG, and IPL (Meyer, Greenlee, & Wuerger, JOCN, 2011). Here, we present results from a 128-channel ERP study that examined the time-course of these interactions using a one-back task. ERPs in response to congruent and incongruent audio-visual actions were compared to identify regions and latencies of differences. Responses to congruent and incongruent stimuli differed between 240–280 ms, 340–420 ms, and 460–660 ms after stimulus onset. A dipole analysis revealed that the difference around 250 ms can be partly explained by a modulation of sources in the vicinity of the superior temporal area, while the responses after 400 ms are consistent with sources in inferior frontal areas. Our results are in line with a model that postulates early recognition of congruent audiovisual actions in the pSTS, perhaps as a sensory memory buffer, and a later role of the IFG, perhaps in a generative capacity, in reconciling incongruent signals.

  11. Identification of Depressive Signs in Patients and Their Family Members During iPad-based Audiovisual Sessions.

    Science.gov (United States)

    Smith, Carol E; Werkowitch, Marilyn; Yadrich, Donna Macan; Thompson, Noreen; Nelson, Eve-Lynn

    2017-07-01

    Home parenteral nutrition requires a daily life-sustaining intravenous infusion over 12 hours. The daily intravenous infusion home care procedures are stringent, time-consuming tasks for patients and family caregivers who often experience depression. The purposes of this study were (1) to assess home parenteral nutrition patients and caregivers for depression and (2) to assess whether depressive signs can be seen during audiovisual discussion sessions using an Apple iPad Mini. In a clinical trial (N = 126), a subsample of 21 participants (16.7%) had depressive symptoms. Of those with depression, 13 participants were home parenteral nutrition patients and eight were family caregivers; ages ranged from 20 to 79 years (mean, 48.9 [standard deviation, 17.37] years); 76.2% were female. Individual assessments by the mental health nurse found factors related to depressive symptoms across all 21 participants. A different nurse observed participants for signs of depression when viewing the videotapes of the discussion sessions on audiovisual technology. Conclusions are that depression questionnaires, individual assessment, and observation using audiovisual technology can identify depressive symptoms. Considering the growing provision of healthcare at a distance, via technology, recommendations are to observe and assess for known signs and symptoms of depression during all audiovisual interactions.

  12. [Virtual audiovisual talking heads: articulatory data and models--applications].

    Science.gov (United States)

    Badin, P; Elisei, F; Bailly, G; Savariaux, C; Serrurier, A; Tarabalka, Y

    2007-01-01

    In the framework of experimental phonetics, our approach to the study of speech production is based on the measurement, the analysis and the modeling of orofacial articulators such as the jaw, the face and the lips, the tongue or the velum. Therefore, we present in this article experimental techniques that allow characterising the shape and movement of speech articulators (static and dynamic MRI, computed tomodensitometry, electromagnetic articulography, video recording). We then describe the linear models of the various organs that we can elaborate from speaker-specific articulatory data. We show that these models, that exhibit a good geometrical resolution, can be controlled from articulatory data with a good temporal resolution and can thus permit the reconstruction of high quality animation of the articulators. These models, that we have integrated in a virtual talking head, can produce augmented audiovisual speech. In this framework, we have assessed the natural tongue reading capabilities of human subjects by means of audiovisual perception tests. We conclude by suggesting a number of other applications of talking heads.

  13. Learning cardiopulmonary resuscitation theory with face-to-face versus audiovisual instruction for secondary school students: a randomized controlled trial.

    Science.gov (United States)

    Cerezo Espinosa, Cristina; Nieto Caballero, Sergio; Juguera Rodríguez, Laura; Castejón-Mochón, José Francisco; Segura Melgarejo, Francisca; Sánchez Martínez, Carmen María; López López, Carmen Amalia; Pardo Ríos, Manuel

    2018-02-01

    To compare secondary students' learning of basic life support (BLS) theory and the use of an automatic external defibrillator (AED) through face-to-face classroom instruction versus educational video instruction. A total of 2225 secondary students from 15 schools were randomly assigned to one of the following 5 instructional groups: 1) face-to-face instruction with no audiovisual support, 2) face-to-face instruction with audiovisual support, 3) audiovisual instruction without face-to-face instruction, 4) audiovisual instruction with face-to-face instruction, and 5) a control group that received no instruction. The students took a test of BLS and AED theory before instruction, immediately after instruction, and 2 months later. The median (interquartile range) scores overall were 2.33 (2.17) at baseline and 5.33 (4.66) immediately after instruction (P …). No differences between face-to-face and audiovisual instruction for learning BLS and AED theory were found in secondary school students either immediately after instruction or 2 months later.

  14. On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech

    Directory of Open Access Journals (Sweden)

    Wesley Mattheyses

    2009-01-01

    Full Text Available Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either natural or synthesized speech. However, the perception of mismatches between these two information streams requires experimental exploration since it could degrade the quality of the output. In order to increase the intermodal coherence in synthetic 2D photorealistic speech, we extended the well-known unit selection audio synthesis technique to work with multimodal segments containing original combinations of audio and video. Subjective experiments confirm that the audiovisual signals created by our multimodal synthesis strategy are indeed perceived as being more synchronous than those of systems in which both modes are not intrinsically coherent. Furthermore, it is shown that the degree of coherence between the auditory mode and the visual mode has an influence on the perceived quality of the synthetic visual speech fragment. In addition, the audio quality was found to have only a minor influence on the perceived visual signal's quality.
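
    As a rough illustration of the unit-selection idea the record describes (kept deliberately generic; the feature vectors, costs and weights below are invented), the sketch performs a Viterbi-style search that picks one multimodal unit per target position so that the summed target and join costs are minimised, each unit keeping its original audio/video pairing and hence its intermodal coherence.

        import numpy as np

        def select_units(targets, candidates, w_target=1.0, w_join=1.0):
            # targets: list of target feature vectors
            # candidates: per position, a list of candidate multimodal unit feature vectors
            n = len(targets)
            best = [np.full(len(candidates[i]), np.inf) for i in range(n)]
            back = [np.zeros(len(candidates[i]), dtype=int) for i in range(n)]
            for j, unit in enumerate(candidates[0]):
                best[0][j] = w_target * np.linalg.norm(unit - targets[0])
            for i in range(1, n):
                for j, unit in enumerate(candidates[i]):
                    t_cost = w_target * np.linalg.norm(unit - targets[i])
                    joins = [best[i - 1][k] + w_join * np.linalg.norm(unit - prev)
                             for k, prev in enumerate(candidates[i - 1])]
                    back[i][j] = int(np.argmin(joins))
                    best[i][j] = t_cost + min(joins)
            path = [int(np.argmin(best[-1]))]          # backtrack the cheapest sequence
            for i in range(n - 1, 0, -1):
                path.append(back[i][path[-1]])
            return path[::-1]

        rng = np.random.default_rng(1)
        targets = [rng.normal(size=4) for _ in range(5)]
        candidates = [[rng.normal(size=4) for _ in range(3)] for _ in range(5)]
        print(select_units(targets, candidates))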

  15. 75 FR 45114 - Rite Aid Corporation; Analysis of Proposed Consent Order to Aid Public Comment

    Science.gov (United States)

    2010-08-02

    ..., among other things, approximately 4,900 retail pharmacy stores in the United States (collectively, ``Rite Aid pharmacies'') and an online pharmacy business. The company allows consumers buying products in... obtained by all Rite Aid entities, including, but not limited to, retail pharmacies. The security program...

  16. Alterations in audiovisual simultaneity perception in amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
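
    A minimal sketch of how such a simultaneity window can be estimated: fit a Gaussian-shaped psychometric function to the proportion of "simultaneous" responses across SOAs and read off the SOAs at which the fitted curve crosses 50%. The SOA grid and response proportions below are toy values, not the study's data, and the Gaussian form is only one common modelling choice.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(soa, amp, mu, sigma):
            return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

        soa = np.array([-450, -300, -150, -50, 0, 50, 150, 300, 450])        # ms, negative = auditory lead
        p_simult = np.array([0.05, 0.2, 0.6, 0.9, 0.95, 0.9, 0.75, 0.3, 0.1])  # toy proportions

        (amp, mu, sigma), _ = curve_fit(gaussian, soa, p_simult, p0=[1.0, 0.0, 150.0])

        # SOAs at which the fitted curve crosses 50% "simultaneous" responses
        half_width = sigma * np.sqrt(2 * np.log(amp / 0.5))
        print(f"window: {mu - half_width:.0f} ms (auditory lead) to "
              f"{mu + half_width:.0f} ms (visual lead), width {2 * half_width:.0f} ms")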

  17. Alterations in audiovisual simultaneity perception in amblyopia.

    Directory of Open Access Journals (Sweden)

    Michael D Richards

    Full Text Available Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The signal onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas that of the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.

  18. 76 FR 62066 - Phusion Projects, LLC, et al.; Analysis of Proposed Consent Order To Aid Public Comment

    Science.gov (United States)

    2011-10-06

    ..., including routine uses permitted by the Privacy Act, in the Commission's privacy policy, at http://www.ftc.gov/ftc/privacy.htm . Analysis of Agreement Containing Consent Order To Aid Public Comment The Federal... servings of alcohol in the container. Part II of the proposed order further prohibits, commencing six (6...

  19. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

    Full Text Available Second language acquisition cannot take place without having exposure to language input. With regard to this, the present research aimed at providing empirical evidence about the low and the upper-intermediate language learners' preferred type of audiovisual programs and language proficiency development outside the classroom. To this end, 60 language learners (30 low level and 30 upper-intermediate level) were asked to have exposure to their preferred types of audiovisual program(s) outside the classroom and keep a diary of the amount and the type of exposure. The obtained data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news more. To find out which language proficiency level could improve its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  20. HIV/AIDS reference questions in an AIDS service organization special library.

    Science.gov (United States)

    Deevey, Sharon; Behring, Michael

    2005-01-01

    Librarians in many venues may anticipate a wide range of reference questions related to HIV and AIDS. Information on HIV/AIDS is now available in medical, academic, and public libraries and on the Internet, and ranges from the most complex science to the most private disclosures about personal behavior. In this article, the 913 reference questions asked between May 2002 and August 2004 in a special library in a mid-western community-based AIDS service organization are described and analyzed.

  1. L'Arxiu de la Paraula : context i projecte del repositori audiovisual de l'Ateneu Barcelonès

    Directory of Open Access Journals (Sweden)

    Alcaraz Martínez, Rubén

    2014-12-01

    Full Text Available This paper reports on the project to digitize the audiovisual archives of the Ateneu Barcelonès, which was launched by that institution's Library and Archive department in 2011. The paper explains the methodology used to create the repository, focusing on the management of analogue files and born-digital materials and the question of author's rights. Finally, it presents the new repository L'Arxiu de la Paraula (the Word Archive) and the new website, @teneu hub, which are designed to disseminate the Ateneu's audiovisual heritage and provide centralized access to its different contents.

  2. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical...

  3. Aula virtual y presencial en aprendizaje de comunicación audiovisual y educación Virtual and Real Classroom in Learning Audiovisual Communication and Education

    Directory of Open Access Journals (Sweden)

    Josefina Santibáñez Velilla

    2010-10-01

    Full Text Available The mixed teaching-learning model aims to use information and communication technologies (ICTs) to guarantee an education better adjusted to the European Higher Education Area (EHEA). The following research objectives were formulated: 1) To find out how teacher-training students rate the WebCT virtual classroom as a support for face-to-face teaching. 2) To identify the advantages of the use of WebCT and ICTs by students in the case study "Values and counter-values transmitted by television series watched by children and adolescents". The research was carried out with a sample of 205 students of the Universidad de La Rioja enrolled in the course "Technologies Applied to Education". Qualitative and quantitative content analysis was used for the objective, systematic and quantitative description of the manifest content of the documents. The results show that the communication, content and assessment tools are rated favourably by the students. The conclusion is that WebCT and ICTs support the methodological innovation of the EHEA based on student-centred learning. The students demonstrate their audiovisual competence in the areas of value analysis and of expression through audiovisual documents in multimedia formats, and they bring a new, innovative and creative sense to the educational use of television series.

  4. Lenguaje audiovisual y lenguaje escolar: dos cosmovisiones en la estructuración lingüística del niño Audiovisual language and school language: two cosmo-visions in the structuring of children linguistics

    Directory of Open Access Journals (Sweden)

    Lirian Astrid Ciro

    2007-06-01

    Full Text Available This paper analyzes the complex relationship between audiovisual language (TV being one of its main supports) and school language in order to observe their effects on child language. In this way, audiovisual language is a potentially educational mechanism because it is both a new way of resignifying the world and a mechanism of linguistic socialization. Hence, it is necessary to establish a strategic relationship between audiovisual language and school language. In this way, child language is an intermediate point between these two languages and it allows the child to have open and flexible views of different realities and to be willing to weigh options. In short, it is the structuring of a new society where a multiplicity of codes will contribute to facilitating free expression.

  5. Copyright Question: Using Audiovisual Works in a Satellite-Delivered Program.

    Science.gov (United States)

    Switzer, Jamie S.; Switzer, Ralph V., Jr.

    1994-01-01

    Examines the question of copyright violation of audiovisual materials when used in a Master's of Business Administration (MBA) degree offered via satellite transmission through Colorado State University. Topics discussed include fair use; definitions of literary works, performance, and transmission; and the need to revise the 1976 Copyright Act to…

  6. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    Science.gov (United States)

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high fidelity simulation has become increasingly popular in nursing education to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised. A convenience sample of final year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two tailed, independent group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. Statistically significant results (t = 2.38, p …) were found in relation to the transferability of skills learned in simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge, and this is vital to quality educational outcomes.

  7. Child's dental fear: Cause-related factors and the influence of audiovisual modeling

    Directory of Open Access Journals (Sweden)

    Jayanthi Mungara

    2013-01-01

    Full Text Available Background: Delivery of effective dental treatment to a child patient requires thorough knowledge to recognize dental fear and its management by the application of behavioral management techniques. The Children's Fear Survey Schedule - Dental Subscale (CFSS-DS) helps in the identification of specific stimuli which provoke fear in children with regard to the dental situation. Audiovisual modeling can be successfully used in pediatric dental practice. Aim: To assess the degree of fear provoked by various stimuli in the dental office and to evaluate the effect of audiovisual modeling on dental fear of children using the CFSS-DS. Materials and Methods: Ninety children were divided equally into experimental (group I) and control (group II) groups and were assessed in two visits for their degree of fear and the effect of audiovisual modeling, with the help of the CFSS-DS. Results: The most fear-provoking stimulus for children was injection and the least was to open the mouth and having somebody look at them. There was no statistically significant difference in the overall mean CFSS-DS scores between the two groups during the initial session (P > 0.05). However, in the final session, a statistically significant difference was observed in the overall mean fear scores between the groups (P < 0.01). Significant improvement was seen in group I, while no significant change was noted in the case of group II. Conclusion: Audiovisual modeling resulted in a significant reduction of overall fear as well as specific fear in relation to most of the items. A significant reduction of fear toward dentists, doctors in general, injections, being looked at, the sight, sounds, and act of the dentist drilling, and having the nurse clean their teeth was observed.

  8. Development of an audiovisual teaching material for radiation management education of licenceholders

    International Nuclear Information System (INIS)

    Chae, Sung Ki; Park, Tai Jin; Lim, Ki Joong; Jung, Ho Sup; Jun, Sung Youp; Kim, Jung Keun; Heo, Pil Jong; Jang, Han Ki

    2007-02-01

    This study aims at developing an audiovisual teaching material for elevating licenceholders' abilities in radiation management during the legal education of licenceholders for radiation and radioisotopes. It also aims at developing an educational video material for the RSO covering radiation safety management and RI handling. The role and duties of the licenceholder needed for regulation and management activities in real fields were introduced with reference to the medical field, and the audiovisual teaching material was then developed by presenting examples of management in real fields. The procedures of management were analyzed by reflecting the working tables of the supervisors for radiation management in the licensed companies; the working list was divided into 10 main subjects, and each main subject was then divided into 103 detailed subjects. Based on the detailed subjects, the points of sameness and difference in management across the educational, research and medical fields were analyzed, and the content of the material was then determined according to these points. In addition, the material emphasizes the effect obtained in actual education as compared with existing audiovisual materials. The contents of the material are as follows: regulation of radiation safety, duty of radiation safety management - management of working members, management of facilities, management of sources

  9. Development of an audiovisual teaching material for radiation management education of licenceholders

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Sung Ki; Park, Tai Jin; Lim, Ki Joong; Jung, Ho Sup; Jun, Sung Youp; Kim, Jung Keun; Heo, Pil Jong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Jang, Han Ki [Hanyang Univ., Seoul (Korea, Republic of)

    2007-02-15

    This study aims at developing an audiovisual teaching material for elevating licenceholders' abilities in radiation management during the legal education of licenceholders for radiation and radioisotopes. It also aims at developing an educational video material for the RSO covering radiation safety management and RI handling. The role and duties of the licenceholder needed for regulation and management activities in real fields were introduced with reference to the medical field, and the audiovisual teaching material was then developed by presenting examples of management in real fields. The procedures of management were analyzed by reflecting the working tables of the supervisors for radiation management in the licensed companies; the working list was divided into 10 main subjects, and each main subject was then divided into 103 detailed subjects. Based on the detailed subjects, the points of sameness and difference in management across the educational, research and medical fields were analyzed, and the content of the material was then determined according to these points. In addition, the material emphasizes the effect obtained in actual education as compared with existing audiovisual materials. The contents of the material are as follows: regulation of radiation safety, duty of radiation safety management - management of working members, management of facilities, management of sources.

  10. Creatividad y producción audiovisual en la red: el caso de la serie andaluza

    OpenAIRE

    Jiménez Marín, Gloria; Elías Zambrano, Rodrigo; Silva Robles, Carmen

    2012-01-01

    Web 2.0 has made it possible for young creators to generate audiovisual content and to distribute it through social media, without needing to go through the usual distribution channels, which until now were indispensable. On the other side of the computer or mobile device wait receivers eager to consume video, an activity to which we devote more and more hours… with a fundamental difference: we have stopped watching the television set in order to consume more audiovisual ...

  11. Publicación de materiales audiovisuales a través de un servidor de video-streaming Publication of audio-visual materials through a streaming video server

    Directory of Open Access Journals (Sweden)

    Acevedo Clavijo Edwin Jovanny

    2010-07-01

    Full Text Available This proposal studies several streaming-server alternatives in order to determine the best tool for publishing educational audiovisual material. The most widely used platforms were evaluated, taking into account the features and benefits of each server, among them Helix Universal Server, Microsoft Windows Media Server, Peer Cast and Darwin Server. A server with greater capabilities and benefits was implemented for the publication of videos for academic purposes through the intranet of the Universidad Cooperativa de Colombia, Barrancabermeja branch.
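
    For readers who only need the basic idea of publishing a directory of video files over an intranet (and not any of the servers evaluated above), a minimal standard-library sketch follows; it provides plain progressive download rather than true streaming, and the directory path is hypothetical.

        # Not one of the evaluated servers: a bare-bones illustration with Python's stdlib.
        # SimpleHTTPRequestHandler does not honour HTTP Range requests, so seeking in the
        # player will not work; it only shows the basic publication idea.
        from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler
        from functools import partial

        handler = partial(SimpleHTTPRequestHandler, directory="/srv/videos")  # hypothetical path
        server = ThreadingHTTPServer(("0.0.0.0", 8080), handler)
        print("Serving videos on http://<intranet-host>:8080/")
        server.serve_forever()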

  12. An Annotated Guide to Audio-Visual Materials for Teaching Shakespeare.

    Science.gov (United States)

    Albert, Richard N.

    Audio-visual materials, found in a variety of periodicals, catalogs, and reference works, are listed in this guide to expedite the process of finding appropriate classroom materials for a study of William Shakespeare in the classroom. Separate listings of films, filmstrips, and recordings are provided, with subdivisions for "The Plays"…

  13. Aid and Growth

    DEFF Research Database (Denmark)

    Mekasha, Tseday Jemaneh; Tarp, Finn

    Some recent literature in the meta-analysis category, where results from a range of studies are brought together, throws doubt on the ability of foreign aid to foster economic growth and development. This paper assesses what meta-analysis has to say about the effectiveness of foreign aid in terms of the growth impact. We re-examine key hypotheses, and find that the effect of aid on growth is positive and statistically significant. This significant effect is genuine, and not an artefact of publication selection. We also show why our results differ from those published elsewhere.

  14. Eidos em movimento: da ação à criação do audiovisual Eidos in movement: from the reception to the creation of the audiovisual

    Directory of Open Access Journals (Sweden)

    Regina Rossetti

    2008-11-01

    Full Text Available A partir da célebre imagem bergsoniana do mecanismo cinematográfico da inteligência e da percepção, adentra-se ao mundo das idéias de Platão em busca da distinção entre o contínuo movimento do real e o eidos como representação estável da instabilidade das coisas. Os conceitos de eidos e movimento serão, então, empregados para analisar a concepção, a produção e a recepção de um produto audiovisual. A necessidade de estabilização do movimento real das coisas e dos acontecimentos é presente em vários momentos do processo audiovisual: no roteiro, que usa palavras de uma linguagem que solidifica a fluidez advinda da criação; no storyboard, que representa gráfica e estaticamente as ações mais importantes; no enquadramento das imagens, que separa do fluxo contínuo da realidade, os momentos privilegiados; no fotograma ou nos quadros videográficos, que são imagens imóveis do movimento real; na percepção do espectador, que capta os instantes da realidade para depois alinhá-los; na memória, que seleciona e separa os momentos mais marcantes do que foi percebido; e, finalmente, nos comentários do espectador, que fragmenta e narra as imagens mentais retidas. Eidos in movement: from the reception to the creation of the audiovisual — Starting from Bergson's well-known image of the cinematographic mechanism of intelligence and perception, we enter the world of Plato's ideas in search of the distinction between the continuous movement of the real and of eidos as a stable representation of the instability of things. The concepts of eidos and of movement are then used to analyze the conception, production and reception of an audiovisual product. The need for stabilizing the real movement of things and of events is present at various moments in the audiovisual process, namely: in the script, which uses words of a language that solidifies the fluidity of the creation; in the storyboard, which represents the most important

  15. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    Science.gov (United States)

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (…) visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and was only increased to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then ...

  16. Foreign aid, economic globalization, and pollution

    NARCIS (Netherlands)

    Lim, S.; Menaldo, V.; Prakash, A.

    This paper explores how trade and foreign direct investment (FDI) condition the effect of foreign aid on environmental protection in aid-recipient countries. We suggest that (1) environmental protection should be viewed as a public good and (2) all else equal, resource flows from abroad (via aid,

  17. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

    Full Text Available This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard and high definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high definition video, the model predictions show a correlation of 95% with unknown subjective ratings. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both quality- and impairment-factor-based models are further refined by taking the content type into account. Finally, the different model variants are compared with modeling approaches described in the literature.
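
    To make the impairment-factor idea concrete, the sketch below combines additive coding and transmission impairments into audio, video and audiovisual quality scores. The functional forms and every coefficient are invented for illustration and are not the parameters of the model in the record.

        import math

        def coding_impairment(bitrate_kbps, i_max, decay):
            # Compression impairment shrinks as the bitrate grows (illustrative form).
            return i_max * math.exp(-decay * bitrate_kbps)

        def transmission_impairment(packet_loss_pct, sensitivity):
            # Packet-loss impairment grows with loss rate and saturates at high loss.
            return 100.0 * (1.0 - math.exp(-sensitivity * packet_loss_pct))

        def audiovisual_quality(audio_kbps, video_kbps, loss_pct):
            q_audio = max(0.0, 100.0 - coding_impairment(audio_kbps, i_max=40.0, decay=0.05)
                                     - transmission_impairment(loss_pct, sensitivity=0.3))
            q_video = max(0.0, 100.0 - coding_impairment(video_kbps, i_max=60.0, decay=0.001)
                                     - transmission_impairment(loss_pct, sensitivity=0.5))
            # Video-dominated combination with an interaction term, mirroring the
            # record's finding that video quality predominates overall.
            return 0.25 * q_audio + 0.60 * q_video + 0.15 * (q_audio * q_video) / 100.0

        print(audiovisual_quality(audio_kbps=96, video_kbps=4000, loss_pct=0.2))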

  18. La matriz ficcional como estrategia creativa en la adaptación audiovisual

    Directory of Open Access Journals (Sweden)

    Vicente Peña Timón

    2012-04-01

    Full Text Available The article highlights the capacity of fictional matrices to be used as strategies of narrative discourse when carrying out a cinematographic (audiovisual) adaptation. It begins with an approach to the concept, in order to understand the context and how to benefit from fictional matrices when adapting an original work. It first defines the term audiovisual adaptation, then explains the well-known paradigm of classical structure and, building on it, explains what a fictional matrix is, finally exemplifying how the fictional matrix operates when used as a strategy in audiovisual adaptations.

  19. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely … integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures …
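
    For context, the classical maximum likelihood estimation (MLE) rule that such models build on weights each modality by its reliability (inverse variance); the "early" variant described above applies this kind of combination to a continuous internal representation before categorization. The numbers below are toy values, not fitted parameters from the study.

        import numpy as np

        sigma_a = 2.0            # auditory internal noise (std dev), toy value
        sigma_v = 1.0            # visual internal noise, toy value
        s_a, s_v = 0.8, 0.2      # unimodal internal estimates on a continuous axis

        w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)   # reliability weight
        w_v = 1 - w_a
        s_av = w_a * s_a + w_v * s_v                                 # combined estimate
        sigma_av = np.sqrt(1 / (1 / sigma_a**2 + 1 / sigma_v**2))    # combined noise

        print(f"audiovisual estimate {s_av:.2f}, std {sigma_av:.2f} "
              f"(never worse than the more reliable single modality)")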

  20. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  1. European Union RACE program contributions to digital audiovisual communications and services

    Science.gov (United States)

    de Albuquerque, Augusto; van Noorden, Leon; Badique', Eric

    1995-02-01

    The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near- interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.

  2. ACNP Public Education Program on nuclear medicine and related low-level waste issues. Final technical report, 7 July 1980-30 June 1983

    International Nuclear Information System (INIS)

    The goal of the ACNP Public Education Program was to educate and inform the greatest number of people in the areas of radiation and health and, in turn, to gain the public's understanding of Nuclear Medicine. The related low-level waste issues also were incorporated into the program. To carry out the program's objectives and design to educate the public, the ACNP established a Speaker Bureau which consists of those members of the ACNP and the Society of Nuclear Medicine (SNM) who go through the training seminars, conducted by ACNP, and are available to speak publicly about Nuclear Medicine and related low-level waste issues. In addition, the ACNP developed the necessary audiovisual and printed materials to be used in their own right or as supplemental tools. Promotion of the Speakers Bureau and the audiovisual materials to the media and other various public forums was undertaken

  3. Estudio sobre las relaciones de los productores de contenidos audiovisuales con su público: Youtube Brasil/Study on the relationship of producers of audiovisual content with their audiences: Youtube Brazil

    Directory of Open Access Journals (Sweden)

    Leonardo Soares Silva

    2014-05-01

    Full Text Available This article seeks to contribute to the field of communication sciences with knowledge about an element of great relevance for public relations activities in audiovisual-content social networks. The research focuses on investigating and analyzing the level of interaction between video producers and their audience, specifically on YouTube Brazil, which is to date the largest distributor of audiovisual content in that country. Managing the relationship with the public is in fact part of public relations, but when it comes to audiovisual content, most institutions do not realize the importance of the interaction tools provided by YouTube. The article shows the importance of using these tools for individuals and institutions that have a channel on this social network. Supported by a quantitative approach and a descriptive design, the information gathered through observation is analyzed. As evidence, the article presents the data collected, statistics and an in-depth analysis of 100 YouTube Brazil videos divided into popular categories since the emergence of the network, up to August 2013.

  4. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  5. Da história oral ao filme de pesquisa: o audiovisual como ferramenta do historiador

    Directory of Open Access Journals (Sweden)

    Hebe Mattos

    Full Text Available Abstract: An analytical essay on the process of image production, the formation of an audiovisual archive, the analysis of sources and the creation of the filmic narrative of the four historiographical films that make up the DVD box set Passados presentes, produced by the Oral History and Image Laboratory of the Universidade Federal Fluminense (Labhoi/UFF). Drawing on excerpts from the Labhoi audiovisual archive and from the films themselves, the article analyzes: how the research problem (the memory of slavery and the legacy of slave songs in rural Rio de Janeiro state) led us to produce images in a research setting; the analytical shift in relation to cinematographic documentary and ethnographic film; and the specificities of revisiting an audiovisual collection in the light of newly formulated research problems.

  6. Communication 2.0, visibility and interactivity: fundaments of corporate image of Public Universities in Madrid on YouTube

    Directory of Open Access Journals (Sweden)

    Carlos OLIVA MARAÑÓN

    2014-10-01

    Full Text Available In recent years, marketing in Higher Education has attracted growing interest, not only for its economic and commercial value but also for its strategic role in promoting, training and strengthening the "University" brand. The YouTube platform has positioned itself as an audiovisual medium of reference in which users decide what content they want to see, where and when. The objectives of this research are to analyze the audiovisual institutional advertising of the Public Universities of Madrid and to verify the adequacy of YouTube as a communication channel for these Universities. Degrees adapted to the European Higher Education Area (EHEA) and modern installations make up the identity of these Universities. The results confirm the consolidation of YouTube as a channel for transmitting the audiovisual corporate messages of these Universities to their target audience.

  7. Reflexiones en tiempos de transición : Digitalización audiovisual en la Argentina

    OpenAIRE

    Meirovich, Valeria

    2014-01-01

    This article proposes to analyze the current process of audiovisual digitalization in Argentina, for both television and radio, from the perspective of the political economy of communication and of public policy for the sector. In this sense, it considers the economic, political and ideological frameworks that accompany the digitalization process in our country which, since the enactment of the Audiovisual Communication Services Law No. 26.522, has ...

  8. Audiovisual integration in children listening to spectrally degraded speech.

    Science.gov (United States)

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
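
    A minimal noise-vocoder sketch in the spirit of the stimuli described above (log-spaced analysis bands, Hilbert envelopes, white-noise carriers); these choices are assumptions for illustration, not the study's exact processing chain.

        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def noise_vocode(signal, fs, n_bands, f_lo=100.0, f_hi=7000.0):
            edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
            rng = np.random.default_rng(0)
            out = np.zeros_like(signal, dtype=float)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfilt(sos, signal)
                envelope = np.abs(hilbert(band))            # amplitude envelope of the band
                carrier = rng.standard_normal(len(signal))  # white-noise carrier
                out += sosfilt(sos, envelope * carrier)     # re-filter modulated noise into the band
            return out / (np.max(np.abs(out)) + 1e-12)

        # Example: vocode 1 s of a toy harmonic signal with 8 bands.
        fs = 16000
        t = np.arange(fs) / fs
        speech_like = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)
        vocoded = noise_vocode(speech_like, fs, n_bands=8)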

  9. Compliments in Audiovisual Translation – issues in character identity

    Directory of Open Access Journals (Sweden)

    Isabel Fernandes Silva

    2011-12-01

    Full Text Available Over the last decades, audiovisual translation has gained increased significance in Translation Studies as well as an interdisciplinary subject within other fields (media, cinema studies etc.). Although many articles have been published on communicative aspects of translation such as politeness, only recently have scholars taken an interest in the translation of compliments. This study focuses on both these areas from a multimodal and pragmatic perspective, emphasizing the links between these fields and how this multidisciplinary approach evidences the polysemiotic nature of the translation process. In audiovisual translation both text and image are at play; therefore, the translation of speech produced by the characters may either omit information (because it is provided by visual-gestural signs) or emphasize it. A selection was made of the compliments present in the film What Women Want, our focus being on subtitles which did not successfully convey the compliment expressed in the source text, as well as on the reasons for this, namely differences in register, Culture Specific Items and repetitions. These differences lead to a different portrayal/identity/perception of the main character in the English version (original soundtrack) and the subtitled versions in Portuguese and Italian.

  10. Smoking education for low-educated adolescents: Comparing print and audiovisual messages

    NARCIS (Netherlands)

    de Graaf, A.; van den Putte, B.; Zebregs, S.; Lammers, J.; Neijens, P.

    2016-01-01

    This study aims to provide insight into which modality is most effective for educating low-educated adolescents about smoking. It compares the persuasive effects of print and audiovisual smoking education materials. We conducted a field experiment with 2 conditions (print vs. video) and 3 ...

  11. Panorama del derecho audiovisual francés

    OpenAIRE

    Derieux, E. (Emmanuel)

    1999-01-01

    The article offers an overview of French audiovisual law up to 1998. Its basic features are complexity and instability, due in large part to an inability to keep pace with rapid technological change and to the continual amendments introduced by successive governments of different political orientations. It also reviews some of the most relevant current issues, from the regulation of corporate structures to audiovisual programs and their content...

  12. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    Science.gov (United States)

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory-alone, visual-alone (speechreading), and audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also had fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of the percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supra-modal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.

  13. Explicitation and Addition Techniques in Audiovisual Translation: A Multimodal Approach to English-Indonesian Subtitles

    Directory of Open Access Journals (Sweden)

    Ichwan Suyudi

    2017-12-01

    In audiovisual translation, the multimodality of the audiovisual text is both a challenge and a resource for subtitlers. This paper illustrates how multiple modes provide information that helps subtitlers gain a better understanding of meaning-making practices and that influences their decisions when translating a given verbal text. Subtitlers may explicitate, add to, and condense the texts based on the modes seen in the visual frames. Subtitlers have to consider the distribution and integration of the meanings of these modes in order to create comprehensive equivalence between the source and target texts. Excerpts of visual frames in this paper are taken from the English films Forrest Gump (drama, 1996) and James Bond (thriller, 2010).

  14. Spectacular Attractions: Museums, Audio-Visuals and the Ghosts of Memory

    Directory of Open Access Journals (Sweden)

    Mandelli Elisa

    2015-12-01

    In the last decades, moving images have become a common feature not only in art museums, but also in a wide range of institutions devoted to the conservation and transmission of memory. This paper focuses on the role of audio-visuals in the exhibition design of history and memory museums, arguing that they are privileged means to achieve the spectacular effects and the visitors’ emotional and “experiential” engagement that constitute the main objective of contemporary museums. I will discuss this topic through the concept of “cinematic attraction,” claiming that when embedded in displays, films and moving images often produce spectacular mises en scène with immersive effects, creating wonder and astonishment, and involving visitors on an emotional, visceral and physical level. Moreover, I will consider the diffusion of audio-visual witnesses of real or imaginary historical characters, presented in Phantasmagoria-like displays that simulate ghostly and uncanny apparitions, creating an ambiguous and often problematic coexistence of truth and illusion, subjectivity and objectivity, facts and imagination.

  15. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands objects in the environment by integrating information obtained through the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In the experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in periodic or non-periodic motion; the object then emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we achieved a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) related to the robot motion (efferent signal).
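
    A minimal Python sketch of the “simultaneity” cue described above is given below. It is an assumed illustration, not the authors' implementation; the onset times, tolerance window and function name are hypothetical.

      # Minimal sketch (assumed, not the authors' code): score the correspondence between
      # detected sound onsets and motion onsets of a tracked object by temporal proximity.
      def correspondence_score(sound_onsets, motion_onsets, tolerance=0.1):
          """Fraction of sound onsets with a motion onset within `tolerance` seconds."""
          if not sound_onsets:
              return 0.0
          matched = sum(1 for s in sound_onsets
                        if any(abs(s - m) <= tolerance for m in motion_onsets))
          return matched / len(sound_onsets)

      # Example: a bell shaken by the robot (onsets roughly aligned with its motion)
      # versus a metronome ticking independently in the background.
      bell_motion = [0.50, 1.02, 1.49, 2.01]       # seconds, from image tracking
      bell_sound = [0.52, 1.05, 1.50, 2.03]        # seconds, from the audio track
      metronome_sound = [0.20, 0.70, 1.20, 1.70]

      print(correspondence_score(bell_sound, bell_motion))       # high -> same event source
      print(correspondence_score(metronome_sound, bell_motion))  # low  -> unrelated source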

  16. La estética y narrativa del vídeo musical como representante del discurso audiovisual hipermoderno

    OpenAIRE

    Pedrosa González, Carlos

    2015-01-01

    Since the American cable television network MTV introduced the music video as a transgressive element in the television landscape in 1981, no audiovisual form has been more permeable, striking or innovative in the recent history of the audiovisual. A living representative of postmodern society, heir to the avant-gardes and a popular advertising tool, the music video has achieved what cinema is still trying to establish: reaching the social “mainstream” while instilling ...

  17. How you get HIV/AIDS

    Science.gov (United States)

    Which body fluids contain HIV? HIV is a virus that lives in blood and other fluids in the body. ... answers to any questions you have about HIV/AIDS. Your public health department and health care provider ...

  18. Aid and Growth

    DEFF Research Database (Denmark)

    Tarp, Finn; Mekasha, Tseday Jemaneh

    2013-01-01

    Recent literature in the meta-analysis category, where results from a range of studies are brought together, throws doubt on the ability of foreign aid to foster economic growth and development. This article assesses what meta-analysis has to contribute to the literature on the effectiveness of foreign aid in terms of growth impact. We re-examine key hypotheses, and find that the effect of aid on growth is positive and statistically significant. This significant effect is genuine, and not an artefact of publication selection. We also show why our results differ from those published elsewhere.

  19. Reduced orienting to audiovisual synchrony in infancy predicts autism diagnosis at 3 years of age.

    Science.gov (United States)

    Falck-Ytter, Terje; Nyström, Pär; Gredebäck, Gustaf; Gliga, Teodora; Bölte, Sven

    2018-01-23

    Effective multisensory processing develops in infancy and is thought to be important for the perception of unified and multimodal objects and events. Previous research suggests impaired multisensory processing in autism, but its role in the early development of the disorder is still uncertain. Here, using a prospective longitudinal design, we tested whether reduced visual attention to audiovisual synchrony is an infant marker of later-emerging autism diagnosis. We studied 10-month-old siblings of children with autism using an eye tracking task previously used in studies of preschoolers. The task assessed the effect of manipulations of audiovisual synchrony on viewing patterns while the infants were observing point light displays of biological motion. We analyzed the gaze data recorded in infancy according to diagnostic status at 3 years of age (DSM-5). Ten-month-old infants who later received an autism diagnosis did not orient to audiovisual synchrony expressed within biological motion. In contrast, both low-risk infants and high-risk siblings without autism at follow-up had a strong preference for this type of information. No group differences were observed in terms of orienting to upright biological motion. This study suggests that reduced orienting to audiovisual synchrony within biological motion is an early sign of autism. The findings support the view that poor multisensory processing could be an important antecedent marker of this neurodevelopmental condition. © 2018 Association for Child and Adolescent Mental Health.

  20. Developing an audiovisual notebook as a self-learning tool in histology: perceptions of teachers and students.

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four questionnaires with items about information, images, text and music, and filmmaking were used to investigate students' (n = 115) and teachers' perceptions (n = 28) regarding the development of a video focused on a histological technique. The results show that both students and teachers significantly prioritize informative components, images and filmmaking more than text and music. The scores were significantly higher for teachers than for students for all four components analyzed. The highest scores were given to items related to practical and medically oriented elements, and the lowest values were given to theoretical and complementary elements. For most items, there were no differences between genders. A strong positive correlation was found between the scores given to each item by teachers and students. These results show that both students' and teachers' perceptions tend to coincide for most items, and suggest that audiovisual notebooks developed by students would emphasize the same items as those perceived by teachers to be the most relevant. Further, these findings suggest that the use of video as an audiovisual learning notebook would not only preserve the curricular objectives but would also offer the advantages of self-learning processes. © 2013 American Association of Anatomists.

  1. Implementing "Voix et Images de France, Part 1," in American Schools and Colleges. Section 1, Principles of Audio-Visual Language-Teaching.

    Science.gov (United States)

    Renard, Colette; And Others

    Principles of the "St. Cloud" audiovisual language instruction methodology based on "Le Francais fondamental" are presented in this guide for teachers. The material concentrates on course content, methodology, and application--including criteria for selection and gradation of course content, a description of the audiovisual and written language…

  2. Changing concepts of life-saving procedures in 19th century Polish popular first-aid publications.

    Science.gov (United States)

    Nieznanowska, Joanna

    2006-12-01

    Throughout Europe, before the era of health insurance, access to professional medical help in an emergency was limited, for the vast majority of people, especially for those living outside big cities. This did not improve in the nineteenth century, even though the number of physicians grew rapidly. The industrial revolution added a range of previously unknown threats and, with the dramatic rise in population, many more people could not afford medical help. Therefore, the need for popular, easy-to-understand instructions on first aid became urgent. In Poland, such publications were especially needed because of the country's political situation, which resulted in restricted access to university medical education. During the nineteenth century, approximately 50 works on first aid were published in Polish, with almost 90% addressed to non-physicians. Evaluation of the contents of these books and the instructions which they contained gives a good insight into the evolution that first aid concepts underwent in the nineteenth century. These range from changes in the most urgent threats (from epidemic disorders to industrial accidents and combat injuries) and the accelerating development of medical knowledge (especially the asepsis / antisepsis concept), to the changing spectrum of readers (with growing numbers of those who could read but were otherwise poorly educated).

  3. The Audio-Visual Services in Fifteen African Countries. Comparative Study on the Administration of Audio-Visual Services in Advanced and Developing Countries. Part Four. First Edition.

    Science.gov (United States)

    Jongbloed, Harry J. L.

    As the fourth part of a comparative study on the administration of audiovisual services in advanced and developing countries, this UNESCO-funded study reports on the African countries of Cameroun, Republic of Central Africa, Dahomey, Gabon, Ghana, Kenya, Libya, Mali, Nigeria, Rwanda, Senegal, Swaziland, Tunisia, Upper Volta and Zambia. Information…

  4. The Role of Audiovisual Mass Media News in Language Learning

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2011-01-01

    The present paper focuses on the role of audio/visual mass media news in language learning. In this regard, the two important issues regarding the selection and preparation of TV news for language learning are the content of the news and the linguistic difficulty. Content is described as whether the news is specialized or universal. Universal…

  5. Valores occidentales en el discurso publicitario audiovisual argentino

    Directory of Open Access Journals (Sweden)

    Isidoro Arroyo Almaraz

    2012-04-01

    This article presents an analysis of Argentine audiovisual advertising discourse. It aims to identify the social values that this discourse most predominantly communicates and their possible connection with the values characteristic of postmodern Western society. For this purpose, the frequency of appearance of social values was analyzed in 28 commercials from different advertisers. The “Seven/Seven” model (seven deadly sins and seven cardinal virtues) was used as the analytical framework, since traditional values are considered heirs of the virtues and the sins, which advertising uses to address consumption-related needs. Argentine audiovisual advertising promotes and encourages ideas related to the virtues and sins through the behavior of the characters in audiovisual narratives. The results show a higher frequency of social values characterized as sins than of social values characterized as virtues, since advertising transforms sins into virtues that energize desire, encourage consumption and reinforce brand learning. Finally, on the basis of the results obtained, the article reflects on the social uses and reach of advertising discourse.

  6. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  7. An Experimental Evaluation of Audio-Visual Methods: Changing Attitudes toward Education.

    Science.gov (United States)

    LOWELL, EDGAR L.; AND OTHERS

    Audiovisual programs for parents of deaf children were developed and evaluated. Eighteen sound films and accompanying records presented information on hearing, lipreading and speech, and attempted to change parental attitudes toward children and spouses. Two versions of the films and records were narrated by (1) "stars" who were…

  8. Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech

    Science.gov (United States)

    Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline

    2010-01-01

    Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…

  9. Attention to affective audio-visual information: Comparison between musicians and non-musicians

    NARCIS (Netherlands)

    Weijkamp, J.; Sadakata, M.

    2017-01-01

    Individuals with more musical training repeatedly demonstrate enhanced auditory perception abilities. The current study examined how these enhanced auditory skills interact with attention to affective audio-visual stimuli. A total of 16 participants with more than 5 years of musical training ...

  10. Audio-visual Classification and Fusion of Spontaneous Affect Data in Likelihood Space

    NARCIS (Netherlands)

    Nicolaou, Mihalis A.; Gunes, Hatice; Pantic, Maja

    2010-01-01

    This paper focuses on audio-visual (using facial expression, shoulder and audio cues) classification of spontaneous affect, utilising generative models for classification (i) in terms of Maximum Likelihood Classification with the assumption that the generative model structure in the classifier is ...
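
    Because the abstract is truncated, only the general idea of maximum-likelihood classification with per-class generative models can be illustrated. The Python sketch below is an assumed toy example (1-D Gaussian models per cue, hypothetical class names and parameters), not the authors' model structure or likelihood-space fusion scheme.

      # Hedged sketch only: classify by comparing joint log-likelihoods of an audio cue
      # and a visual cue under simple per-class generative models (here 1-D Gaussians).
      import math

      def gaussian_loglik(x, mean, var):
          return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

      # Hypothetical per-class generative models: (mean, variance) per cue.
      CLASS_MODELS = {
          "positive_affect": {"audio": (0.8, 0.04), "visual": (0.7, 0.05)},
          "negative_affect": {"audio": (0.3, 0.04), "visual": (0.2, 0.05)},
      }

      def classify(audio_cue, visual_cue):
          """Pick the class whose models assign the highest joint log-likelihood."""
          scores = {}
          for label, models in CLASS_MODELS.items():
              a_mean, a_var = models["audio"]
              v_mean, v_var = models["visual"]
              # Fusion in likelihood space: sum the per-cue log-likelihoods.
              scores[label] = (gaussian_loglik(audio_cue, a_mean, a_var)
                               + gaussian_loglik(visual_cue, v_mean, v_var))
          return max(scores, key=scores.get), scores

      print(classify(audio_cue=0.75, visual_cue=0.65))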

  11. Prácticas de activismo audiovisual con objetivo de integración social: el caso del colectivo Cine sin Autor (CsA

    Directory of Open Access Journals (Sweden)

    Ana María Sedeño Valdellós

    2016-03-01

    In recent years there has been a growing tendency for activist collectives to use audiovisual technology: with the aim of transforming realities, audiovisual activism practices have focused on fostering the empowerment of social groups as generators of their own symbolic universes. The text reflects on the audiovisual as a means of social integration from a general perspective and with attention to its precedents. It begins with a theoretical exploration of the concept of audiovisual activism and its educational, activist and social-integration possibilities; it then analyzes the work of the collective Cine sin Autor (CsA), which, through a participatory methodology, builds “open audiovisual processes” with disadvantaged communities and groups.

  12. Using Audiovisual TV Interviews to Create Visible Authors that Reduce the Learning Gap between Native and Non-Native Language Speakers

    Science.gov (United States)

    Inglese, Terry; Mayer, Richard E.; Rigotti, Francesca

    2007-01-01

    Can archives of audiovisual TV interviews be used to make authors more visible to students, and thereby reduce the learning gap between native and non-native language speakers in college classes? We examined students in a college course who learned about one scholar's ideas through watching an audiovisual TV interview (i.e., visible author format)…

  13. Effects of auditory and audiovisual presentations on anxiety and behavioral changes in children undergoing elective surgery.

    Science.gov (United States)

    Hatipoglu, Z; Gulec, E; Lafli, D; Ozcengiz, D

    2018-06-01

    Preoperative anxiety is a critical issue in children and is associated with postoperative behavioral changes. The purpose of the current study was to evaluate how audiovisual and auditory presentations about the perioperative period affect preoperative anxiety and postoperative behavioral disturbances in children undergoing elective ambulatory surgery. A total of 99 patients between the ages of 5 and 12, scheduled to undergo outpatient surgery, participated in this study. Participants were randomly assigned to one of three groups: audiovisual group (Group V, n = 33), auditory group (Group A, n = 33), and control group (Group C, n = 33). During the evaluation, the Modified Yale Preoperative Anxiety Scale (M-YPAS) and the posthospitalization behavioral questionnaire (PHBQ) were used. There were no significant differences in demographic characteristics between the groups. M-YPAS scores were significantly lower in Group V than in Groups C and A, suggesting that audiovisual presentations, in terms of being memorable and interesting, may be more effective in reducing children's anxiety. In addition, both methods appear to be equally effective with respect to postoperative behavioral changes.

  14. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    Science.gov (United States)

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  15. Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection.

    Science.gov (United States)

    Uno, Takeshi; Kawai, Kensuke; Sakai, Katsuyuki; Wakebe, Toshihiro; Ibaraki, Takuya; Kunii, Naoto; Matsuo, Takeshi; Saito, Nobuhito

    2015-01-01

    Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous and incongruent with auditory information. However, the neural basis of this auditory selection from audiovisual information is unknown, whereas integration process of audiovisual inputs is intensively researched. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA if the patient was presented with information of large audiovisual incongruence than of small incongruence, especially if the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when patients incorrectly perceived the auditory information due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.

  16. Automatic summarization of soccer highlights using audio-visual descriptors.

    Science.gov (United States)

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

    Automatic summarization of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that yield quite limited results due to the complexity of the problem and to the limited capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlight summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then combined to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting the shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that demonstrate the validity of the approach.
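
    A toy Python sketch of the general recipe described above (score each shot from audio-visual descriptors, then keep the highest-scoring shots) is given below. The descriptor names, weights and greedy selection rule are assumptions for illustration, not the paper's actual descriptors or rules.

      # Illustrative sketch (assumed names and weights): combine shot-level audio-visual
      # descriptors into a relevance score and keep the best shots within a time budget.
      def shot_relevance(shot, weights=None):
          """Weighted combination of simple shot-level descriptors (values in [0, 1])."""
          weights = weights or {"crowd_noise": 0.4, "commentator_excitement": 0.3,
                                "motion_activity": 0.2, "close_up_ratio": 0.1}
          return sum(weights[name] * shot[name] for name in weights)

      def summarize(shots, max_duration=60.0):
          """Greedy selection of the most relevant shots, then restore temporal order."""
          ranked = sorted(shots, key=shot_relevance, reverse=True)
          summary, total = [], 0.0
          for shot in ranked:
              if total + shot["duration"] <= max_duration:
                  summary.append(shot)
                  total += shot["duration"]
          return sorted(summary, key=lambda s: s["start"])

      shots = [
          {"start": 12.0, "duration": 8.0, "crowd_noise": 0.9, "commentator_excitement": 0.8,
           "motion_activity": 0.7, "close_up_ratio": 0.6},   # likely a goal
          {"start": 40.0, "duration": 10.0, "crowd_noise": 0.2, "commentator_excitement": 0.1,
           "motion_activity": 0.3, "close_up_ratio": 0.1},   # routine midfield play
      ]
      print(summarize(shots, max_duration=15.0))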

  17. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    Directory of Open Access Journals (Sweden)

    John F Magnotti

    2017-02-01

    Audiovisual speech integration combines information from auditory speech (the talker's voice) and visual speech (the talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba)? We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
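
    A highly simplified Python sketch of causal inference over audiovisual speech, in the spirit of (but not identical to) the CIMS model described above, is given below. The prior, likelihood values and decision rule are hypothetical toy numbers chosen only to illustrate the integrate-versus-segregate computation.

      # Simplified, hypothetical sketch: compare the probability that the auditory and
      # visual syllables share one cause (integrate) against the probability that they
      # come from separate sources (rely on the auditory syllable alone).
      def causal_inference(p_av_given_common, p_a_alone, p_v_alone, prior_common=0.7):
          """Posterior probability that auditory and visual speech share one cause."""
          common = prior_common * p_av_given_common
          separate = (1 - prior_common) * p_a_alone * p_v_alone
          return common / (common + separate)

      # Toy numbers for a McGurk-like stimulus (auditory "ba" + visual "ga"): the fused
      # percept "da" is moderately consistent with both cues, so the common-cause
      # likelihood is not negligible despite the incongruence.
      posterior = causal_inference(p_av_given_common=0.15, p_a_alone=0.6, p_v_alone=0.5,
                                   prior_common=0.7)
      percept = "fused ('da')" if posterior > 0.5 else "auditory only ('ba')"
      print(posterior, percept)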

  18. Web-based audiovisual patient information system--a study of preoperative patient information in a neurosurgical department.

    Science.gov (United States)

    Gautschi, Oliver P; Stienen, Martin N; Hermann, Christel; Cadosch, Dieter; Fournier, Jean-Yves; Hildebrandt, Gerhard

    2010-08-01

    In the current climate of increasing awareness, patients are demanding more knowledge about forthcoming operations. Patient information accounts for a considerable part of the physician's daily clinical routine. Unfortunately, only a small percentage of the information is understood by the patient after solely verbal elucidation. To optimise information delivery, different auxiliary materials are used. In a prospective study, 52 consecutive inpatients scheduled for an elective lumbar disc operation were asked to use a web-based audiovisual patient information system. A combination of pictures, text, sound and video about the planned surgical intervention was installed on a tablet personal computer and presented the day before surgery. All patients were asked to complete a questionnaire. Eighty-four percent of all participants found that the audiovisual patient information system led to a better understanding of the forthcoming operation. Eighty-two percent found that the information system was a very helpful preparation before the pre-surgical interview with the surgeon. Ninety percent of all participants considered it meaningful to provide this kind of preoperative education also to patients scheduled to undergo other surgical interventions. Eighty-four percent were altogether "very content" with the audiovisual patient information system and 86% would recommend the system to others. This new approach to patient information had a positive impact on patient education, as is evident from the high satisfaction scores. Because patient satisfaction with the informed consent process and understanding of the presented information improved substantially, the audiovisual patient information system clearly benefits both surgeons and patients.

  19. World AIDS day 1991 observances urge sharing the challenge.

    Science.gov (United States)

    1992-01-01

    The Region of the Americas took part in World AIDS Day 1991, whose theme, "Sharing the Challenge," urged all sectors of society to support AIDS-related education, services, and advocacy. The day of observance was intended to encourage the participation of public, private, nongovernmental, and religious leaders in promoting AIDS-related activities. Although World AIDS Day took place on December 1, activities in the Region of the Americas began in the last week of November and continued into the first week of December. Most of these activities were designed to educate the public on how to avoid infection, as well as to inform and sensitize audiences on the health and social needs of those infected. These activities took the form of press conferences, exhibitions, lectures, public concerts, television ads, etc. One such activity, sponsored by the Pan American Health Organization (PAHO) and held at its headquarters in Washington, D.C., focused on the AIDS crisis and the need for educational activities. The program opened with a speech by Dr. Carlyle Guerra de Macedo, PAHO's director, who warned against complacency in confronting the disease. US Surgeon General Antonia Novello also spoke at the occasion, addressing the growing threat of AIDS among women. Already, 12% of AIDS victims in the US are women, and heterosexual transmission of AIDS will likely continue to increase. Pointing out that a vaccine is not expected in the short term, PAHO's Dr. David Brandling-Bennet stressed that the fight against AIDS depends on disseminating information. The PAHO meeting also featured a panel discussion composed of educators and health professionals, who discussed the educational responsibility of television in transmitting the AIDS-prevention message to the public.

  20. Evaluation of the British Columbia AIDS Information Line.

    Science.gov (United States)

    Parsons, D C; Bell, M A; Gilchrist, L D

    1991-01-01

    We evaluated implementation of the British Columbia AIDS Information Line during its initial 15 weeks of operation. Data collected during daily operation of the line included call frequency, caller characteristics, response patterns, caller concerns and community referrals. Information on activities and resources required to implement the AIDS Line was also assembled. The study concluded that the advertising campaign sponsored by the provincial government and other AIDS-related media events had a strong impact on the frequency of calls made to the AIDS Line. However, the effect of both advertising and media events was of relatively short duration, suggesting that utilization of an AIDS information line is dependent on continuing promotional activities. The evaluation results demonstrate the importance of continuous collection of data on line utilization, to track public awareness of and response to AIDS-related issues, and to facilitate planning of public education.